The Use of Generative AI Tools Such as ChatGPT and Google Gemini in the Workplace

28.02.2026 Sevgi Ünsal Özden

Introduction

Generative AI technologies, particularly tools such as ChatGPT, Microsoft Copilot, and Google Gemini, have rapidly transformed the business world. By drawing on patterns identified in existing data to generate original outputs that closely resemble human-produced content, these technologies span an extraordinarily wide range of applications, from text generation and data analysis to software development, music composition, and customer communication. This breadth has elevated them from mere assistive tools to indispensable components of business processes. Indeed, recent research indicates that overall adoption and usage rates recorded two years after ChatGPT's market launch are nearly double those achieved three years after the release of the IBM PC in 1981.[1]

Among the most prevalent workplace use cases of these tools, writing assistance, information retrieval, and obtaining detailed instructions stand out as the leading functions. These functions clearly enhance employee productivity and accelerate both decision-making and access to information.[2] Yet this picture has a less visible side. The rate at which generative AI (GAI) tools are being adopted in workplaces has clearly outpaced organizational capacity to manage them. Employees who integrate these tools into their daily workflows on their own initiative or through personal subscriptions gain operational efficiency, but they simultaneously give rise to new risks in areas such as data security, privacy, and accountability. This phenomenon, referred to in the literature as "shadow AI"[3] (the adoption of AI tools without the approval or knowledge of the IT department or senior management), creates a serious gap in corporate oversight mechanisms. Accordingly, this issue has evolved beyond a matter of technological transformation for employers and has become an important subject that must be addressed from the perspectives of governance, compliance, and risk management.

In this context, the document titled "Use of Generative AI Tools in the Workplace"[4] (the "Guidance"), prepared by the Personal Data Protection Authority (the "Authority"), is designed to provide a general framework for the workplace use of publicly accessible GAI tools offered by third parties, with the aim of raising awareness among companies, drawing attention to potential risks, and promoting informed usage.

This article summarizes the framework set out in the Guidance, examines the key risk areas arising from the use of GAI tools in the workplace as well as the need for management and oversight, and ultimately develops actionable policy and compliance recommendations for companies.


What Does the Authority's Guidance Cover?

The Guidance published by the Authority in March 2026 begins by explaining the nature of GAI systems and their impact on business processes. It emphasizes that these tools are actively used by employees across a wide range of tasks, including text generation, email drafting, report creation, research, summarization, translation, software development, and decision support. Particular attention is drawn to the widespread use of such tools in areas including customer services, marketing and advertising, education, healthcare, law, and software development, extending across virtually all organizational functions.

The most critical part of the Guidance is undoubtedly the second section, which addresses the phenomenon of "shadow AI" and the risks it poses for organizations. The Authority defines shadow AI as the integration of AI tools into business processes by employees without the company's knowledge, approval, or institutional oversight, and underscores that this is no longer a theoretical concern but a concrete reality encountered in everyday workflows. The Guidance does not attribute this phenomenon solely to employee preferences; it also acknowledges that the free or low-cost nature of these tools, their ease of use, the minimal technical knowledge they require, and the absence of clear corporate policy all directly fuel this trend.

Noting that a similar dynamic has long been observed under the concept of "Shadow IT", the Guidance characterizes Shadow AI as a more layered problem than its predecessor. This is because GAI tools do not merely store data; they process it, generate content, and carry the risk of directly influencing the decision-making mechanisms that shape business processes. Furthermore, when organizations lack sufficient visibility into which AI tools are being used for what purposes and what types of data are being shared with them, corporate risk management becomes significantly more challenging. In this context, the Guidance outlines the risks associated with Shadow AI use for companies as follows:

Risks Associated with Shadow AI Use

The Guidance identifies six key risk categories: (i) auditability and accountability, (ii) decision quality and accuracy, (iii) protection of intellectual property and trade secrets, (iv) corporate reputation and trust, (v) information security and cybersecurity, and (vi) protection of personal data.

  • Auditability and accountability risk: The use of GAI tools outside corporate logging and oversight mechanisms makes it difficult to subsequently determine which data was processed for what purpose, complicating both compliance processes and incident response.
  • Decision quality and accuracy risk: GAI outputs that have not been subject to validation processes may be erroneous, misleading, or biased; decisions based on such outputs may lead to serious errors in business processes and carry the risk of producing results inconsistent with the company's quality standards or ethical principles.
  • Intellectual property and trade secret risk: Sharing source code, product designs, business strategies, and information constituting trade secrets with external GAI tools creates the risk that such information may be exposed or made accessible to unauthorized persons.
  • Corporate reputation and trust risk: The use of unverified GAI outputs may undermine the organization's credibility among stakeholders through erroneous communication or low-quality content generation.
  • Information and cybersecurity risk: GAI tools operating outside institutional oversight expand organizations' attack surface through insecure interfaces, personal devices, or unmanaged integrations, increasing the risk of unauthorized access, data loss, and malicious software.
  • Personal data protection risk: Uncontrolled GAI use may pave the way for unlawful processing of personal data, unauthorized access, or use beyond the original purpose. The risk that personal data or sensitive corporate information shared through prompts may be reflected in generated outputs and thereby become accessible to third parties is also a significant concern in this regard.

With regard to personal data, the Guidance specifically emphasizes that the Personal Data Protection Law No. 6698 (KVKK) establishes a general legal framework applicable to all instances of personal data processing, regardless of the technology involved, and that the principles, rules, and obligations of the KVKK must be observed in processing activities carried out through GAI tools.

This approach is complemented by the Guide on Generative Artificial Intelligence and the Protection of Personal Data (15 Questions)[5] published by the Authority on the same subject in November 2025. This Guide addresses the topic in considerably greater depth, noting that the risks posed by Shadow AI use in the workplace are not limited to data breaches; it also identifies complex compliance challenges relating to the inability to establish lawful processing conditions, lack of transparency, cross-border data transfers, and the provenance and accuracy of datasets used to train AI models. The Guide further addresses risks such as "hallucination", "deepfake", and algorithmic bias, stressing the need to question the reliability of generative AI outputs.

Policy and Compliance Recommendations for Companies

The widespread adoption of GAI tools and the risks they carry require companies to fundamentally redefine their approach to these technologies. However, blanket prohibition is neither practical nor effective; it would simply push employees toward using these tools outside any oversight, worsening the very uncontrolled usage it seeks to prevent. Companies must therefore embrace an approach that guides rather than prohibits, draws clear boundaries, and builds awareness. In this context, five key action areas stand out for companies:

  • Establishing a corporate policy framework: Every organization should first develop a clear and accessible internal policy on GAI use. This policy should set out which tools may be used, for what purposes and under what conditions, what types of data may be entered into these tools, and how generated outputs are to be assessed. Clearly defining "red lines" (particularly with regard to personal data, trade secrets, and sensitive corporate information) is critical to preventing uncontrolled usage. Rather than adopting a one-size-fits-all model, companies should tailor their approach to their specific field of activity and risk profile.
  • Protection of personal data and sensitive information: Employees must be made aware of what information they may and may not share when interacting with GAI tools. In particular, principles such as anonymization, generalization, and data minimization should be applied with respect to personal data. The data processing practices, retention periods, and third-party data sharing arrangements of the tools in use should also be carefully assessed.
  • Data security measures: Access and security controls must be implemented to enable institutional management of GAI use. Approaches that ensure employees can only access tools for which the company has defined terms of use will help prevent uncontrolled usage. Restricting network-level access to external platforms, limiting GAI tool access to corporate devices only, and adopting role-based approaches that define which employee groups may use specific tools are examples of complementary measures in this area.
  • Awareness, training, and feedback mechanisms: Raising employee awareness of both risks and proper usage through training programs, guidance materials, and regular communications is essential, as is monitoring real-world practices through feedback channels. This approach will allow companies to ensure the sustainability of their institutional framework for GAI use.
  • Human oversight: GAI outputs should be positioned as tools that support decision-making processes rather than replace final decisions. Against the risk of erroneous, misleading, or "persuasive but wrong" (hallucinatory) outputs, companies should adopt an approach that involves checking content for accuracy and contextual appropriateness.
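The data-minimization principle recommended above can be illustrated with a short sketch that strips obvious personal identifiers from a prompt before it leaves the organization. The `redact_prompt` helper and its regular-expression patterns below are hypothetical illustrations, not part of the Guidance; a production deployment would rely on dedicated PII-detection tooling rather than ad hoc patterns.

```python
import re

# Hypothetical illustration of the "data minimization" principle:
# replace recognizable personal identifiers with placeholders before a
# prompt is sent to an external generative AI tool. The patterns are
# deliberately simple; real systems need far more robust detection.
# Note: insertion order matters here; the 11-digit national ID pattern
# must run before the broader phone pattern, which would also match it.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[TR-ID]": re.compile(r"\b\d{11}\b"),  # Turkish national ID: 11 digits
    "[PHONE]": re.compile(r"(?<!\w)\+\d[\d\s-]{7,}\d\b"),
}

def redact_prompt(prompt: str) -> str:
    """Substitute each matched identifier with its placeholder label."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

redacted = redact_prompt(
    "Draft a reply to ayse.yilmaz@example.com, ID 12345678901, phone +90 532 000 0000."
)
```

A sketch like this only mitigates the prompt-side leakage risk; it does not address the Guidance's other concerns, such as output accuracy or auditability.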

Conclusion

GAI tools have set in motion an irreversible transformation in the business world. As applications such as ChatGPT, Microsoft Copilot, and Google Gemini become rapidly integrated into corporate workflows, the real danger lies not in the technology itself but in its mismanagement. Companies' approaches should therefore be built not on prohibition but on clear rules, effective oversight mechanisms, and a robust culture of institutional awareness. Supporting GAI use with corporate policy, data protection principles, and human oversight will both enable effective risk management and allow organizations to leverage the opportunities this technology offers in a sustainable and secure manner.

References
  • [1] “The State of Generative AI Adoption in 2025”, Federal Reserve Bank of St. Louis, https://www.stlouisfed.org/on-the-economy/2025/nov/state-generative-ai-adoption-2025 (Access Date: 29.03.2026); “Workplace Adoption of Generative AI”, National Bureau of Economic Research, https://www.nber.org/digest/202412/workplace-adoption-generative-ai?page=1&perPage=50 (Access Date: 29.03.2026).
  • [2] Ibid.
  • [3] “What is shadow AI?”, IBM, https://www.ibm.com/think/topics/shadow-ai (Access Date: 29.03.2026).
  • [4] “Use of Generative AI Tools in the Workplace”, Personal Data Protection Authority, 05.03.2026, https://www.kvkk.gov.tr/Icerik/8674/is-yerlerinde-uretken-yapay-zeka-araclarinin-kullanimi (Access Date: 29.03.2026).
  • [5] “The Guide on Generative Artificial Intelligence and the Protection of Personal Data (15 Questions)”, Personal Data Protection Authority, 24.11.2025, https://www.kvkk.gov.tr/Icerik/8547/uretken-yapay-zeka-ve-kisisel-verilerin-korunmasi-rehberi-15-soruda (Access Date: 29.03.2026).

All rights of this article are reserved. This article may not be used, reproduced, copied, published, distributed, or otherwise disseminated without quotation or Erdem & Erdem Law Firm's written consent. Any content created without citing the resource or Erdem & Erdem Law Firm’s written consent is regularly tracked, and legal action will be taken in case of violation.
