ChatGPT: A Grey Zone Between Privacy, Cybersecurity, Human Rights and Innovation

30.04.2023 Tilbe Birengel

Introduction

ChatGPT, a large language model (LLM) developed by OpenAI, is an artificial intelligence (AI) system based on deep learning techniques and neural networks for natural language processing.[1]

ChatGPT can process and generate human-like text, chat, analyse and answer follow-up questions, and acknowledge errors. It can also review and improve code in programming languages such as Python in a matter of seconds, as illustrated in the sketch below. With the release of an advanced model, GPT-4, in March 2023, it has achieved higher performance and greater functionality in many respects, including problem solving and image processing.[2]
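To make this capability concrete, the sketch below shows one way a developer might ask the model to improve a small Python function through OpenAI's API. It is a minimal illustration only, assuming the pre-1.0 "openai" Python package and a valid API key; the gpt-4 model name, the sample function and the prompt wording are assumptions made for the example, not details taken from this article.

import os
import openai  # assumes the pre-1.0 "openai" package (e.g. 0.27.x)

# Authentication via an environment variable; a real key is required to run this.
openai.api_key = os.environ["OPENAI_API_KEY"]

# A deliberately simple function to be reviewed; purely illustrative.
snippet = '''
def average(nums):
    total = 0
    for n in nums:
        total = total + n
    return total / len(nums)
'''

# Ask the chat model to improve the code and explain its changes.
response = openai.ChatCompletion.create(
    model="gpt-4",  # model name assumed for the example
    messages=[
        {"role": "system", "content": "You are a careful Python code reviewer."},
        {"role": "user", "content": "Improve this function and explain the changes:\n" + snippet},
    ],
)

print(response["choices"][0]["message"]["content"])

In practice, any improved version returned by the model would still need human review, and anything pasted into such a request may be retained by the provider, which is precisely where the risks discussed below arise.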

AI models are expected to open up many opportunities by increasing productivity, creating new search engine architectures, and reducing costs in healthcare, finance and public administration.[3] The rapid development in this area by OpenAI and its competitors, such as Google and Meta, is exciting to watch, but it raises major concerns, which are discussed below.


Potential Risks of ChatGPT

Given that LLMs are a form of “generative AI”, these models generate their output from the training data at hand, which may include copyrighted material as well as confidential, biased or discriminatory information.[4] It also means that any data fed into the system may become training material for future models.

The massive collection and processing of data for AI training does not comply with applicable privacy rules such as the GDPR,[5] as it lacks transparency and legal justification.[6] Furthermore, the chatbot does not provide an immediate option to remove previously stored data. It is also unclear whether the data collected will be shared with OpenAI’s other tools, leaving it open to information hazards.[7]

The unreliability of the responses generated by ChatGPT leads to inaccuracies in data processing and increases the likelihood of misinformation.[8] Widespread use of such models could trigger disinformation and manipulation at a level where users cannot distinguish whether a text is human- or AI-generated, real or a deepfake.[9] This could lead to widespread deception in media, education and politics.

ChatGPT’s capacity for fluent text generation and social engineering makes it much easier to use for malicious purposes such as phishing. Europol’s recent report on the subject shows that preparing personalised scams has become much easier and faster.[10] By imitating a person’s speech style and producing authentic-sounding fraudulent text, the tool makes it easier to convince victims that they are in contact with their loved ones. ChatGPT’s ability to generate code in a short amount of time also facilitates the development of malicious software for cyberattacks. It provides a valuable tool to criminal actors with little technical knowledge and poses a major threat to cybersecurity.

There are also ethical concerns, such as discrimination, exclusion and toxicity, about the outputs of AI technology.[11] As AI models are trained on existing online data, which largely reflects dominant social groups, their outputs are expected to raise issues of diversity and inclusion.[12] Although developers are working on safeguards to minimise such results, prompt engineering, namely rephrasing the way a question is asked, seems to be effective in bypassing ChatGPT’s safety measures.

Accountability and explainability are further concerns in the use of LLMs.[13] The authenticity of the tool’s output is highly controversial, as ChatGPT is not yet able to respect intellectual property rights in its responses. The neural networks and internal operating principles of the technology are complex and opaque. As the impact of LLMs grows significantly, the accountability and liability of their developers and operators remain an important issue for policymakers to address.

LLMs such as ChatGPT are likely to have other impacts at a societal level. By automating some tasks and jobs, such as translation and software code development, they could lead to job displacement in some areas.[14]

Recent Responses to ChatGPT from Regulators and Private Sector

At the time of drafting this article, public discomfort with LLMs is higher than ever.

The Future of Life Institute published an open letter outlining the risks that human-competitive intelligence poses to society and calling for a pause of at least six months in the training of AI systems more powerful than GPT-4, arguing that existing safety protocols are inadequate. The letter was signed by public figures such as Steve Wozniak and Elon Musk, in addition to many scholars and technologists.[15]

At the end of March 2023, the Italian Data Protection Authority temporarily restricted ChatGPT, citing privacy breach concerns.[16] The lack of legal basis and information on the mass collection and processing of personal data for the purpose of training AI algorithms was criticised. Inaccuracies in data processing and the lack of an age verification mechanism to exclude use by children were also raised as concerns. OpenAI has until the end of April 2023 to fulfil the measures requested by the Italian Data Protection Authority, in order to continue operating in Italy.

Meanwhile, the Spanish and French data protection authorities have launched investigations to review potential data breaches by ChatGPT, and the European Data Protection Board has set up a task force for EU-wide cooperation.[17]

Private industry is taking its own measures and imposing restrictions. Since it was reported that some Samsung employees fed ChatGPT with sensitive data, including source code, the need to educate employees about the risks of the tool has become more apparent. Major companies such as Verizon, Amazon, Bank of America Corp, Goldman Sachs, Citigroup Inc and Deutsche Bank AG have banned the use of ChatGPT in the workplace, while others are drafting policies on its acceptable use.

Conclusion

For many tasks, ChatGPT seems to be a useful tool. It is likely to increase opportunities and productivity in various fields. However, users need to be aware of the risks of such AI technologies and avoid feeding them with sensitive data until adequate safeguards are in place.

In terms of privacy, the tool lacks transparency and legal grounds. Unless users explicitly opt out, any data provided to the chatbot will be used for the LLM’s training purposes. The same data could later appear in outputs generated for other users.

If used on a large scale, the chatbot could lead to misinformation and manipulation due to the unreliability of its answers, as users might not be able to distinguish whether an output is real or a deepfake. ChatGPT also appears to be a serious threat to cybersecurity, given its ability to generate text and facilitate the creation of malicious software code in a short period of time.

Although ChatGPT developers are working on some safeguards, ethical concerns remain. These include discrimination, exclusion and toxicity in the chatbot's output. Accountability and explainability of this complex technology are further concerns.

References

All rights of this article are reserved. This article may not be used, reproduced, copied, published, distributed, or otherwise disseminated without quotation or Erdem & Erdem Law Firm's written consent. Any content created without citing the resource or Erdem & Erdem Law Firm’s written consent is regularly tracked, and legal action will be taken in case of violation.
