
Concerns over employees inputting sensitive data into ChatGPT


Legal Expert Raises Confidentiality Concerns Over Employees Inputting Sensitive Data into ChatGPT

Richard Forrest, Legal Director at the UK's leading data breach law firm, Hayes Connor, discusses the potential risks of employees sharing sensitive data with chatbots, and provides actionable tips on how to use ChatGPT safely in the workplace.

In light of recent ChatGPT concerns in the news, Richard Forrest warns that a considerable proportion of the population lacks a proper understanding of how generative AI such as ChatGPT operates. This, he fears, could lead to the inadvertent disclosure of private information and, in turn, breaches of GDPR.

Businesses need compliance measures to handle the input of sensitive data into ChatGPT

As such, he urges businesses to implement compliance measures to ensure employees in all sectors, including healthcare and education, remain compliant.

This comes after a recent investigation by Cyberhaven revealed that sensitive data makes up 11% of what employees copy and paste into ChatGPT. In one instance, the investigation detailed a medical practitioner who entered private patient details into the chatbot, the repercussions of which are still unknown. Richard Forrest says this raises serious GDPR compliance and confidentiality concerns.

Following recent praise for the chatbot's ability to support business growth and efficiency, the number of users across many sectors has increased. However, concerns have arisen after a number of employees were found to be negligently submitting sensitive corporate data to the chatbot, as well as sensitive patient and client information.

As a result of these ongoing privacy fears, several large-scale companies, including JP Morgan, Amazon, and Accenture, have since restricted the use of ChatGPT by employees.

A long way to go in understanding the implications of applications such as ChatGPT

Richard Forrest weighs in on the matter: "ChatGPT, and other similar Large Language Models (LLMs), are still very much in their infancy. This means we are in uncharted territory in terms of business compliance and the regulations surrounding their usage.

"The nature of LLMs, like ChatGPT, has sparked ongoing discussions about the integration and retrieval of data within these systems. If these services do not have appropriate data protection and security measures in place, then sensitive data could become unintentionally compromised.

"The issue at hand is that a significant proportion of the population lacks a clear understanding of how LLMs function, which can result in the inadvertent submission of private information. What's more, the interfaces themselves may not necessarily be GDPR compliant. If company or client data becomes compromised due to its usage, current laws are blurred in terms of which party may be liable.

"Businesses that use chatbots like ChatGPT without proper training and caution may unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage, and legal action. As such, usage as a workplace tool without proper training and regulatory measures is ill-advised.

"The onus is on businesses to take action to ensure regulations are drawn up within their organisation, and to educate employees on how AI chatbots integrate and retrieve data. It is also imperative that the UK engages in discussions for the development of a pro-innovation approach to AI regulation.”

When using LLMs, every company needs to ensure that its corporate and client data is not compromised, and that it does not fall into breach of GDPR.

Richard Forrest provides actionable tips on how businesses and employees can remain vigilant

1. Assume that anything you enter could later be accessible in the public domain

2. Don't input software code or internal data (a minimal screening sketch follows this list)

3. Revise confidentiality agreements to include the use of AI

4. Create an explicit clause in employee contracts

5. Hold sufficient company training on the use of AI

6. Create a company policy and an employee user guide
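
To illustrate tips 1 and 2, the sketch below shows one way a business might screen text before it is pasted into an external chatbot. This is a minimal, hypothetical Python example, not a tool endorsed by Hayes Connor: the flag_sensitive function and its three patterns are assumptions made for illustration, and real confidential data takes far more forms than a regular expression can catch.

    import re

    # Illustrative patterns only: a real policy would need a broader,
    # legally reviewed set of checks (names, addresses, health data, etc.).
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "UK National Insurance number": re.compile(
            r"\b[A-CEGHJ-PR-TW-Z]{2} ?\d{2} ?\d{2} ?\d{2} ?[A-D]\b", re.IGNORECASE
        ),
        "possible API key or secret": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    }

    def flag_sensitive(text: str) -> list[str]:
        """Return warnings for content that should not be pasted into a chatbot."""
        return [
            f"Warning: possible {label} detected; do not submit this text."
            for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)
        ]

    if __name__ == "__main__":
        draft = "Summarise: patient John Doe, NI number AB 12 34 56 C, email j.doe@clinic.example"
        for warning in flag_sensitive(draft):
            print(warning)

A check like this is best treated as a safety net alongside the training and policy measures above, since pattern matching alone cannot recognise every kind of private or confidential information.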

Currently, one of the biggest causes of data breaches in the UK across most sectors is human error. As AI is being utilised more frequently in the corporate sphere, it is important to make training a priority.

By making it clear what constitutes private and confidential data, and by explaining the legal consequences of sharing such sensitive information, you should be able to drastically reduce the risk of this data being leaked.

Hayes Connor Solicitors have significant expertise and experience supporting clients who have had their data exposed due to data protection negligence. They can support claims for privacy loss, identity theft, and financial losses.
