5 Ways ChatGPT Could Put Your Data (and Job) at Risk

There’s a Dark Side to ChatGPT You Need to Know About

Using ChatGPT is as easy as typing a question, but convenience often comes at a cost. Have you considered what data you’re revealing and how it might be used behind the scenes?

The truth is, every time you share information with a public Artificial Intelligence (AI) system like ChatGPT, you’re essentially publishing it to the cloud. That clever email you asked it to draft? It’s stored somewhere. The confidential project details you shared while brainstorming ideas? Probably floating around in a server log.

And it gets worse. Did you know that a bug back in March 2023 exposed user conversations, payment details, and even chat titles? If that doesn’t send shivers down your spine, remember that hackers are always lurking. And third-party platforms like ChatGPT, no matter how robust their security measures, are prime targets for these cyber leeches.

You might also wonder, ‘What happens to all this data once it’s collected?’ Let’s break it down.

1.  Your Conversations Are Stored Somewhere on a Server

Here’s the first kicker: if you’ve ever used ChatGPT or any similar tool, you should know that your conversations are stored. According to OpenAI, user inputs are kept for 30 days, even if you’re in “incognito mode”. And during this window, authorised personnel might access them for “debugging” or “fine-tuning the model”.

But let’s be real, how many of us read the fine print anyway? Until recently, opting out of data sharing required filling out a Google Form buried deep within OpenAI’s FAQs. Thankfully, they’ve added a toggle feature now. But unless you use it, everything you type could be feeding the AI’s training data.

Oh, and before you say, “But it’s anonymised!”, keep in mind that even anonymised data isn’t bulletproof. A slip of personal information here or a unique detail there can make your inputs identifiable.

2.  Hackers Love a Good Vulnerability

The March 2023 bug mentioned earlier was a stark reminder that no platform is invincible. While OpenAI has tightened security since then, the threat of cyberattacks still looms large.

Just last year, Samsung made headlines when employees fed sensitive code into ChatGPT, unknowingly leaking trade secrets. Samsung’s response? A full ban on ChatGPT, paired with threats of disciplinary action for non-compliance.

And they’re not alone. Financial institutions like JPMorgan, Citigroup, and Bank of America have implemented strict rules around AI usage. Even Apple has barred employees from using ChatGPT. Why? Because the risk of regulatory breaches or data exposure is too high.

3.  Your Data Trains the Model (Unless You Opt Out)

This is where things get murky. Unless you explicitly opt out, your conversations are used to train ChatGPT. That quirky idea you brainstormed with the AI? It’s now part of the model’s learning journey, as outlined on OpenAI’s Privacy Policy page.

OpenAI claims that data is de-identified before being used, but as with any disclaimer, there’s a catch: the data exists in raw form for a period before anonymisation. And during that time, there’s a chance, however small, that it could be mishandled.

Case in point: Samsung. Those employees didn’t just share code – they effectively handed over intellectual property to a system they couldn’t control. And while OpenAI promises not to sell your data to third parties, it does share it with “trusted service providers.”

4.  What Happens at Work Stays at Work… Or Does It?

When it comes to the workplace, tools like ChatGPT have revolutionised productivity, helping with tasks like writing reports or summarising meetings. But as with any powerful tool, it’s important to be aware of the risks that come with its use.

Here’s a crucial point to consider: each time you use ChatGPT for work, you might be violating company policy without even realising it. Many organisations are now cracking down on AI use on company systems and with company data, precisely because of incidents like Samsung’s.

The urge to offload routine tasks to ChatGPT is understandable, but keep in mind that every input carries the potential for a data leak. That brilliant marketing strategy you just brainstormed? It’s now sitting on a server, accessible to someone you’ve never met.
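
If you must paste work material into a chatbot, at least scrub it first. Below is a minimal, hypothetical sketch in Python – the regex patterns, the placeholder labels, and the scrub() helper are all illustrative assumptions, not part of any official tool or API:

```python
import re

# Illustrative patterns only -- not a complete PII detector.
# Order matters: more specific patterns (API keys, card numbers)
# should run before broader ones.
PATTERNS = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")),
    ("API_KEY", re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b")),
    ("CARD", re.compile(r"\b(?:\d[ -]?){12,15}\d\b")),
]

def scrub(text: str) -> str:
    """Replace anything that looks like sensitive data with a placeholder."""
    for label, pattern in PATTERNS:
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = ("Draft a reply to jane.doe@example.com about invoice 42, "
          "card 4111 1111 1111 1111, API key sk-abcdef1234567890XYZ.")
print(scrub(prompt))
# Draft a reply to [EMAIL REDACTED] about invoice 42,
# card [CARD REDACTED], API key [API_KEY REDACTED].
```

A handful of regexes will never catch everything, of course. The point is to make “scrub before you send” a reflex rather than an afterthought.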

5.  Nothing Is Quite Safe Here

This one may seem like a no-brainer, but never, EVER share passwords or login credentials, particularly those tied to your organisation, with anyone – and that includes public AI platforms. It’s like handing your keys to a stranger and crossing your fingers that they’ll never use them.

Passwords are meant to be private. If you need help managing them, use a secure password manager, not an AI chatbot. Financial information? Don’t get me started on that topic.

Sharing credit card numbers, bank details, or any other financial information with ChatGPT is a disaster waiting to happen. Even if the platform seems secure, why take the risk?

Need to crunch some numbers? Use a calculator. Want budgeting advice? Try an app designed for that purpose or, easier still, ask an advisor. Your financial data is too precious to gamble with.

But What About Productivity?

Now, I’m not saying you should ditch ChatGPT altogether. When used responsibly, it’s an incredible tool. But boundaries are crucial.

Treat ChatGPT and similar tools like a public forum. If you wouldn’t post something on social media, don’t share it with AI. And if you’re using it for work, always double-check your company’s policies first.

The Samsung debacle serves as a cautionary tale. When employees used ChatGPT to debug code, they unknowingly jeopardised corporate secrets. It’s a mistake anyone could make – especially when the platform feels so intuitive and harmless.

ChatGPT isn’t your diary, and it certainly isn’t your confidant. Every input is a potential vulnerability.

Proceed with Caution

At the end of the day, ChatGPT is a tool – a powerful, fascinating, game-changing tool. But like any other tool, it’s only as effective (and safe) as the person wielding it.

So, the next time you sit down for a chat with AI, pause and ask yourself:

Is this information I’d be comfortable sharing in a room full of strangers?

If the answer is no, keep it to yourself.

Because while tools like ChatGPT are great at helping you solve problems, they might not be the most reliable when it comes to safeguarding your confidential information.

Izzat Najmi Abdullah

Izzat Najmi bin Abdullah is an up-and-coming journalist in the tech world, working for Asia Online Publishing Group. He specialises in cloud computing, artificial intelligence, and cybersecurity, and has a passion for exploring the latest innovations and trends in these fields. Najmi is determined to become a recognised expert in the industry and hopes that his articles provide readers with valuable insights into the fast-paced world of technology. As an English Literature graduate, he combines his love for the language with his interest in the tech field to offer a unique perspective on how technology is evolving, with the goal of becoming the Shakespeare of the tech industry.
