As modern consumers we’re surrounded by more information than ever before. For us to access this information on our smart devices or computers, we’re often asked to sacrifice our privacy. The quicker we tap ‘accept all’, the faster we get to read the articles or watch the videos.
Put simply, we trade privacy for convenience. Why? Because many people believe the benefits of convenience outweigh the potential risks of losing their privacy.
The give and take of data mining
Millions of people use social media platforms like Facebook and Instagram every hour of every day. Those platforms collect and analyse user data to provide the consumer with targeted advertisements and content. In exchange, the user is able to easily connect with friends and family, share photos and videos, and stay up-to-date on news and events.
Similarly, many people use voice assistants like Amazon's Alexa and Google Home. These devices collect and store recordings of users' voice commands, which helps improve the accuracy of their responses. The pay-off is that users can easily control smart home devices, play music and get answers to questions without having to search for them manually.
So what about the new kid on the block, the AI language model ChatGPT?
How does ChatGPT use personal data?
When a product is free, the company usually makes its money by monetising the data generated by its users; in effect, your personal data is the product. With ChatGPT this goes a step further: your data isn't just collected around the edges, it's baked into the model itself.
It’s said that ChatGPT has something of a split personality. Despite everyone knowing it’s AI, it often appears to be eerily human. There’s a very good reason for that. ChatGPT is actually a Frankenstein of personal data.
ChatGPT was built on posts from Reddit, as well as swathes of other web pages and social content, and trained to turn all that information into its knowledge base. And just like your 'DNA relatives' on 23andMe, you probably share about 0.00001% of your 'personality' with it, assuming you're one of the many millions of people active online.
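To make that concrete, here's a toy sketch in Python of what 'learning from our posts' means. It's nothing like the transformer training OpenAI actually uses, and the sample posts are invented, but it shows the basic move: a model ingests text people wrote and learns which words tend to follow which.

```python
import random
from collections import defaultdict

# Toy "training data": a few invented posts standing in for the
# millions of Reddit threads and web pages real models ingest.
posts = [
    "i love my new phone but i worry about my privacy",
    "i love sharing photos with my friends online",
    "my privacy matters more than convenience",
]

# "Training": count which word tends to follow which.
follows = defaultdict(list)
for post in posts:
    words = post.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

def generate(start_word, length=8):
    """Generate text by repeatedly picking a plausible next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        if word not in follows:
            break  # dead end: no post ever continued from this word
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate("i"))  # e.g. "i love my privacy matters more than convenience"
```

Scale that counting trick up to billions of parameters and a huge slice of the public internet, and you get something that can sound uncannily like us, precisely because it learned from us.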
ChatGPT has taken AI to another level, and that’s simply down to the personal data we’ve chosen to share. Now that’s something to get your human brain around.
Why Italy gave ChatGPT the boot
On 31st March 2023, Italy's data protection authority decided that this use of personal data wasn't legal under GDPR, and ordered OpenAI, the company behind ChatGPT, to remove any information belonging to Italian citizens from its training data. This was an impossible ask. The only option OpenAI had was to block Italian users' access to ChatGPT, a lose-lose situation for the company and for those desperate to try out the AI phenomenon that's got the world talking.
This time it’s personal...data
This isn't the first time something like this has happened. DALL-E tells a similar story, but in that instance it was artists' work being used as training data. The key difference here is that because personal data is involved, people feel more threatened. And rightly so.
The breach...of trust
To add insult to injury, OpenAI also faced a global PR disaster. A bug in an open-source library used by ChatGPT meant that some users were shown chat history titles that actually belonged to completely different users. Imagine getting a text that was intended for someone else; there could be literally anything in that message.
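OpenAI's public post-mortem traced the leak to a bug in redis-py, the open-source library it used for caching: a request cancelled at the wrong moment could leave its reply stranded on a shared connection, so the next request read someone else's data. The Python below is a deliberately simplified sketch of that failure mode, not OpenAI's real code; the class and queries are invented for illustration.

```python
from collections import deque

class SharedConnection:
    """Stand-in for a shared cache connection: replies come back in send order."""

    def __init__(self):
        self.replies = deque()

    def send(self, user, query):
        # Pretend the server answers instantly (and, in this toy,
        # ignores the query). The reply waits in the connection
        # buffer until somebody reads it.
        self.replies.append(f"chat history for {user}")

    def read(self):
        # Returns whatever reply is first in line, with no check that
        # it belongs to the request we just sent.
        return self.replies.popleft()

conn = SharedConnection()

# User A's request is sent, then cancelled before the reply is read.
conn.send("alice", "load my chats")
# ...request cancelled here: alice never calls conn.read()...

# User B's request goes out over the same shared connection.
conn.send("bob", "load my chats")
print(conn.read())  # -> "chat history for alice" (bob sees alice's data!)
```

The fix for this class of bug is simple to state and fiddly to get right: if a request is cancelled mid-flight, the shared connection has to be drained or thrown away rather than reused.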
Despite being a minor technical glitch, it had a major impact on users. It fuels imaginations and gets people thinking, and talking, about what else could go wrong, and who (or what) would be accountable when it does. Much as we demand that self-driving cars be many times safer than human drivers, we expect technology to protect our privacy far better than any human could.
But is it really that bad?
The simple answer is no. Well, no worse than many other centralised large language models. As an AI language model, ChatGPT provides the convenience of instant and personalised responses to a world of questions and topics. Not only can it help users find information and solve problems, but it can also engage in casual conversation and help make tedious tasks simpler. Just remember that this technology records and analyses conversations to help improve its performance.
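If you do want to chat without oversharing, a little pre-processing goes a long way. Below is a minimal Python sketch of the idea: scrub anything that looks like personal data out of a prompt before it leaves your machine. The patterns here are illustrative assumptions, not a complete solution; simple regexes will never catch everything.

```python
import re

# Patterns for things that look like personal data. Illustrative
# only: real redaction needs far more than three regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"(?<!\w)\+?\d[\d\s-]{7,}\d\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(prompt: str) -> str:
    """Replace anything matching a pattern before the prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

prompt = "Email me at jane.doe@example.com or call +44 7700 900123"
print(scrub(prompt))
# -> "Email me at [email removed] or call [phone removed]"
```

Real redaction tools go further, using named-entity recognition to catch names and addresses, but the principle is the same: the safest personal data is the data that never gets sent.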
So what will it be? Privacy or convenience?
As you can see, it’s near impossible to completely stop your data being used, and potentially misused. But you can control how much of your information you share. All it takes is a little knowledge and a lot of patience. So think before you swipe, tap and click, and really consider the benefits of convenience versus the potential risks of losing your privacy.