In 2018, it came to light that Cambridge Analytica, a British consulting firm, had harvested the personal data of more than 50 million Facebook users without their consent. That data was then used to target voters with political ads during the 2016 US presidential election.
The firm filed for bankruptcy in May 2018, following a series of scandals involving the misuse of personal data. Facebook suspended Cambridge Analytica from its platform, several of the company's executives resigned, and the company faced a number of lawsuits from regulators and individuals.
The Cambridge Analytica scandal served as a cautionary tale about the dangers of data misuse. Online users have since become much more aware of, and careful about, how they interact in the digital space, and data brokers have all closed their businesses for fear of being sued.
The internet is now a much safer space for all of its users, especially children and vulnerable adults.
The End.
Above is the article I wish I could have written.
Here's what I have to write instead...
According to the Privacy Rights Clearinghouse (PRC), between 2005 and 2022 there were 20,161 data breaches reported in the US (that we know of), affecting 2,048,320,547 records (that we know of). You may have heard of some of them: the recent 23andMe incident, the LinkedIn data breach in 2021, Facebook in 2019, and Yahoo in 2017 and 2014 (among many, many others).
The reason the 23andMe story jumps out is that it highlights the dangers of our interconnected data. Unlike in most data breaches, the company's servers remained uncompromised. Instead, hackers focused their efforts on individual user accounts, specifically targeting those with weak or reused passwords. Once those accounts were breached, the hackers could exploit 23andMe's "DNA Relatives" matching feature to access information about thousands of other users.
Your data can be hacked through a platform you don't even use.
How did we get here?
"Information Policy -- Thoughts for the 80s" is a paper I recently stumbled upon, written by computer pioneer, Willis H. Ware, in August of 1982. Ware co-developed the IAS machine, and accurately forecasted that “a small computer may conceivably become another appliance in the home.” I was particularly curious to read his thoughts on the matter because Willis Ware also happens to be a pioneer of privacy rights and computer security.
Let me preface this by saying:
Reading old technology papers makes for a very sobering experience.
On the one hand, you see how much this space has advanced in just a few decades; on the other, you're forced to realize that many of the problems we were worried about "back then" are not only still relevant today but, in some cases, even worse.
Ware's paper is a perfect example of that.
Less than 50 years ago, the dominant privacy concern was recordkeeping systems.
For example, mail in the custody of the U.S. Postal Service is governed by various statutes that require the mail to be physically protected, and failure to do so carries consequences. Vendors could comply because "physical protection is well understood."
But what happens in the digital era, once an electronic message moves from the custody of the postal service to that of a telecommunications carrier?
This question sets the stage for Ware's paper. And, in the spirit of Cyber Security Awareness Month, it made me wonder:
How Have Data Privacy Concerns Evolved Over Time?
We live in a world where our data is collected and used by companies and governments in ways that we often don't understand or consent to.
Our data is used to target us with advertising, to track our movements, and to make decisions about our lives. And the emergence of AI has thrown a monkey wrench in an already struggling system.
Early Data Privacy Concerns
Data privacy concerns have existed for as long as information has been recorded and shared. In the early days, these concerns were often centred around government surveillance and recordkeeping.
Key pieces of privacy legislation, such as the U.S. Privacy Act of 1974 and the EU Data Protection Directive of 1995, laid the groundwork for modern data protection regulations. These early laws shaped the development of more recent, far-reaching standards like the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
In the 1980s, data privacy concerns began to shift from the government to the private sector. Companies started to collect and use more personal data, leading to concerns about how this information was being handled. This transition marked the start of a significant shift in the data privacy landscape.
The Internet and Social Media
The advent of the internet and the proliferation of social media platforms accelerated the collection and use of personal data by companies. The sharing of personal information on these platforms, often without a full understanding of the extent of data being disclosed, led to numerous privacy breaches and controversies (see MySpace 2016, LinkedIn 2012 and 2016, Twitter 2020, and the many ongoing privacy concerns surrounding TikTok).
Privacy Advocacy and Activism
Privacy advocates, whistle-blowers, and activists have played a vital role in raising awareness and driving change. Figures like Edward Snowden and organizations like the Electronic Frontier Foundation have been instrumental in exposing privacy violations and advocating for stronger data protection measures.
The emergence of big data and artificial intelligence has given companies and governments unprecedented power to collect and analyze personal data.
This newfound capability has led to concerns about the use of data for mass surveillance, social engineering, and discriminatory decision-making.
We don't yet have solid stats on the number of people working in tech/AI alignment, but it's not nearly enough. This imbalance is a concern: it means there are too few people working to protect our technology from misuse.
And as data privacy concerns continue to evolve, there is a growing emphasis on the ethical dimensions of data collection and usage. Issues like consent, data transparency, and responsible data handling are at the forefront of discussions.
Issues that Willis H. Ware rightly brought up more than 40 years ago.
We would love it if people learned about online security in the same boring way they learn about nutrition: a "security pyramid" with all of the essentials △
Base:
Strong passwords and two-factor authentication (2FA): Use strong, unique passwords for all of your online accounts, and enable 2FA whenever possible (there's a rough sketch of what "strong" can look like right after this tier).
Privacy-focused browsers: Use a privacy-focused browser such as DuckDuckGo or Brave to protect your browsing history and online activity from tracking.
Virtual private network (VPN): Use a VPN to encrypt your internet traffic and hide your IP address.
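If you're curious what "strong and unique" actually means in practice, here's a minimal Python sketch (standard library only) of how a password manager might generate credentials. The character set, lengths, and the tiny word list are illustrative assumptions on my part, not a recommendation from any particular standard.

```python
import secrets
import string

# Characters to draw from: letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A tiny illustrative word list; a real passphrase generator would draw
# from a much larger list (e.g. the EFF diceware word list).
WORDS = ["orbit", "maple", "quartz", "lantern", "pixel", "harbor", "copper", "violet"]

def random_passphrase(words: int = 5) -> str:
    """Return a passphrase of randomly chosen words separated by dashes."""
    return "-".join(secrets.choice(WORDS) for _ in range(words))

if __name__ == "__main__":
    print(random_password())    # e.g. 'k#R9v]xT...'
    print(random_passphrase())  # e.g. 'maple-orbit-violet-pixel-copper'
```

The point isn't the exact recipe; it's that the randomness comes from a cryptographically secure source (`secrets`) rather than something guessable, and that every account gets its own value.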
Middle:
Use a password manager: A password manager can help you to create and manage strong, unique passwords for all of your online accounts.
Keep your software up to date: Software updates often include security patches that can protect you from known vulnerabilities.
Be careful about what information you share online: Only share personal information with websites and people that you trust.
Top:
Use end-to-end encryption: End-to-end encryption scrambles your messages and files so that only you and the person you're talking to can read them; even the company that provides the service can't. (See the toy sketch after the pyramid.)
Use a security key: A security key is a physical device that adds an extra layer of security to your online accounts.
Be aware of social engineering attacks: Social engineering attacks are attempts to trick you into revealing personal information or clicking on malicious links.
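To make the end-to-end encryption point a little more concrete, here's a toy sketch using the PyNaCl library (my choice for illustration; real messengers such as Signal use far more elaborate protocols with key exchange and ratcheting). All it demonstrates is the core idea: the keys live on the two endpoints, so a service relaying the message only ever handles ciphertext.

```python
# Toy end-to-end encryption sketch using PyNaCl (pip install pynacl).
# An illustration of the concept, not how any real messenger works.
from nacl.public import PrivateKey, Box

# Each person generates a key pair on their own device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
alice_to_bob = Box(alice_key, bob_key.public_key)
ciphertext = alice_to_bob.encrypt(b"meet at noon")

# The service in the middle only ever sees 'ciphertext'.
# Bob decrypts on his own device with his private key and Alice's public key.
bob_from_alice = Box(bob_key, alice_key.public_key)
print(bob_from_alice.decrypt(ciphertext))  # b'meet at noon'
```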
What Are The Next 40 Years Going to Look Like?
Data privacy concerns are no longer confined to specific regions; they have global implications. International efforts are underway to harmonize data protection standards and facilitate cross-border data flows. However, a lot of work still needs to be done, and, in most cases, this work has to start with you, the consumer.
New and emerging threats to data privacy include deepfakes, quantum computing's potential impact on encryption, and the ethical considerations surrounding the Internet of Things (IoT) and wearable technology.
To counteract some of these threats we could focus on:
Stricter regulations
Enhanced User Control
Privacy by Design - what we’re doing at Visible
Advanced Encryption
AI-Powered Privacy Tools
Transparent Data Practices
Global Collaboration
Education and Awareness
The future of data privacy is crucial for the functioning of society, and it requires proactive engagement from people, businesses, and governments. We must continue to evolve and adapt to the ever-changing challenges of data privacy to ensure that our digital world remains a safe and secure space for all.