February 21, 2024

Importance of Digital Privacy in Light of Emerging Technology

This is a longer-form version of "Big Question: How Does Digital Privacy Matter for Democracy and its Advocates?" published by the National Endowment for Democracy on January 22, 2024.

The materials contained herein represent the opinions of the authors and editors and should not be construed to be those of the American Bar Association unless adopted pursuant to the bylaws of the Association. Nothing contained herein is to be considered as the rendering of legal advice for specific cases, and readers are responsible for obtaining such advice from their own legal counsel. These materials and any forms and agreements herein are intended for educational and informational purposes only.

Privacy is often referred to as a 'gateway' right, fundamental to achieving human rights such as freedom of expression, thought, belief, association, assembly, and non-discrimination. At its core, privacy establishes the boundaries that give us space to develop our personality and shape how we interact with the world around us. Privacy is not a static concept, however; it is shaped by cultural, social, and individual norms and contexts, including technological advancement and modern innovations: you share different information about yourself with your friends than you do with your boss. Our definition of privacy is also influenced by how members of our community and networks use information about us. The Cambridge Analytica scandal, in which the personal information of 300,000 Facebook users was harvested and used to collect information about their contacts, ultimately affecting about 87 million users, highlights the networked nature of privacy in the digital age.

Our privacy can be compromised not only by the information we choose to share about ourselves, but also by others sharing or even gathering information about us, such as contact information or pictures. Violations of privacy like this can lead to identity theft, reputational damage, and harassment both online and offline. As Cambridge Analytica demonstrated, privacy breaches can also be used to manipulate public opinion, threatening key government institutions and processes such as elections.

The speed of technological innovation, made possible through large data sets and AI, exacerbates the challenges of protecting privacy in the digital world. This article explores three critical challenges to our right to privacy in the digital age: 

  1. The predominant business model of most technology companies, based on harvesting and analyzing massive amounts of personal and non-personal information; 

  2. The impact of emerging technologies blurring the lines between online and offline harms; and 

  3. Cybersecurity threats posed by spyware such as Pegasus, specifically targeted at undermining our digital privacy. 

Business Model

The primary business model of the tech sector, which relies on collecting, analyzing, and selling massive amounts of user data, conflicts with our right to privacy, enshrined in documents such as the Universal Declaration of Human Rights (Art. 12) and the International Covenant on Civil and Political Rights (Art. 17). This is not only a safety risk for Human Rights Defenders (HRDs), but also forces them into a Faustian bargain: HRDs must use a system founded on the violation of privacy rights to communicate with the world about this same abuse and others. The scarce avenues for engaging tech companies on rights abuses stemming from their products are overwhelmed by demand and staffed by representatives who often lack the authority to implement wide-ranging policy reforms, limiting the possibility of the private sector enacting the system-wide changes necessary to protect digital privacy rights. 

Current efforts to address digital privacy challenges seem to accept this business model, and the necessity of data collection, as a given. There is an overemphasis on data protection, which concentrates solely on ensuring that data remains secure during processing, is protected from unauthorized access, and retains its integrity, while neglecting other crucial aspects of digital privacy. The "security" carve-outs prevalent in most data protection laws grant authorities extensive discretion and create a security-centric approach that compromises privacy rights. This can be seen in legislation like the UK Online Safety Act, which undermines end-to-end encryption, and in real-name registration requirements in places like Australia, India, Germany, France, and Sweden, all of which can compromise communication with HRDs in repressive states. 

Furthermore, while a robust body of human rights law for online privacy exists, the global technology discourse that shapes privacy standards often fails to resonate in global majority countries due to its Eurocentric bent, a predictable outcome given the underrepresentation of these countries at forums, such as the UK's recent AI Safety Summit, where these issues are discussed. This homogenization results in a misalignment of values and creates space for authoritarian governments to promote repressive tech regulations that further threaten privacy rights. 

Emerging Technologies 

Emerging technologies such as the Internet of Things (IoT) and augmented and virtual reality (AR/VR), along with increasingly sophisticated algorithms supporting AI and automated decision making, will only compound the challenges posed to privacy in the digital world. Take, for example, facial recognition technology (FRT), used here to refer to both one-to-one and one-to-many recognition technologies. In real life (IRL), even though our faces can be seen by everyone we encounter in our daily lives, most people, apart from celebrities and public figures, have a reasonable expectation of walking down the street or going into a shop, hotel, or house anonymously. 

The increasing use of FRT, however, equips governments and other actors with the power to track the movements of individuals IRL through CCTV, body cameras, and other means. This capability has been used to arrest protesters not only by authoritarian regimes such as Russia, but also by democracies from India to the United States. While protesting has always carried risks, with governments able to identify protesters through traditional means, emerging technology from AI-enabled FRT to GPS tracking greatly amplifies those risks. These technologies pose serious risks to our private lives as well. Consider the impact tracking via FRT can have on LGBTQ+ individuals seeking a private liaison at a hotel, or on political dissidents who fear putting family and friends at risk simply by meeting for coffee. 

Faceprints, digital maps of a face created by analyzing multiple images, further complicate the issue of facial anonymity. Faceprints can be used in data sets to train AI systems or to track individuals after the original images have been deleted, providing a means to circumvent existing data protection laws. While recent cases have found companies such as Meta (formerly Facebook) and Clearview AI liable for violating privacy laws in Illinois and Italy, the impact of these cases is limited to those jurisdictions. Exact data deletion, removing all traces of a specific piece of personal data, is nearly impossible. Moreover, given the dynamic nature of privacy and the potential for individuals to withdraw their consent at any time, data sets would need to be constantly monitored and updated, making such a solution costly and overwhelming.

Spyware 

The modern development of spyware poses one of the greatest challenges to digital privacy, one that is often hard to track and harder to prevent. Highly invasive spyware has become a bargaining chip between states and a tool for states or private entities to intrude into your personal domain, steal your personal data, or even control your digital devices. This is a major challenge especially when spyware is used against political activists, political dissidents, or HRDs. The blurred line between the protection of privacy and its derogation for the sake of national security or intelligence, along with the lack of comprehensive regulations, only exacerbates the situation. 

For example, the highly invasive Pegasus spyware presents a real threat to digital privacy across the globe. Pegasus can infect your phone without your knowledge, or even without any action on your part, through its "zero-click" exploitation technology. It can then access almost all the data on your phone, such as passwords, bank accounts, personal chats and messages, photos, and locations, and it can control your microphone and camera. 

Further complicating the ability to prevent privacy violations via Pegasus is that it can only be sold to governments or state entities. According to reports by organizations such as Amnesty International, Citizen Lab, and iLaw, many governments use Pegasus against HRDs, political dissidents, political activists, and journalists who criticize their governments, which can lead to chilling effects and self-censorship, restrictions on the exercise of their rights, or even death. As technology evolves rapidly, this rising threat to privacy will only become more complicated and harder to solve.

In Thailand, victims of Pegasus are challenging the government in court for violating their privacy. Two cases are currently pending: the first, an administrative case against the government authorities who purchased and used Pegasus against a political activist and a human rights lawyer; the second, a civil case against NSO Group Technologies, the Israeli company that designs, develops, and sells Pegasus to states. The Administrative Court dismissed the first claim on the basis that it was part of the criminal justice process and thus outside the Administrative Court's jurisdiction; that decision is now under appeal. The civil case remains ongoing, with a pre-trial conference set for February 5, 2024, and is hoped to establish a benchmark case providing a higher standard of protection for digital privacy. 

Questions To Think About

How do we deal with systems and tools created using data that, at the time, was not considered private? 

The dynamic nature of privacy, and its interrelation with photos or other information that we or others might share about us, is an increasing challenge. The use of faceprints, digital maps of a face created by analyzing multiple photos, enables actors to track individuals without relying on the photos themselves, which may be protected through existing consent regulations. Moreover, the dynamic nature of privacy necessitates a dynamic framework of the rights relating to digital privacy. Emerging concepts of human rights related to digital privacy, such as the right to be forgotten, could be one way to deal with these systems and tools. Relevant laws and business models can adopt or integrate these emerging concepts so that the level of privacy protection keeps pace with the dynamic nature of privacy itself.

Should we accept data collection as a necessity, or require companies and coders to come up with a new, innovative business model, one that is not based upon a human rights abuse? 

AI tools are increasingly being used to make decisions with significant impacts on people, from housing to sentencing, and numerous studies have shown the ill effects of this. But the impact of these decision-making systems on our privacy is underexplored. There is a very real risk that attempting to address the discrimination underlying AI decision-making models will result in the collection of even more personal data. We must closely examine the risks and benefits of such an approach: while data collection might be necessary for some services, we cannot neglect the challenge it poses to our privacy. A new, innovative business model or comprehensive regulations should be developed to address this challenge by striking a balance between the necessity of data collection and the protection of the right to privacy. 

How can we effectively challenge Western-centric views in adapting human rights frameworks to technological advancements? 

There is no "one size fits all" when it comes to adapting human rights frameworks, as each community has different cultures and values. Adaptation can include integrating indigenous governance methods, such as Ubuntu ethics, which emphasizes interconnectedness and offers a unique perspective for managing networked privacy and information integrity. Relevant laws and regulations, including their interpretation, can also be adapted to fit the social context of each country without sacrificing the core values of human rights protection. The right to privacy carries many risks to consider, such as the use of AI and personalized ads, but each country has different focal points it needs to weigh. By integrating those specific focal points, the adaptation of human rights frameworks to technological advancement can itself be one way to challenge Western-centric views. 

Author Info

The comments in this blog are personal opinions and do not represent any organization's point of view.

  • Chatmanee Taisonthi

  • Apirak Nanthaseree

  • Elizabeth Donkervoort  

ABA ROLI’s Work on Internet Freedom Issues  

Check out our current programming, with more information available in the ABA Center for Global Programs Annual Report 2023.

  • Advancing Rights in Southern Africa (ARISA) program - The program specifically focused on indigenous peoples’ rights, women’s customary land rights, media freedoms and digital rights, and the Angolan elections. Following the 2022 Judicial Symposium on Digitization in Nairobi, Kenya with the Democratic Governance and Rights Unit at the University of Cape Town and the Southern Africa Chief Justices Forum, ABA ROLI produced a bench book for regional judiciaries on computer crimes and cybercrimes. The bench book was validated through a 2023 workshop for pairs of one judge and one judicial officer from several regional judiciaries. The participants discussed the relevant law and the proposed use of the tool in relevant cases. ABA ROLI’s Deputy Chief of Party for the ARISA program highlighted ARISA’s enduring work in the digital rights space on a panel entitled, “The role of modern technologies in dispute resolution in national courts, tribunals, and other regional courts,” at the Southern and East Africa Chief Justices’ Forum annual meeting in late October. The bench book remains a critical tool for judges in the region who are increasingly faced with making determinations on cases connected to this issue.

  • Social Media and Rights at Trial (SMART) in Central Asia – This program aimed to ensure that political speech and the individual right to privacy in Kazakhstan, Kyrgyzstan, Tajikistan, and Uzbekistan are protected through the enforcement of privacy rights around social media activity in criminal cases.

  • Promoting Internet Freedom in Ukraine - ABA ROLI has implemented the Promoting Internet Freedom program since 2019. This program strengthens the knowledge and capacity of cross-sector IF champions to advocate for laws and policies that promote IF and online freedom of expression (FOE). To achieve this, we partner with local CSOs and experts, businesses, members of Parliament, and government agencies to organize expert symposia and conferences to share best practices in supporting IF through sound policy.

  • Defending Data Privacy in Southeast Asia - In Southeast Asia, ABA ROLI is implementing the Defending Digital Privacy program, which aims to create a space for lawyers and experts from across Southeast Asia to explore the emerging area of digital privacy law, develop a common understanding of the core value of digital privacy, and explore methods to protect and strengthen it in their jurisdictions. Launched in April 2023, the Right to Privacy training program was held through a combination of virtual and in-person events. The complete series, available on ABA ROLI’s learning platform, was launched OnDemand in August 2023. The R2P training program includes four modules on the right to privacy in the digital environment; international, regional, and national privacy policy frameworks; review of key topics in digital privacy; and national data privacy regulation and practices. To date, a total of 595 users have completed the program, far exceeding ABA ROLI’s initial target of 200 individuals trained.
