May 09, 2024 Feature

Modeling a Privacy Framework for Trustworthy AI

Chuma Akana

In October 2023, the U.S. president signed an Executive Order focused on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). The order recognizes the potential of responsible AI to help solve urgent challenges and outlines eight guiding principles and priorities for trustworthy AI, which include ensuring the safety and security of AI and protecting the privacy and civil liberties of Americans. In the same vein, the National Security Telecommunications Advisory Committee (NSTAC) has proposed that by 2028, Americans should be able to rely on technological advancements to protect their privacy and ensure the safety and security of their data. As AI advances, it has the potential to analyze personal information in new and more intrusive ways, posing a threat to privacy. However, there are proposals to build privacy protection into AI design, and this article asserts that trustworthy AI also demands specific privacy laws.

AI systems collect and analyze massive amounts of data. Many privacy-sensitive activities, such as search algorithms, recommendation engines, and ad tech networks, now rely on machine learning (ML) and algorithmic decision-making. It is therefore crucial to have a privacy framework specifically designed to address the challenges posed by AI.

In this age of generative AI and ML, it is important to harness the benefits of the technology while also ensuring adequate privacy protection for users. The National Institute of Standards and Technology (NIST) provides guidelines for AI trustworthiness, which include accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security, and the mitigation of harmful bias. It is also important that diversity, equity, inclusion, and accessibility be prioritized throughout the entire process of designing, developing, implementing, iterating on, and monitoring AI systems.

AI systems, particularly those utilizing ML, can operate in complex and opaque ways, making it challenging for individuals to understand how decisions are made about them. Data governance is therefore crucial to achieving trustworthy AI: the full pipeline of development and implementation of every AI system must be considered, including the objectives for the system, how the model is trained, what privacy and security safeguards are needed, and what the implications are for the end user and society. Furthermore, explaining what training data and features have been selected for an AI system, and whether they are appropriate and representative of the population, can help counteract common types of AI bias and unfairness. Issues such as complex decision-making, data minimization, bias and fairness, explainability, and cross-border data transfers should therefore be taken into consideration while developing AI systems.

Privacy for Trustworthy AI

The NIST Artificial Intelligence Risk Management Framework emphasizes the importance of privacy values such as anonymity, confidentiality, and control in guiding choices for AI system design, development, and deployment. Privacy-related risks can affect security, bias, and transparency, and there may be trade-offs between these characteristics. As with safety and security, specific technical features of an AI system can either promote or reduce privacy. Furthermore, AI systems can pose new risks to privacy by enabling inferences that identify individuals or reveal previously confidential information about them.

In an attempt to regulate AI with data privacy law, the authors of the California Privacy Rights Act borrowed language on “automated decision-making” (ADM) technologies directly from the General Data Protection Regulation (GDPR). As defined by the GDPR, ADM technologies are those that have “the ability to make decisions by technological means without human involvement,” and the GDPR gives data subjects the right not to be subject to any such automated decision insofar as it produces legal or similarly significant effects. While ADM is not synonymous with AI (ADM is rule-based and follows predetermined instructions, while AI can learn from data and make decisions based on those data), a broad range of AI-driven processes meet the GDPR’s definition and have therefore been directly impacted by the law. Given AI’s reliance on vast quantities of data, regulating AI through special privacy law is not only inevitable but also a compelling strategy that should be carefully considered as the law explores approaches to mitigating AI’s risks. Generally, the focus has been on algorithms, but as the GDPR demonstrates, data regulation can also be a tool for constraining the contexts in which AI can be used. Though the GDPR encourages privacy by design and aims to prevent any potential misuse of personal data through technological and organizational strategies, these provisions are being challenged by the new ways in which AI enables the processing of personal data. For instance, traditional data protection principles such as purpose limitation, data minimization, sensitive data handling, and automated decision restrictions are in tension with the full computing potential of AI and big data.

With complex decision-making processes come extra layers of privacy considerations, as individuals’ data are processed for decisions and predictions. Specific privacy laws on AI transparency would focus on understanding the workings of the AI system, including how it makes decisions and processes data. Moreover, the latest advancements in deep learning are focused on creating explainable models and allowing individuals to understand the reasons behind the decisions made by AI. This is crucial in decision-making processes that have a significant impact on society, such as health care and finance. AI transparency would build trust with customers, detect and address potential data biases, and enhance the accuracy and performance of AI systems. AI-specific privacy laws could address the need for transparency and accountability in automated decision-making. There is also an argument for the use of differential privacy—a privacy-enhancing technology that quantifies the privacy risk to individuals when their data appear in a dataset—to publish analyses of data and trends without being able to identify any individual within the dataset.
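To make the idea concrete, the following is a minimal sketch of differential privacy using the Laplace mechanism; the dataset, query, and epsilon values are hypothetical, and real deployments rely on vetted libraries and careful privacy budgeting.

```python
# Minimal sketch: publishing a noisy count so aggregate trends can be released
# without exposing whether any single individual is in the dataset.
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Return a differentially private version of a counting query.

    A count changes by at most 1 when one person's record is added or
    removed, so its sensitivity is 1; the Laplace mechanism adds noise
    scaled to sensitivity / epsilon.
    """
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: 4,217 records match some condition.
# A smaller epsilon means stronger privacy but a noisier published statistic.
print(laplace_count(4217, epsilon=0.5))
print(laplace_count(4217, epsilon=5.0))
```

The trade-off the law would have to weigh is visible in the epsilon parameter: stronger privacy guarantees come at the cost of less precise published statistics.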

The GDPR provides that personal data shall be adequate, relevant, and limited to what is necessary in relation to the purposes for which they are processed (data minimization); however, AI systems often rely on vast amounts of data to train and improve their models, raising concerns about the amount of data these systems collect and utilize. Data minimization includes restrictions on what data are collected, the purposes for which they can be used following collection (purpose limitation), and the amount of time firms can retain data. These rules require companies to demonstrate the necessity and proportionality of the data processing: to prove that collecting certain kinds of data is necessary for the purposes they seek to achieve, to state that they will only use such data for predefined purposes, and to ensure that they will only retain data for a period that is necessary and proportionate to those purposes. Typically, certain types of data classified as “sensitive” receive a heightened level of protection; for example, the collection of biometric data requires a stricter showing of necessity. For AI systems processing large data sets, the key question is what is adequate or proportionate, as the general approach in designing and building AI systems involves collecting and using as much data as possible, without considering ways to achieve the same purposes with less data. The answer will be case-specific, and all relevant data minimization techniques for AI should be fully considered during the design phase. AI-specific privacy laws could emphasize principles such as data minimization and purpose limitation to ensure that only necessary data are collected and used for specific, legitimate purposes.
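What data minimization and purpose limitation might look like in an engineering workflow is sketched below; the field names, the "fraud_scoring" purpose, and the 30-day retention window are illustrative assumptions, not requirements drawn from any particular statute.

```python
# Hypothetical sketch: keep only fields declared necessary for a stated purpose,
# drop direct identifiers, and flag records held past a retention window.
from datetime import datetime, timedelta, timezone

PURPOSE_FIELDS = {
    "fraud_scoring": ["transaction_amount", "merchant_category", "account_age_days"],
}
RETENTION = timedelta(days=30)  # illustrative retention period

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields declared necessary for the given purpose."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

def expired(collected_at: datetime) -> bool:
    """Flag records held longer than the declared retention period."""
    return datetime.now(timezone.utc) - collected_at > RETENTION

raw = {
    "name": "Jane Doe",           # direct identifier: not needed for scoring
    "email": "jane@example.com",  # direct identifier: not needed for scoring
    "transaction_amount": 125.40,
    "merchant_category": "grocery",
    "account_age_days": 512,
}
print(minimize(raw, "fraud_scoring"))
print(expired(datetime.now(timezone.utc) - timedelta(days=45)))  # True: past retention
```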

According to the president’s Executive Order, it is important to ensure that AI is developed and used in a way that advances equity and civil rights. AI should not be used to perpetuate discrimination or disadvantage; rather, it should be utilized to improve people’s lives. However, AI systems can inherit biases present in their training data, which can lead to discriminatory outcomes. The Blueprint for an AI Bill of Rights recognizes the need for algorithmic discrimination protections to guide the use and deployment of AI systems. Specialized privacy laws could be implemented to address bias in AI systems and could incorporate bias mitigation strategies and promote risk management measures during and after the processing of data.

Machine Learning and the Right to Explanation

The right to explanation refers to the concept that an ML model and its output can be explained in a way that “makes sense” to a human being at an acceptable level. Certain classes of algorithms, including more traditional ML algorithms, tend to be more readily explainable while being potentially less performant; others, such as deep learning systems, remain much harder to explain. Improving the ability to explain AI systems remains an area of active research. In its 2016 report and its 2020 guidelines, the Federal Trade Commission leaves no doubt that the use of AI must be transparent, include explanations of algorithmic decision-making to consumers, and ensure that decisions are fair and empirically sound. Providing data subjects with an explanation is important because individuals have the right to be informed of how their data are being processed, particularly where solely automated decision-making produces legal or similarly significant effects. This means that individuals must be provided with a meaningful explanation of the logic behind the AI system, as well as the possible consequences of the processing. Organizations that deploy AI technology must have detailed documentation in place to explain how and why their data are being processed. Furthermore, the data embedded in machine-learning models must be explicitly included when considering consumers’ rights to delete, know, and correct their data. As AI systems make decisions that impact individuals, there is a need for privacy laws that grant individuals the right to understand and challenge decisions made by algorithms.
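The sketch below is one simplified way an organization might generate a per-decision rationale, not a method prescribed by the FTC or the GDPR; the credit-style features, labels, and applicant values are all hypothetical, and a deployed system would need far more rigorous validation.

```python
# Illustrative sketch: an interpretable linear model whose per-feature
# contributions can be surfaced as an explanation to the affected individual.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: income (in $1,000s), debt-to-income ratio, years employed.
feature_names = ["income_k", "debt_ratio", "years_employed"]
X = np.array([[55.0, 0.30, 4],
              [23.0, 0.65, 1],
              [78.0, 0.20, 9],
              [31.0, 0.55, 2]])
y = np.array([1, 0, 1, 0])  # toy labels: 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

applicant = np.array([[40.0, 0.50, 3]])
decision = model.predict(applicant)[0]

# For a linear model, each feature's contribution to the log-odds is
# coefficient * feature value, which yields a human-readable rationale.
contributions = model.coef_[0] * applicant[0]
print("Decision:", "approved" if decision == 1 else "declined")
for name, value in zip(feature_names, contributions):
    print(f"  {name}: {value:+.3f} contribution to log-odds")
```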

Additionally, AI relies on processing massive amounts of data to produce useful insights, which reinforces the importance of rules governing cross-border data transfers. AI heavily depends on other data-intensive cross-border activities that are subject to digital trade regulations, such as cloud computing services and data collection from Internet of Things (IoT) devices. Limiting cross-border data transfers, as existing privacy laws do, could slow down the development of AI by restricting access to training data and essential commercial services. However, the lack of a sufficient regulatory framework raises concerns about the rapid growth of AI, including the weaponization of AI, misinformation, surveillance, bias, and intellectual property protection. These risks highlight the need for privacy laws specific to AI that can provide clarity on how cross-border data transfers, especially those involving personal data, should be handled.

Proactive, Transparent AI Development and Policy

It is important to understand the complex relationship between privacy regulations and the trustworthy use of AI. In the past year, there have been various proposals to enable the development of trustworthy AI, such as the NIST Artificial Intelligence Risk Management Framework 1.0 and the president’s Executive Order. On a global scale, there are the EU AI Act passed by the European Parliament and UNESCO’s Recommendation on the Ethics of Artificial Intelligence, which aims to establish ethical principles and values for the development and use of AI.

The extraordinary ability of AI to analyze data and make complex evaluations increases privacy concerns. To protect user privacy in the face of AI’s ability to analyze data, it is essential to proactively regulate AI technology by anticipating future developments and implementing preemptive measures. Regulatory frameworks must be dynamic and responsive to the technology’s changes, demanding transparency from developers about their algorithms and data sources. Developers should create models that respect user privacy by minimizing data requirements and implementing robust data protection measures, while innovative approaches like differential privacy and federated learning should be utilized. Therefore, adopting AI-specific privacy laws is crucial to address the unique challenges posed by AI.
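Federated learning, mentioned above, can be sketched in a few lines; the toy linear-regression task, client data, and hyperparameters below are purely illustrative assumptions, but they show the core idea that raw personal data stay on each client while only model parameters are aggregated.

```python
# Minimal, hypothetical sketch of federated averaging (FedAvg): clients train
# locally on private data and the server only ever sees model weights.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])  # ground truth for the toy task

def make_client(n=50):
    """Generate one client's private local dataset (never shared with the server)."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(weights, X, y, lr=0.1, steps=20):
    """A client's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

clients = [make_client() for _ in range(3)]
global_w = np.zeros(3)

for _ in range(10):
    # Only locally trained weights travel to the server, which averages them.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)

print("Global weights learned without centralizing raw data:", np.round(global_w, 2))
```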


    Chuma Akana

    Chuma Akana is a Tech, Law & Security Program Fellow at the American University Washington College of Law and completed his LLM in Intellectual Property and Technology Law. His research is focused on privacy, AI, and emerging technologies. Previously, he worked as a foreign-trained attorney and advised on global privacy compliance.