Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

by István Kopácsi

On October 30, 2023, President Biden issued an Executive Order (EO)[1] aimed at securing the United States’ (US) leadership in harnessing the potential and mitigating the risks of artificial intelligence (AI). The EO is an integral part of the Biden-Harris Administration’s comprehensive approach to responsible innovation.

1. Background[2]

Although the European Union (EU) is pushing ahead with its AI Act, the US is in fact advancing AI-related laws and governance at a quicker pace, which is crucial for three main reasons.

  1. The US can effectively engage with and challenge aspects of the AI Act and similar global AI laws.
  2. The US model, emphasizing regulatory infrastructure before comprehensive AI product regulation, can serve as a global example.
  3. The US has consistently demonstrated a strong, bipartisan commitment to AI governance, having started ahead of the EU.

It should not be forgotten that the US was an early mover: in February 2016, the Obama administration released a report on AI that introduced the principle of regulating on the basis of risk assessments, and the US has continued to pass significant AI legislation annually since 2019.

2. Summary of the EO[3]

As part of the US government's strategy for responsible innovation, the EO reflects a commitment to bolstering the safety and security of AI. It requires developers of the most powerful AI systems to share safety test results and other vital information with the US government, and it establishes standards to ensure trustworthy AI systems. The order also focuses on preventing the misuse of AI in creating dangerous biological materials and on enhancing cybersecurity by using AI tools to identify and address critical software vulnerabilities.

AI not only facilitates the extraction and exploitation of personal data but also creates strong incentives for such activities, since companies rely on data to train AI systems. The EO therefore prioritizes federal support for privacy-preserving techniques, including those that use advanced AI to keep training data private. It also funds research into technologies such as cryptographic tools that safeguard individual privacy and directs an assessment of how agencies collect and use commercially available data, with a focus on personally identifiable information.

To ensure that AI promotes equity and civil rights, the President directs specific actions, such as issuing guidance to prevent AI algorithms from worsening discrimination. Additionally, algorithmic discrimination is to be combated through training, technical support, and collaboration between the Department of Justice and federal civil rights offices to enhance the investigation and prosecution of AI-related civil rights violations. Furthermore, best practices are to be established for the use of AI in the criminal justice system, in areas such as sentencing, parole and probation, risk assessments, predictive policing, and forensic analysis.

To support workers, the President directs the establishment of guidelines and best practices for minimizing AI’s negative effects and maximizing its benefits, covering job displacement, labor standards, workplace equity, health, safety, and data management. The order also calls for a comprehensive report on AI’s potential impacts on the labor market and for exploring options to strengthen federal support for workers facing disruptions to their employment.

The US holds a prominent position in AI innovation, and the EO is designed to ensure this ongoing leadership. Its actions include accelerating AI research and fostering a fair, open, and competitive AI ecosystem by giving small developers and entrepreneurs access to technical support and resources. The order also intends to leverage existing authorities to expand opportunities for highly skilled immigrants and nonimmigrants with expertise in vital areas to study, reside, and work in the US.

To ensure responsible AI implementation in government and to modernize federal AI infrastructure, the order calls for several measures: providing agencies with comprehensive guidance on AI use, with well-defined standards to safeguard rights and safety; improving AI procurement processes; and strengthening AI deployment. It also involves expediting the recruitment of AI professionals and streamlining the acquisition of specific AI products and services to make government contracting more efficient and cost-effective.

3. Critical voices

After this objective summary of the EO, the critical voices also deserve attention.

First of all, the format. EOs are less durable than legislation because they can be undone by future administrations, and their effectiveness depends largely on the cooperation of technology companies. Unless Congress can find common ground on this matter, Biden’s order will remain the only AI law in the US.[4] Beyond that, skeptics say, this order is not a significant leap forward but rather a continuation of ongoing government efforts in this domain.[5]

On the other hand, in terms of content, for those skeptical of regulation the EO serves as a classic illustration of government rules going too far:[6] it is a comprehensive directive spanning the entire government that initiates the regulation of a new technology. The EO lacks a thorough investigation of AI issues and solutions, assuming certain risk metrics without supporting evidence (from across the Atlantic, this concern seems rather exaggerated, given that the draft AI Act explicitly prohibits certain AI practices, while the EO is gentler). It requires developers of advanced AI models to share data generated during testing, which could hinder research and hand a strategic edge to non-compliant businesses. The data-sharing rules may also be legally questionable, as they stretch the Defense Production Act. Furthermore, the order imposes reporting requirements on US cloud infrastructure providers, potentially burdening them and discouraging foreign collaboration with US businesses.

 

[1] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[2] https://cepa.org/article/the-quiet-us-revolution-in-ai-regulation/

[3] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

[4] https://www.axios.com/newsletters/axios-ai-plus-bd699592-3700-482c-a4be-6d7026ac20b7.html?chunk=0%26utm_term%3Dlisocialshare%23story0

[5] https://cepa.org/article/leap-forward-ai-executive-order-pleases-optimists-and-pessimists/

[6] https://www.forbes.com/sites/jamesbroughel/2023/10/31/bidens-new-ai-executive-order-is-regulation-run-amok/?sh=42c8bd7a71c1