Inside the U.S. Government’s First AI Executive Order: 8 Pillars of Action
A first-of-its-kind executive order issued October 2023 already has the voluntary commitment of several major technology companies — here’s why.
Just last week, President Joe Biden signed a first-of-its-kind AI executive order establishing new safety standards, security precautions, and guidelines for the use of artificial intelligence.
Deemed “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust” in an official White House address, the AI executive order already has 15 major American tech companies agreeing to implement voluntary AI safety requirements. “It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks,” explains White House Deputy Chief of Staff Bruce Reed.
And that next step is predicted to have a sweeping impact, from new safety obligations for individual developers to new federal standards for labeling AI-generated content. According to the administration, some safety and security provisions of the order carry a 90-day turnaround, while other developments could take place within the year ahead. Regardless, it’s a clear indication of the federal government’s prioritization of AI governance. And it couldn’t be more timely: Gary Gensler, chair of the U.S. Securities and Exchange Commission, recently told the Financial Times that a financial crisis was “nearly unavoidable” within a decade if regulators failed to take action on managing AI risks now.
The 8 components of Biden’s AI Executive Order
In its official Executive Order Fact Sheet, the White House breaks the key components of the executive order into eight parts:
1. New Standards for AI Safety and Security, which requires developers to share safety test results with the government, lays out a new framework for “red-team testing” before release, establishes new standards for biological materials screening, and more.
2. Protecting Americans’ Privacy by calling on Congress to pass bipartisan privacy legislation, prioritizing federal support for the development of privacy-preserving techniques, strengthening data privacy research, evaluating data collection techniques, and more.
3. Advancing Equity and Civil Rights by providing guidance to landlords and federal contractors on how to ensure AI algorithms aren’t being used to discriminate. This also involves establishing training and technical assistance on addressing algorithmic discrimination and creating best practices on the appropriate role of AI in the criminal justice system (like sentencing, surveillance, forecasting, risk assessments, analysis, and more).
4. Standing Up for Consumers, Patients, and Students by advancing the responsible use of AI in healthcare and pharmaceuticals, establishing a safety program to receive reports of unsafe practices, and creating new AI-enabled educational resources and best practices.
5. Supporting Workers by establishing principles for addressing the impact of AI on workers (like job displacement, equity, and data privacy), producing a report on the potential labor market implications of AI, and analyzing the ways the federal government could better support workers affected by labor market disruptions.
6. Promoting Innovation and Competition by launching a pilot National AI Research Resource tool, expanding grants for AI research in areas such as climate change, and modernizing visa criteria, interviews, and reviews for highly skilled immigrants with key expertise looking to stay in the U.S.
7. Advancing American Leadership Abroad, which includes expanding global stakeholder engagement and collaboration, accelerating the development and implementation of international AI standards, and supporting the deployment of AI abroad to address global challenges.
8. Ensuring Responsible and Effective Government Use of AI, which involves developing guidance for federal agencies’ use and procurement of AI and accelerating the government’s hiring of workers skilled in the field.
A step forward for safe, secure, and trustworthy AI
The Administration has already begun consulting and working with Congress, a variety of U.S. agencies, and international allies and partners, including Australia, Canada, and the European Union, to outline new frameworks and procedures for AI governance.
A meeting back in July 2023 secured voluntary commitments from several top AI companies (including Microsoft, Amazon, and Meta), such as developing ways for users to identify AI-generated content, allowing third parties to test for system vulnerabilities, and reporting the limitations of their technology.
Flash forward to today: the U.S. was present at Britain’s AI Safety Summit, where British Prime Minister Rishi Sunak and his government presented “The Bletchley Declaration.” Now signed by representatives from the 28 countries attending the Summit (including the U.S.), the declaration acknowledges the dangers and responsibilities of managing AI systems. A follow-up meeting is scheduled to be held in six months in South Korea, with another in France six months after that.
What’s up next: the EU AI Act, a global response, and more
This executive order is not the first step toward governing artificial intelligence, and it won’t be the last. In fact, major moves are already being made to pass the EU AI Act, one of the world’s first pieces of legislation on AI governance, which features a blanket ban on police use of public facial recognition technology and aims to set a global standard for the social, ethical, and economic governance of AI-enabled technology.
While the law is not expected to come into effect until 2026 at the earliest, countries like Switzerland are already urging businesses to proactively assess their risks, update their model inventories, and revisit their model management strategies to prepare.
More resources you may enjoy:
- Secure Our World: 4 Work-Related Security Best Practices for Cybersecurity Awareness Month
- Shadow IT, phishing, malware, oh my! The risks secretly haunting your organization
- Aligning your cyber risk management program with your company’s bottom line
- The cyber attack on MGM Resorts: what you need to know (and what it means for your risk management strategy)
Elevate your GRC program today!
Reach out to our team with any questions, schedule a demo, or learn more about Mitratech’s GRC solutions.