Understanding the White House's AI Executive Order: The Key Mandates and Release Timelines

Here's a detailed overview of the U.S. government's strategic roadmap for comprehensive AI integration and regulation across various areas.

Former General Counsel and Acting Secretary of the U.S. Department of Commerce Cameron Kerry said that the White House's October 30 Executive Order on AI might be the longest and most detailed in U.S. history.

“The EO amounts to full mobilization of the federal government around AI,” Kerry said in commentary for the Brookings Institution. “Almost every federal department or agency has some significant role.”

In a statement co-written by a number of its AI experts, the Stanford Institute for Human-Centered Artificial Intelligence agreed with Kerry's assessment, while also noting the challenge of executing an order of this magnitude.

“There is much to admire in this EO, but also much to do,” Stanford HAI wrote. “By our count, over 50 federal entities are named and around 150 distinct requirements (meaning actions, reports, guidance, rules, and policies) must be implemented, many of which face aggressive deadlines within the calendar year.”

As cliché as it sounds, nothing worth doing is ever easy. Accomplishing the goals of this order will be a challenge, but a worthy one: these are major steps toward realizing the safe, effective AI-enabled future we all want to see. Depending on the specifics of the mandates as they come to fruition, the AI EO could be hugely impactful across many areas of society.

Much of the EO focuses on the government’s use of AI across its breadth of offices, but parts of it will inform and shape business use and AI development, and it’s all worth understanding. Since the executive order spans the entire year and beyond, we’ve broken it up into a timeline so you can see what to prepare for at each stage. Timelines are always subject to change, so use this as a rough guideline.

By the End of 2023

Defining Dual-Use Foundation Model Testing and Sharing Results

Companies that develop AI models with both civilian and military applications must share information about how they train these models and the results of their testing with the government. The Secretary of Commerce will define the technical requirements for reporting, specifying which models and computing systems are subject to these rules.

To make sure these models are safe and accurate, experts called "red teams" test them to find any weaknesses or problems. They check if the models produce outputs that could be harmful or unfair. The National Institute of Standards and Technology will be responsible for setting national standards for these red-team tests and will work with the AI community to ensure safety and accuracy.

The EO also requires reporting the results of these tests for foundation models that use a certain level of computing power. It advises against releasing the model weights (the learned numerical parameters that determine how a model turns inputs into outputs) beyond the specified level and mandates reporting on how the weights are protected, physically and digitally, during training, ownership and use.

Streamline Visa Petitions for Non-U.S. Citizens To Work on AI

The government aims to expedite visa processing for noncitizens seeking to work on AI projects in the U.S. This includes increasing visa opportunities for experts in AI and other emerging technologies. Additionally, the Secretary of Labor will seek public input to identify possible updates to the list of Schedule A occupations: “immigrants of exceptional ability in the sciences or arts, including college and university teachers,” among others. Some of these updates could also facilitate green card approval for foreign workers. These measures are part of a broader effort to attract and retain technical talent in the country.

Civil Rights Office Recommendations on Reducing Bias

The Attorney General, in collaboration with federal civil rights offices, will take measures to prevent and address discrimination associated with AI. This includes coordinating with agencies to enforce existing federal laws, convening a meeting of civil rights office heads to discuss comprehensive strategies, and improving stakeholder engagement to raise awareness of potential discriminatory AI use. Additionally, the Attorney General will consider providing guidance and training to investigators and prosecutors at state, local, tribal and territorial levels regarding civil rights violations related to automated systems, including AI.

By the End of Q1 – March 2024

Public Report on Financial Institutions Managing AI-Specific Cybersecurity Risks

The Secretary of the Treasury is required to submit a public report outlining best practices for financial institutions to manage AI-specific cybersecurity risks. This directive follows remarks made by Federal Reserve Vice Chair for Supervision Michael Barr, who emphasized the need for banks to test their cybersecurity resiliency. The initiative is part of a broader effort to enhance the industry's ability to protect sensitive information and ensure the stability of the financial system. 

For Jasper customers in the finance industry: We’ll have a special update on this report as it is released. The report is likely to present findings, recommendations and provide a timeline for expected changes. We’ll break down all areas as those details come out. 

The EO provided the prompt to develop best practices but notably stopped short of specifics in some areas. Aaron Klein, senior fellow in Economic Studies at the Brookings Institution, put it this way:

“In a document as comprehensive as this EO, it is surprising that financial regulators are escaping further push by the White House to either incorporate AI or to guard against AI’s disrupting financial markets beyond cybercrime,” Klein wrote in a Brookings statement. “Given the recent failures of bank regulators to spot obvious errors at banks like Silicon Valley, the administration’s push to incorporate AI for good could have found a home in enhancing bank regulators whose reputations are still suffering after the spring’s debacle.”

Marking Government Content As Authentic

The Secretary of Commerce and Director of the Office of Management and Budget will develop guidance on digital content authentication and synthetic content detection measures. Watermarking was listed as a specific measure, but it's unclear yet what that will look like. To date, watermarking initiatives have not achieved the technical sophistication needed to prevent forgery or other errors. Additionally, AI detectors often produce inaccurate readings that can be harmful in themselves. More work will need to be done to reach reliable content authentication, which is why the EO calls for deeper study and recommendations.

Additionally, the Director of OMB, in consultation with various government officials, will issue guidance to agencies for labeling and authenticating official government digital content to strengthen public confidence in the integrity of that information. This is especially important as we head into an election cycle. With or without this authentication, consumers should develop a practice of giving content a critical eye and source-check as they read, view or listen to it.

New Rulings on Countries, Skills and Professionals Needed for Greater U.S. AI Investment

To attract and retain top talent in AI and other critical emerging technologies, the government is considering a series of measures. In one initiative, the Secretary of State will explore expanding the categories of nonimmigrants eligible for the domestic visa renewal program to include more academic research scholars and students in STEM fields.

A new program will identify and attract top AI talent from universities, research institutions and the private sector overseas. The program will inform overseas STEM talent about opportunities and resources for research and employment in the U.S., including visa options and potential expedited adjudication of their visa petitions and applications.

The Secretary of Homeland Security will also modernize immigration pathways for experts, startup founders and other noncitizens with expertise in AI and other tech areas. This includes modernizing the H-1B visa program, which allows employers to petition for highly educated foreign professionals to work in “specialty occupations” that require a bachelor's degree or its equivalent. It also adjusts rulemaking to streamline the process for noncitizens from AI and tech-heavy professional backgrounds to become permanent U.S. residents.

Department of Energy’s Report on New Electric Grid Infrastructure, Climate Change Mitigation, and More

The Department of Energy will launch several initiatives to leverage AI within its sphere of influence. 

It will release a report detailing the potential of AI to optimize planning, permitting, investment and operations for electric grid infrastructure. It will also explore how AI can contribute to the provision of clean, affordable, reliable, resilient and secure electric power for all Americans.

The DOE will develop tools that facilitate the building of foundation models that streamline permitting and environmental reviews while improving environmental and social outcomes. These tools will help AI companies navigate regulatory processes more efficiently while also ensuring the environment is protected. 

The organization will also partner with private sector organizations, academia and other relevant entities to develop AI tools aimed at mitigating climate change risks. Additional partnerships will also be explored to support new applications in science and energy and support national security.

Housing Department Report on AI in Housing Access and Loans

The Department of Housing and Urban Development will align with the Consumer Financial Protection Bureau to address the potential for bias in automated tenant screening systems. This includes examining how the use of data like criminal records, eviction records and credit info can lead to biased decisions that violate federal laws like the Fair Housing Act and the Fair Credit Reporting Act.

The guidance will also clarify how the Fair Housing Act, the Consumer Financial Protection Act of 2010, and the Equal Credit Opportunity Act apply to housing, credit, and other real estate-related transactions. This encompasses algorithmic advertising delivery systems, ensuring that housing-related advertising doesn’t violate Federal fair housing and lending laws.

Report on AI Use in Government Operations and Bias Prevention

The Director of OMB will issue guidance to many federal agencies on how they can strengthen and appropriately use AI in the government. This guidance will require that each agency designate a Chief AI Officer. That officer will coordinate their agency's use of AI, promote AI innovation and manage associated risks.

Additionally, the guidance will outline minimum risk-management practices for government uses of AI that impact people's rights or safety, incorporating practices from OSTP's Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework.

By the End of Q2 – July 2024

Industry Standards for Developing Models and AI Capabilities, Including Red-Teaming Standards

Government agencies will develop guidelines to promote industry standards for developing models and AI capabilities. This includes red-teaming standards: structured adversarial testing intended to surface flaws, vulnerabilities and unsafe behavior in AI systems before they are deployed.

An initiative will be launched to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on potential harm in areas like cybersecurity and biosecurity. Appropriate guidelines and procedures will be established to enable AI developers, especially those building dual-use foundation models, to conduct AI red-teaming tests for safe and secure deployment of trustworthy systems. The initiative will also provide adequate testing environments to verify that these safety and security requirements are met.

Report on Standards for Labeling Synthetic Content, Authenticating Content and Preventing AI Child Sexual Abuse Material

The Department of Commerce will develop a report on standards for the AI industry at large. This includes guidance on tools and methods for labeling synthetic content, authenticating content, tracking the source of content and preventing generative AI from producing child sexual abuse material or non-consensual imagery of real people.

Stanford HAI offered its perspective on this portion of the EO in a recent statement co-authored by a number of AI experts in the organization.

“The EO tasks Commerce to produce a report surveying the space of techniques for content provenance, watermarking, and other detection approaches,” the AI organization wrote. “Such research is vital, given existing legislative proposals may be premature in mandating watermarking. Namely, watermarking methods are quite nascent, especially for language models, lacking the required technical and institutional feasibility. However, we believe action will be needed, with recent announcements across the pond, especially with the growing concerns of AI-generated CSAM that are highlighted in the EO. Otherwise, we risk regulating with standards that are technically infeasible or simply do not exist.”

For Jasper customers: When this report is released, we will publish a thorough explainer of the recommendations so you can be informed and equipped to make updates to your AI strategy.

By the End of Q3 – October 2024

Justice Department Report Addressing the Use of AI in the Criminal Justice System

The Justice Department will submit a comprehensive report to the President, addressing the use of AI in the criminal justice system. The report will cover many aspects: sentencing, parole, bail, risk assessments, police surveillance, crime forecasting, prison management tools and forensic analysis. 

It will identify areas where AI can enhance law enforcement efficiency and accuracy while protecting privacy, civil rights and civil liberties. The report will also recommend best practices for law enforcement agencies, like safeguards and appropriate use limits for AI. The goal is to ensure equitable treatment, fair justice and improved law enforcement efficiency through the responsible use of AI.

Turning the Pages on the Calendar

That was a lot. Again, this is arguably the most robust Executive Order in U.S. history, and it shows the government’s proactive dedication to preparing the country and American citizens for the long-term, widespread impacts of AI across society.

“The EO is also remarkably specific,” Stanford HAI wrote. “It sets ambitious deadlines for the vast majority of requirements, with roughly one-fifth of deadlines falling within 90 days and over 90% falling within a year. Lurking in the background is, of course, the uncertainty about the next presidential administration, as EOs can be revoked with the stroke of a pen.”

Hopefully, the goals of this Executive Order are not only carried out but accomplished in a way that promotes the safe and reliable use of the technology by all. We’re thankful for the government’s push toward a better AI future and we’ll be interested to see how these mandates materialize. Finally, we can assure you that as these orders are executed, we’ll continue doing everything we can to keep you informed and help you understand them in the context of your work. We’ve got your backs.
