
Biden’s new AI executive order is lots of talk, not much action

Yesterday the Biden administration issued "a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI)." As best I can tell, it includes basically one item of any importance: NIST will develop standards for red-team testing of AI software that affects "critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks." This testing is mandatory:

Companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.

Here is my translation of the rest of the executive order:

Establish standards and best practices blah, blah, blah.
Establish an advanced cybersecurity program blah, blah, blah.
National Security Memorandum blah, blah, blah.
Prioritize federal support blah, blah, blah.
Strengthen blah, blah, blah.
Evaluate blah, blah, blah.
Provide clear guidance blah, blah, blah.
Address blah, blah, blah.
Ensure fairness blah, blah, blah.
Advance the responsible use blah, blah, blah.
Shape AI’s potential blah, blah, blah.
Develop principles and best practices blah, blah, blah.
Produce a report blah, blah, blah.
Catalyze AI research blah, blah, blah.
Promote a fair, open, and competitive blah, blah, blah.
Use existing authorities blah, blah, blah.
Expand bilateral, multilateral blah, blah, blah.
Accelerate development of vital AI standards blah, blah, blah.
Promote the safe, responsible blah, blah, blah.
Issue guidance blah, blah, blah.
Help agencies blah, blah, blah.
Accelerate blah, blah, blah.

Am I being a wee bit too cynical here? Maybe. But aside from the national security stuff, the entire EO strikes me as little more than a laundry list of aspirational wishes that will produce lots of bureaucratic report writing and recommendation making (guide, shape, develop, strengthen, evaluate, promote, advance, etc.) but not much more.

In fairness, EOs have limited authority outside the federal government itself, so the impact of the AI executive order has built-in restraints. And the blizzard of upcoming reports about AI, coming from every agency imaginable, could eventually turn into real rulemaking,¹ if only through bureaucratic inertia. Even here, though, these effects will be mostly about how to use AI, not regulate it.

Full disclosure: I haven't read the EO itself, only the fact sheet. But I've also read a bunch of commentary from experts who have read it. It's certainly possible I've missed something big, but overall the EO seems almost entirely focused on investigating and thinking about the use of AI. There's not much in the way of mandatory regulation for anyone to be very concerned about.²

¹"Eventually" because real rules take years of hearings, public comments, and industry input before they can take effect.

²Though perhaps plenty to be concerned about if you think the government should adopt strong AI rules. But that would take congressional action. No mere EO could accomplish very much along these lines.

16 thoughts on “Biden’s new AI executive order is lots of talk, not much action”

  1. Joseph Harbin

    "Biden’s new AI executive order is lots of talk, not much action."

    Maybe they shouldn't have used ChatGPT to write the new EO.

    1. Steve_OH

      ChatGPT's response to "Respond to, 'Maybe they shouldn't have used ChatGPT to write the new Executive Order about AI.'"

      It's understandable that some might have concerns about using ChatGPT to write such an important document. While AI can assist in generating content, ultimately, human oversight and expertise are crucial in crafting policies that have far-reaching implications. It's essential to ensure that any Executive Order on AI reflects a thorough understanding of the technology, its potential impact, and addresses the broader societal and ethical considerations. Collaborative efforts between AI systems and human experts can lead to more well-informed and balanced policies.

  2. KJK

    I assume he didn't sign the Executive Order on camera, with a gaggle of grinning sycophants behind him, and then show the signed Order to the camera so that the audience (Faux News viewers) could witness the awesome power of his manly signature.

    1. jte21

      I noticed Biden also doesn't hold cabinet meetings where he goes around the table and makes everyone tell him how awesome he is. Because he's a grown-ass man and not a PAB.

  3. jte21

    Speaking of AI, my teen made me watch M3GAN last night. Aside from the hugely original idea of someone creating an AI robot only to have it turn on them (hoodathunkit?), why the fuck give it superhuman strength and speed? It's a doll. It should only be able to lift, at the most, a book. Why can it crush steel and throw people across a room? Why? So many unanswered questions...

    1. aldoushickman

      This is of course the unfun--but realistic--issue with all the hollywood "robots are taking over!" scenarios: any piece of tech we build is going to generally be far, far less robust than humans, for two reasons:

      1) we don't overengineer things. Why, as you say, make a robot doll superstrong? That just increases overhead.

      2) robots can't heal themselves. A sentient car is tons of steel and glass and plastic, far faster and stronger than any human. But it will degrade pretty quickly (esp. if humans have no reason to repair it). Ask yourself what technology with moving parts has a useful life of 80+ years.

    2. ProgressOne

      If future personal robots also are meant to protect you and others you care about, high strength might be needed. Also, people like their Tesla EVs to have incredible power to accelerate, even if it's far more than what's needed. The same thing might happen with robots. Extra power just in case the need arises. Just guessing at the future of course.

  4. tango

    All things considered, what SHOULD be going into an AI executive order? Arguably, the point of this is to get the thoughts together to figure out what we should be doing rather than actually doing it quite yet?

    1. golack

      Bingo.
      We're not at the point where we could write useful legislation. Even basic questions such as what constitutes AI are not really answered yet.

  5. different_name

    There are different beneficiaries of this, so whether it is "action" or not depends on where you sit.

    If you are Sam Altman, you're pretty happy with it. Of course he wants legislation to pick OpenAI as the winner, but this is a great first step in that direction - the USG is agreeing there are natsec and civil society risks to AI. Expect him and others who want AI research siloed in large, profitable corporations to start building on this to build higher and higher moats.

    The first big goal is to get rid of open source AI dev. If there's no profit motive, it is much less controllable. (Plus, hippies giving it away are stealing our profit!) So look for them to push safety compliance and testing regimes that you'd need a passel of lawyers and a department of QA engineers to comply with.

    Me, I'm going to continue playing with my uncensored 70B LLama2 until they pry it from my cold, dead computer.

  6. clearnetwork

    The critique of Biden's AI executive order is a common sentiment when it comes to government initiatives. The focus on developing standards for AI software, especially in critical infrastructure and cybersecurity, is a step in the right direction.

    However, as the article suggests, the real challenge lies in moving from aspirational language to concrete, enforceable regulations. For those in the cybersecurity field, this highlights the ongoing need for proactive measures. Services like https://www.clearnetwork.com/cyber-threat-intelligence can provide organizations with actionable insights to anticipate and mitigate cyber risks, which is especially crucial when policy lags behind the pace of technological change.
