Should AI be regulated?

A few days ago, OpenAI's CEO, Sam Altman, testified that artificial intelligence posed significant dangers and ought to be regulated. Yesterday, though, he released a short brief on the subject and, remarkably, it turns out he doesn't want anything regulated at all. Not in practice, anyway. For starters, here's what he affirmatively does want regulated:

In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. ... We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.

Altman is clear that by "superintelligence" he's talking about AI considerably beyond full general intelligence—i.e., full normal human intelligence—which is generally considered ten or fifteen years away all by itself. So call it the better part of 20 years before we get seriously close to superintelligence. That's all he proposes to regulate. It's pie in the sky.

Now here's what he affirmatively doesn't want to regulate:

We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits).

Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feel commensurate with other Internet technologies and society’s likely approaches seem appropriate.

In other words, all of OpenAI's current—and very profitable—work (ChatGPT and so forth) should be entirely unregulated. Only the stuff that's 20 years away deserves any attention.

This amounts to no effective regulation at all. Note that I'm not even saying Altman is wrong about this, since I have my doubts that AI will ever move slowly enough, narrowly enough, or be well-enough defined to permit any kind of effective regulation. Still, Altman is being disingenuous here. He's playing at being the reasonable man while, in practice, advocating for the Wild West.

43 thoughts on “Should AI be regulated?”

  1. kaleberg

    When a modern company says "regulate us", it is obviously just marketing hype. It's clear they don't want to be regulated. If they were serious, they'd propose model legislation. Try hitting them with proposals to regulate privacy and matters of attribution and listen to them squeal like stuck pigs. Propose making them vulnerable to detrimental reliance suits, and they'll wriggle like the worms they are.

    You can even smell the "effective altruism," which is sort of the inverse of the actual return-on-investment calculations that most businesses use. Sure, regulate us in a million years, but let us do whatever we want for the next 999,999. The value of doing so in the distant future is so high that any cost other people have to bear now is irrelevant. Tell them to redo their internal accounting so that future values are higher than present values and see where their market valuations go.

    1. Yehouda

      "When a modern company says "regulate us", it is obviously just marketing hype."

      Not always; sometimes what they're really trying to achieve is regulation of their competitors.

      1. CaliforniaUberAlles

        It's this.

        They are trying to kill off open source competition by making it so that large corporations are the only ones that can get the "licenses." This happens all over the place in different domains.

        The basic trick is to convince you that large matrices, which are essentially just a different kind of computer program, are Skynet so you freak out and demand regulation on other computer programs made up of big matrices.
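
        A minimal sketch of that idea, with made-up sizes and random weights (a real model is the same thing with vastly larger, learned matrices): a neural-network "layer" is nothing but a matrix multiply plus a simple nonlinearity.

            import numpy as np

            # Toy two-layer network. All of its "knowledge" is just the numbers
            # in the weight matrices W1 and W2 (random here; learned in a real model).
            rng = np.random.default_rng(0)
            W1 = rng.standard_normal((8, 16))   # input -> hidden weights
            W2 = rng.standard_normal((16, 4))   # hidden -> output weights

            def forward(x):
                hidden = np.maximum(0, x @ W1)  # matrix multiply + ReLU nonlinearity
                return hidden @ W2              # another matrix multiply

            print(forward(rng.standard_normal(8)))  # four output scores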

        But Moore's law is dead. We are not going to exponentially scale these things, sorry Kevin.

        Bill Gates is more likely correct that Amazon's store and Google's search are in trouble with these, but human beings are not, unless they're public figures who don't want deepfakes made of them.

        We are not even the proverbial "decade away" from AGI (or fusion, or the "battery revolution" or meaningful quantum computing).

        Because the current models are never 100% accurate, they will still always require human intervention. They also need humans to produce the content they train on. So we might need fewer copywriters and more good writers, fewer draftsmen and more artists.

        That's just normal technological change. Ask the guy whose job was replaced by a robot at the car factory in the 80s.

        It's not Skynet, we don't need to regulate it to lock in Facebook or Microsoft's advantage. If anything we need much much more open source """"AI"""".

        1. aldoushickman

          "But Moore's law is dead. We are not going to exponentially scale these things, sorry Kevin."

          Generally agree, but counterpoint: we know for a fact that it is possible to construct extremely powerful human-level intelligences out of cheap and readily-available materials running on a paltry ~100 watts, because nature demonstrated proof of concept several hundred thousand years ago. Indeed, there are billions of models walking around right now!

          So simply saying that Moore's "law" is dead, and implying that therefore there can't be thinking machines, ignores the reality that there are definitely matter-constructs that think, so it's certainly possible to build them.

          1. D_Ohrk_E1

            If 100B neurons with 10K synapses each, translated to 100B transistors with 10K circuits each...

            do you think that's powered by 100 watts?

            1. aldoushickman

              "If 100B neurons with 10K synapses each, translated to 100B transistors with 10K circuits each...

              do you think that's powered by 100 watts?"

              Of course not, if you insist on building it out of transistors. The point is that any debate about whether or not it's possible to build human-level AI out of matter has to exist in the shadow of the fact that all of us are human-level intelligences built out of matter that moreover use incredibly paltry amounts of energy to function. Reality proves that it is possible.
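
              A back-of-envelope check, using the thread's round numbers (100B neurons, 10K synapses each, 100 watts) plus two assumed figures, an average firing rate of ~10 Hz and one GPU's spec-sheet numbers; every value here is a rough assumption, not a measurement:

                  neurons = 100e9             # 100B neurons (thread's figure)
                  synapses_per_neuron = 10e3  # 10K synapses each (thread's figure)
                  firing_rate_hz = 10         # assumed average firing rate (rough)
                  brain_watts = 100           # thread's figure

                  brain_ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz  # ~1e16 synaptic events/s
                  brain_ops_per_joule = brain_ops_per_sec / brain_watts               # ~1e14 per joule

                  gpu_flops = 312e12          # NVIDIA A100 dense FP16 tensor throughput (spec sheet)
                  gpu_watts = 400             # A100 TDP
                  gpu_ops_per_joule = gpu_flops / gpu_watts                           # ~8e11 per joule

                  print(f"brain: ~{brain_ops_per_joule:.0e} synaptic ops per joule")
                  print(f"GPU:   ~{gpu_ops_per_joule:.0e} FLOPs per joule")
                  print(f"ratio: ~{brain_ops_per_joule / gpu_ops_per_joule:.0f}x in the brain's favor")

              On these crude assumptions the brain comes out roughly two orders of magnitude more energy-efficient per operation, which is the "reality proves it's possible" point, not a claim that transistors are the way to get there.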

          2. Citizen Lehew

            People banking on Moore's Law being dead as their firewall against AI advancement are in for a rude awakening.

            Huang's Law is alive and well.

          3. CAbornandbred

            It seems to me we are confusing extremely powerful human-level intelligence, aka humans, with superintelligent human-level computer intelligence.

            Yes, lots of powerful human intelligence walking around. But we're not superintelligent and never will be. We don't have access to the enormous amounts of information a computer will have.

            To me this is apples and oranges. Lots of apples around. No oranges yet.

            1. lawnorder

              Humans have access to the whole internet. There are differences between organic information processing and electronic information processing. Among them are the facts that organic information processing is slow and its data input/output has very limited bandwidth (see "slow"), but we have slow access to just as much information as any computer.

              1. CAbornandbred

                We do have access to all of the internet. What we don't personally have is massive computing power. But, we can use computing power out there. We need the happy medium of access to computing without having these computing machines gaining power over us.

            2. aldoushickman

              "It seems to me we are confusing extremely powerful human-level intelligence aka, humans, with super intelligent human level computer intelligence.

              . . .

              To me this is apples and oranges. Lots of apples around. No oranges yet."

              That's not what I'm saying. Look, all you (or me, or anybody else) are is a pile of matter in a complex arrangement. The fact that there are billions of such individual piles of matter walking around with human-level intelligence _strongly_ indicates that it is possible to construct such a pile of matter.

              Now, if you are quibbling about whether or not the existence of human-level intelligences counts as evidence that it's possible to make super-human intelligence, I'll admit, the evidence is not quite so overwhelming. But, given that there are things like lizards, dogs, chimpanzees, Donald Trump, you and me, and Isaac Newton, I'd say it's extremely likely that the universe allows the construction of minds greater than our own.

              Again, not saying that it is easy or that hyperventilating Silicon Valley types are right that complicated autocomplete programs are a hairsbreadth away from a tech singularity, just that anybody saying that superintelligent AIs are impossible is making a very dubious argument.

        2. lawnorder

          Humans are never 100% accurate. That is why there are already many situations in which humans are the subject of machine intervention.

  2. kahner

    in 20 years perhaps he can testify before the superintelligent AI congress that humans pose little risk, and there's no need to regulate (i.e., enslave or eradicate) us. just throw us in the matrix.

  3. royko

    Does he believe an AI company should be liable if their AI libels somebody (which already happens)? Is the AI company liable if the AI is used to commit fraud? If the AI discriminates, who gets sued?

    I would expect execs at AI companies to believe that government shouldn't have a say in how their products are developed, deployed, and used, but I think they'll also believe they shouldn't be held responsible for any direct harm caused by their AI models. Which is generally how things go with new technologies.

    I think any dreams of a "pause" in AI are silly -- no one's going to just stop development of a technology this potentially lucrative. And I'm not really sure how AI as it exists can even be regulated. But I do feel strongly that AI companies should be legally responsible for what their AI does. No hiding behind the "algorithm".

    1. aldoushickman

      Also, I think that there are a lot of questions about copyright/intellectual property that stem from the materials that AIs are trained on, and the accordant outputs they create.

    2. KenSchulz

      You said what I came here to say — don’t try to micromanage by regulation; rather, enforce strict liability, fair use of copyrighted material (using whole copyrighted works in training sets exceeds this), and other legal means to ensure that producers and users of ‘AI’ police themselves.

      1. aldoushickman

        That works if the only problem with AI is copyright infringement, and if you think that the possibility of zealous prosecutors is enough to keep AI developers doing things that benefit the public as a whole.

        I agree 100% that AI developers should be on the hook for blatantly consuming without license vast libraries of copyrighted material (and for producing software that can do things like "make me a cartoon in the style of XYZ cartoonist"), but saying "we have enough laws already--just enforce the ones on the books" is (a) pretty close to what gun nuts argue about gun laws, and (b) more importantly, misses the purpose and opportunity of regulating a new, disruptive, and powerful industry.

        Just because we have a federal law against unfair business practices (and we do! 15 U.S.C. § 45) doesn't mean that it's adequate to meet all new challenges.

  4. Yehouda

    Talking about "regulating AI" is not actually useful, because the term "AI" is too malleable to mean something concrete enough to be useful.

    What needs to be discussed are specific technologies and specific usages. Somewhat analogous to microbiology, where we don't regulate "microbiology". We regulate specific technologies and specific usages.

  5. aldoushickman

    "but superintelligence will require special treatment and coordination."

    Well sure. Skynet, Ultron, HAL 9000, and other artificial superintelligences are all things that we will have to be careful about and regulate appropriately if and when they exist, just like we do with genies, leprechauns, and magic monkey paws.

  6. KJK

    Unless and until an AI version of Skynet actually does something illegal, I don't see under what constitutionally valid law it would be subject to regulation.

    Of course there are now a whole shit load of laws being passed in many states that may not be constitutionally valid and the Christian Taliban SCOTUS majority has their own twisted ideas about the constitution.

    1. aldoushickman

      "Unless and until an AI version of Skynet actually does something illegal"

      AIs make unlicensed derivative works all the time, and I doubt very much that the libraries of material the AIs were trained on were licensed by their copyright holders for that purpose, either. So there is a very real question as to whether or not the current round of AIs are, in fact, doing something illegal, since copyright violation is of course against the law.

      1. KJK

        Then let the copyright holders who believe that their IP has been used in a commercial endeavor, without their consent or an applicable license, sue the publisher for damages.

        1. aldoushickman

          I'm sure that some will. Remember, ChatGPT has only been around for 6 months; the statute of limitations for copyright infringement suits is 3 years. There hasn't been a lot of time yet for cases to even occur, let alone be resolved in the courts.

          But maybe a broader (and better?) point is that your argument that "Unless and until an AI . . .does something illegal, I don't see under what constitutionally valid law it would be subject to regulation" is not only wrong factually, but also wrong in premise. Congress has constitutional authority to regulate interstate commerce, and software development--even of fancy AI software--most certainly falls within its purview. Congress doesn't have to wait for a crime to be committed before regulating something.

          (and, query what it would even mean for it to only be constitutional to regulate something *after* that something has done something illegal, since by definition, something that isn't regulated isn't illegal. That's like saying Congress can't write laws regulating insurance until somebody commits insurance fraud by I guess breaking one of those laws Congress hasn't written yet)

    2. lawnorder

      Before constitutional issues arise, Skynet has to be a legal person with "human" rights. The question of when a machine intelligence becomes a legal person has been the subject of much speculation, although without a tangible example to speculate about. Even if Skynet had rights, that doesn't exempt it from regulation. Humans are subject to regulation.

  7. Jim Carey

    The AI risk is not in artificial intelligence. The AI risk is in artificial ignorance: ignoring reality while paying attention to an artificial reality created in technology.

    We are Homo sapiens, which translates from Latin as "the wise hominid," not the intelligent hominid. For all we know, Neanderthals might have been more intelligent. We can say with certainty that they went extinct because they were "pre-wise."

    Wisdom is the willingness to set aside one's preconceived notions in favor of examining the evidence and risking the possibility that they may turn out to be wrong (aka "do unto others as you would have others do unto you").

    Ignorance is the willingness to jump to a self-serving conclusion. Example: does AI regulation serve my self-interest, and therefore does AI require regulation? If yes, then yes. If no, then no.

    Looking for evidence to defend and confirm that conclusion, while ignoring refuting evidence, is a slightly more difficult thought process, but not by much.

    1. lawnorder

      Neanderthals are not extinct. It appears that although they were a different "race" than H. Sapiens, they were not a different species; as far as we can tell, the two races interbred and blended and we still have Neanderthal genes.

      1. aldoushickman

        "Neanderthals are not extinct"

        They most certainly are.

        "It appears that although they were a different 'race' than H. Sapiens, they were not a different species"

        No, this is incorrect--they definitely were a different species with very different morphology than homo sapiens. Was there some gene flow between populations? Sure, but the same thing happens with a whole lot of fairly closely related species (grizzlies and polar bears, wolves and coyotes, blue and fin whales, etc.). Even donkeys and horses sometimes--albeit rarely--produce mules that can reproduce.

        "and we still have Neanderthal genes"

        Depends on who you're calling "we"--people of European and Asian descent have an estimated 1-4% of their DNA that could be Neanderthal, but nobody else does.

        1. lawnorder

          If two population groups can interbreed and produce fertile offspring, they are the same species. Wolves, coyotes, and domestic dogs are all the same species. Domestic cattle and American bison are the same species. Etc.

          1. aldoushickman

            No, that is ridiculously wrong. It's not whether they CAN interbreed, it's whether they DO in nature, with enough gene flow that the population in question can be considered one gene pool. I'll 100% concede that the definition of what is and is not a separate species is hazy at the edges (and that "species" is a convention we humans apply, and not necessarily a descriptor of base reality), but there absolutely is not a single species called "dog" that includes arctic foxes, st. bernards, and golden jackals,* for example.

            _________
            *Vulpes lagopus, Canis familiaris, and Canis aureus, respectively.

            1. lawnorder

              Taxonomy is sometimes inaccurate. That being said, foxes are NOT the same species as coyotes, wolves, and domestic dogs; they can't interbreed. I don't know about golden jackals; do they interbreed with domestic dogs and produce fertile offspring?

              If you define separate species by whether they do interbreed rather than whether they can, would you assert that the Australian Aborigines, New Zealand Maori, and American Indians were separate species from Eurasian and African Homo sapiens before the age of colonization? Those four population groups did not, in nature, have significant gene flow from one to another.

  8. tango

    Who are we kidding --- do any of you think that Congress will be able to come together and pass a bill that makes any sense on regulating AI? Can anyone think of any emerging technology that Congress has regulated in that way?

    Whenever something like this comes up, folks say we should pause and have a "national debate" and come up with "sensible limits." Never happens --- what happens is that companies push the tech as hard and fast as they can in the hope of making money. And that's what will happen here, for better or worse.

  9. painedumonde

    If Capital is at a disadvantage, it will ask government to place roadblocks so it can get past the competition and regain the advantage. IMO, this is what it's all about - open sourcing is diluting that advantage, and that's not to Capital's taste. If Capital has the advantage, it will demand regulation as a gatekeeper.

  10. samgamgee

    At least force companies to label instances where AI is involved. Specifically for end-users / customers.

  11. OwnedByTwoCats

    The hot thing in AI today is large Neural Networks (NNs), trained to model language by feeding them large amounts of human-written text. This kind of AI will work in the domain it has been trained in, and make pretty random outputs when dealing with inputs outside of its training domain. Neural Networks may synthesize the work of many experts, and so appear to be equal to a human expert in a lot of domains, but they won’t be able to go much beyond that.

    There’s an old quip that Artificial Intelligence is anything we don’t know how to do yet. Playing games that intelligent people played, like Chess and Go, was thought to be intelligence. Now it’s minimax optimization. Similarly, expert systems got carved out of AI in the 1980s, and then reached limitations. Neural Networks in general, and Large Language Models and transformers in particular, have the spotlight now.
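
    A minimal sketch of that minimax idea, with an invented toy game tree (real chess engines add alpha-beta pruning and a position evaluator): leaves are terminal scores, and the two players alternately pick the best and worst available child.

        def minimax(node, maximizing=True):
            # Leaves are terminal scores; internal nodes are lists of possible moves.
            if isinstance(node, (int, float)):
                return node
            values = [minimax(child, not maximizing) for child in node]
            return max(values) if maximizing else min(values)

        # Two moves for us, two replies for the opponent after each:
        tree = [[3, 5], [2, 9]]
        print(minimax(tree))  # 3 -- the best score we can guarantee against best play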

    Hans Moravec, in his 1999 book Robot, predicted four generations of robotics, each lasting about a decade, and having the intelligence of an insect (i.e. Roomba), a mouse, a monkey, and then a human. We should be toward the end of the mouse generation, and looking for robots with monkey intelligence starting in just a few years. I don’t think we’ve even gotten to the mouse stage yet, and the exponential growth that seemed to be the rule 24 years ago certainly has flattened out. Making predictions is hard, especially when they concern the future.

    What parts of “AI” need to be regulated? Copyright certainly needs to be specified. If you train your NN with my creation, am I entitled to monetization of your NN? Given that NNs need hundreds of thousands to hundreds of millions of items in their training sets, how does that work practically? What else?

