
Yes, AI will likely be selectively banned in the future

Tyler Cowen points today to an essay by Dean Ball about regulation of AI. Basically, Ball is afraid of endless upward ratcheting. A seemingly limited law—like California's, which proposes to regulate AI models that could produce WMDs and similar catastrophes—is likely to grow over time:

Let’s say that many parents start choosing to homeschool their children using AI, or send their kids to private schools that use AI to reduce the cost of education. Already, in some states, public school enrollment is declining, and some schools are even being closed. Some employees of the public school system will inevitably be let go. In most states, California included, public teachers’ unions are among the most powerful political actors, so we can reasonably assume that even the threat of this would be considered a five-alarm fire by many within the state’s political class.

....So perhaps you have an incentive, guided by legislators, the teachers’ unions, and other political actors, to take a look at this issue. They have many questions: are the models being used to educate children biased in some way? Do they comply with state curricular standards? What if a child asks the model how to make a bomb, or how to find adult content online?

Ball looks at this from a public choice framework: what are the regulators incentivized to do? Regulate! So they'll always be looking around for new stuff to tackle.

That's fine, but I don't think you need to bother with this framework. I know that most people still don't believe this, but AI is going to put lots of people out of work. Lots and lots. And when that happens, one of the responses is certain to be government bans on AIs performing certain tasks. After all, governments already do this, protecting favored industries with tariffs or licensing requirements or whatnot. How hard would it be to mandate the continued use of human doctors and human lawyers even if someday they aren't necessary? Those folks have more than enough political clout to get it done.

On the other hand, taxi drivers, say, don't have a lot of political clout. So driverless cars might well take over their jobs with no one willing to do anything about it. Sorry about that.

Any way you look at it, though, there's someday going to be lots of pressure to preserve jobs by regulating robots and AI. Maybe in ten years, maybe in five years, maybe tomorrow. But it's going to happen.

48 thoughts on "Yes, AI will likely be selectively banned in the future"

  1. Doctor Jay

    I am less worried about bomb making than I am about how the AIs are going to serve the interests of the people that control them. Which is to say: big money.

    I mean, the AI can slip in stuff all over the place. Stuff like "corporations are people, too" and "Taxation is theft" and who knows what else. It can bombard a child with lots of stories about how everything is terrible.

    And with traditional schools, there are people in the neighborhood who are answerable: teachers, administrators, school board.

    With an AI teacher, there will be a Help Line, where you talk to more AIs who don't say anything meaningful at all.

    1. realrobmac

      Legislators already put stuff like this into mandatory curricula. And believe it or not, there are already plenty of right-wing or libertarian teachers out there who insert their own interpretations of things when they teach kids. I don't think this is really a big concern. And I think teaching will be one of the last professions to be displaced by AI.

    2. aagghh96

      Politically, it’s going to be an absolute nightmare. And we are woefully unprepared to deal with what is already happening, let alone with what is coming. “Fake news” was just an introductory act.

  2. reino2

    It's weird to use teachers as an example. We don't have an excessive amount of power, and we're about the last group that will lose our jobs due to AI. The technology might be pretty much there with AI that can diagnose disease and write a legal brief, but lead a 1st grade class? Puh-leeze!

    1. realrobmac

      This was my thought. How TF is generative AI going to replace teachers? What do people suppose teachers do in the classroom all day long? Just give long-winded and factually accurate answers to children's questions? We already have books and computers and Google, and we still need teachers. Is AI going to schedule meetings with kids' parents, keep an eye on kids in the classroom all day, keep discipline? Come up with creative classroom experiences on its own? Good grief.

      1. tango

        I think you all are underrating the AI of the future. What is now clunky will later be smooth. AI will have the ability to be an individualized tutor who will know each child's strengths and weaknesses and remember everything that they learned year after year. AI will be able to draw on teaching and learning techniques honed over its experience with millions of other children.

        I suspect more affluent families will start with AI learning assistants and things will escalate from there. While I am not sure that they will close down schools, I suspect that there will be a lot fewer teachers needed and we will get some sort of hybrid situation.

        I am afraid AI is coming for a lot of our jobs, more a question of when than if.

      2. Jasper_in_Boston

        Is AI going to schedule meetings with kids' parents, keep an eye on kids in the classroom all day, keep discipline? Come up with creative classroom experiences on its own?

        All of the above, eventually, yes, if we let it. But as Kevin points out, we may not let it.

  3. Justin

    AI and hackers are already wrecking modern life. Come on, now. This technology will be used to impoverish us. It's a tool of extortion and warfare. It's already happening. Regulate all you want. The hackers and thieves already have what they need. And they will get even more dangerous. We're done.

    Flipping car dealers hacked! Hospitals. Payment systems. They don’t need to take our jobs. They’ll just take whatever they want by extortion. Good grief.

  4. lower-case

    taxi drivers, say, don't have a lot of political clout

    republicans have spent decades dismantling protections for labor; if a bunch of investors wanna pool their talent and set prices for their output there's a whole stable of judges on the 5th circuit who will sacrifice their own children to protect those investors

    but if 8 people at the local mcdonalds try to do that, the 5th circuit will grind their bones for bread

    1. lower-case

      bloomberg

      Citigroup Inc. said artificial intelligence is likely to displace more jobs across the banking industry than in any other sector as the technology is poised to upend consumer finance and make workers more productive.

  5. D_Ohrk_E1

    Thinking about this a bit more, as Marques Brownlee was asking the other day, is AI a product or a feature? If it leans more towards a product, then yeah, it'll result in tons of traditional jobs being taken away. But if it's more of a feature, then those jobs will still exist but productivity will increase, to the point that fewer people will be hired in the same jobs.

    Or maybe it's a phased thing where AI is mostly a feature for the next few decades until a breakthrough makes it a product, specifically when robotic bodies are merged with sentient programming limited by hardcoded laws.

  6. realrobmac

    AI and driverless cars are not the same thing. Not even close.

    And you neglect another possibility. If some sort of AI technology truly does start taking away massive numbers of jobs there is another way to prevent mass poverty. Tax the owners of the AI (and other rich folks) and redistribute the wealth so that everyone benefits.

      1. KenSchulz

        Not true as a syllogism, but descriptively sometimes true and sometimes not. For centuries we’ve had machines that can do things that formerly required human skill; but we don’t say they embody ‘artificial skill’ — instead, characteristics like precision, repeatability, feedback enable their performance.

    1. jdubs

      The term AI has morphed into this weird, undefinable mashup term. It means very different things to different people, and the proponents/marketers promote the term to mean literally anything you want it to mean.
      Software, hardware, automation, robots, process improvements, anything that runs on electricity, anything that feels futury, anything you can imagine... it's all AI now.

      There are literally no limits; it's now a synonym for magic for many advocates and marketing teams.

  7. FrankM

    Automation has been supposed to destroy jobs for...wait for it...CENTURIES. Ever since the beginning of the industrial revolution. To give one well-known example:

    At the beginning of the 19th century, most clothes were made by hand. Automation in the clothes-making industry was supposed to destroy jobs in the industry. By the close of the 19th century there were MORE people employed in the industry than at the beginning. What happened is simple: automation caused prices to fall, leading to people purchasing most of their clothing instead of making it themselves. Know anyone who makes all their own clothes anymore? Increased demand -> more jobs, even with automation.

    Please explain why this time is different.

    1. pipecock

      How were those times not what is being described?

      Automation takes away decently compensated "skilled" work, forcing ppl to find some other under-compensated way of working.

      The fear exists thanks to it happening every single time so far.

      How you fail to see this is the real problem with your post. You’re just fucking stupid I guess.

      1. Jasper_in_Boston

        Automation takes away decently compensated "skilled" work, forcing ppl to find some other under-compensated way of working.

        Automation is tough on workers, yes. And Kevin's right that developments in AI will take away jobs. Surely millions. But that's a very different animal from "the creation of new jobs in new sectors will break down." We've always seen job destruction. We just haven't seen total employment shrink on net, because of emerging sectors.

        It's possible this time will be different. Maybe! But so far there's no evidence of this. Employment levels and wages in the US have both been robust, and if anything have strengthened since the arrival of generative AI.

  8. jrmichener

    I expect a lot of job loss due to tools that improve efficiency greatly, allowing a smaller number of workers to accomplish far more. The current state of LLMs allows knowledge workers with domain knowledge to accomplish tasks that previously would have required substantial programming support. Replacement will probably come in time, but improved efficiency will allow substantial headcount reductions in some areas.

  9. cephalopod

    I think one of the biggest battles will be over educating children at all. Will children need to learn to read or count? Can't a cell phone camera and AI evaluate and explain everything via pictures and audio in a way that makes even basic academic skills superfluous? A roofer today may need to be able to read the text on the outside of the box of roofing nails, so they pick the right ones, but tomorrow's roofers won't!

    1. pipecock

      It’s funny to me that ppl think anybody is “educated” now. I interact with the general public on a day to day basis. I’d guess their average IQ to be about 70. Bad math ability, abysmal reading comprehension (in fact I now suspect that the true level of illiteracy is at least an order of magnitude greater than reported), total lack of ability to think quickly about any subject with any efficiency.

      The average person is an imbecile already.

      1. geordie

        By definition the median IQ is 100; however, just how dumb that is would be fairly surprising to most intelligent people. Also, you conflated education (knowledge) with intelligence. In my experience it is generalized knowledge that is most lacking these days. That is at least somewhat addressable, whereas intelligence is much less amenable to change. In areas like math I can see AI instruction making a big difference. History, on the other hand, isn't going to be addressable with AI.

  10. Goosedat

    Joanna Maciejewska: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”

  11. Justin

    I can see why AI is so useful. I mean… the possibilities are endless and oh so useful!

    “With an embedded AI, Amazon expects Alexa customers will ask it for shopping advice like which gloves and hat to purchase for a mountain climbing trip, the people said, similar to a text-based service on its website known as Rufus that Amazon rolled out earlier this year.”

    https://www.reuters.com/technology/amazon-mulls-5-10-monthly-price-tag-unprofitable-alexa-service-ai-revamp-2024-06-21/

    It’s worth millions!

    Please… take their jobs.

    1. KenSchulz

      Amazon expects Alexa customers will ask it [an embedded AI] for shopping advice like which gloves and hat to purchase for a mountain climbing trip

      If you’re asking Amazon about mountain-climbing gear, you’re in for some serious trouble.

  12. kendouble

    There’s another concern about AI. Where exactly will the tech bros be obtaining the clean energy required to power their massive data centres? Microsoft is, in an act of desperation, now funding fusion research, a total Hail Mary if ever there was one. If we want to rein them in, restrict the amount of power they can use. Make them prioritise.

    1. illilillili

      Wind, solar, geothermal. When siting a data center, you look for:
      1) Is it close to cheap clean energy?
      2) Is it close to fiber?
      3) Is it reasonably close to customers?

      In that order. Especially for training AI, being close network-latency-wise to customers is less important.

      The Cloud providers regularly sign power purchase agreements to ensure that the clean energy they want gets built.
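
      To make that priority ordering concrete, here's a minimal sketch of ranking candidate sites; the weights and example locations are hypothetical and purely illustrative, not anything a real provider publishes:

      # Minimal sketch: rank candidate data-center sites by the criteria above,
      # weighted in descending priority (energy, then fiber, then customer proximity).
      # All names and numbers are made up for illustration.
      WEIGHTS = {"cheap_clean_energy": 0.6, "fiber": 0.3, "customer_proximity": 0.1}

      SITES = [
          {"name": "Site A", "cheap_clean_energy": 0.9, "fiber": 0.7, "customer_proximity": 0.3},
          {"name": "Site B", "cheap_clean_energy": 0.5, "fiber": 0.9, "customer_proximity": 0.9},
      ]

      def score(site):
          """Weighted sum of criterion scores, each normalized to [0, 1]."""
          return sum(WEIGHTS[k] * site[k] for k in WEIGHTS)

      # Highest-scoring site first; for a training cluster, energy dominates.
      for s in sorted(SITES, key=score, reverse=True):
          print(f"{s['name']}: {score(s):.2f}")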

      1. ColBatGuano

        So all that extra energy required, above and beyond what is currently forecast, is just going to miraculously appear? Will the Cloud providers pay for that infrastructure or expect the public to subsidize it?

  13. illilillili

    "The Net Interprets Censorship As Damage and Routes Around It"
    Banning AIs from acting as doctors doesn't prevent other actors from creating AIs that are better doctors, nor prevent people from using them. Someone will set up shop in Timbuktu to train and distribute an AI doctor. People will figure out how to access and download that AI.

  14. illilillili

    A whole lot of commenters overlook history.

    Computer generated graphics increased the number of humans that work on animated movies.

    Sewing machines increased the number of humans that worked making clothes.

    Compilers for circuit layout increased the number of people who work on designing chips.

    1. Pittsburgh Mike

      I think you're exactly right. Tools that improve people's productivity often don't result in reductions in employment -- there are still high employment levels for software engineers and doctors for example. Even lawyers, who work in an industry where dead simple automation really can reduce the effort required to write a contract, still seem employable.

  15. johnbroughton2013

    Regarding "How hard would it be to mandate the continued use of human doctors ... even if someday they aren't necessary?

    A state could mandate this, but that doesn't mean consumers would be restricted to non-AI medicine. There is already a significant "medical tourism" business; now consider if Mexico or Canada offered far superior (AI-based) medicine at a far lower cost. Or if one or more states decided to offer "human-supervised" AI medicine, with doctors essentially double-checking what AIs say (and validating what AI robotics do). What prevents someone in California from doing telemedicine or, worst case, flying to another state?

    And if a doctor wants AI advice in a state that mandates "continued use" of human doctors, is that forbidden? AI interpretation of images, approved by a radiologist?

  16. johnbroughton2013

    Regarding "How hard would it be to mandate the continued use of ... human lawyers, even if someday they aren't necessary?

    How, exactly, would you prevent a human lawyer from using AI - install monitoring software? And if one lawyer can then do the work of four, that would mean fewer lawyers.

    Certainly, for cases and issues involving **federal** law, lawyers don't have to be in the same state as their clients - at least, not lawyers who don't have to appear in court. Ban "AI lawyers" in California, the (AI) work will go elsewhere.

    1. Pittsburgh Mike

      The legal profession is already highly automated. We hired Wilson Sonsini to do startup incorporation twice, in 1999 and 2008. The only differences between the two docs were the names of the founders and a couple of paragraphs clearly designed to eliminate some ambiguity that probably showed up in a legal case. They don't need generative AI, just MS Word 🙂

  17. Pittsburgh Mike

    Let's look at software dev, the area I know something about having worked in it since 1972.

    Already today, programming productivity tools (even pre-Copilot) allow someone writing in modern C++ or Python to easily be 100 times more productive than someone writing in Fortran or Assembly 50 years ago. Yet employment in software development must be 100X what it was in 1974.

    What Kevin and others fail to note is that a company in 2040 using whatever generative AI tools are available then will be competing with other companies in 2040 using those same tools. And if there's money to be made in an area, companies in 2040 will hire more programmers to work in that area, so they develop more products faster than their competitors. That's why Google, MSFT, and FB all keep hiring people. This *almost* feels like just another application of Ricardo's Law of Comparative Advantage.

    Conceivably this might change if the productivity of an unsupervised generative AI was higher than a human working with an AI, or if the unsupervised AI was enough cheaper than an AI + human. But I don't really believe either of those will become true.

    There are also a couple of land mines that might limit generative AI to roughly where it is now: copyright suits, and the related issue of running out of good-quality data with which to train AIs.

    All in all, I'm thinking generative AI will have almost no effect on employment.

  18. shapeofsociety

    People don't like having jobs; they like having income and social status. Work is a means to that end, not an end in itself.

    The prospect of fully automated luxury communism, if AI gets good enough to make it happen, will lead to everyone happily giving up their jobs to go spend their lives hiking in the woods, playing video games, watching TV, or pursuing whatever other hobbies strike their fancy. Nobody is going to want to keep their job, unless their job's purpose is entirely to give them status rather than income (such as political leaders) or creative self-expression (artists).
