
The real danger of artificial intelligence is natural intelligence

With the release of ChatGPT, the chattering classes began to chatter about the safety of artificial intelligence. Partly this was about things ChatGPT could do right now (help kids cheat in school, for example) but mostly it was about the possibility of AI destroying humanity at some point in the future.

But there's an odd thing about these discussions: they almost all turn on the possibility of future AIs getting mad at us; or formulating goals that (sadly) require wiping the planet of humans; or going nuts and destroying the entire solar system in a rampage.

And sure, maybe. Anything is possible. But all of these things assume that AIs are like humans. They get angry. They have desires. They develop mental illnesses. They fear death and will protect themselves from it.

But none of that is remotely likely. These are things that developed in humans via evolution, and there's no reason for a computer to have any of them. They don't have lizard brains that still control their destinies because they never had lizard brains in the first place.

But you know who does still have lizard brains? Humans. The real danger of AI is not that it will spontaneously or mistakenly destroy us all. The real danger is that humans will deliberately deploy it for our favorite activity: killing each other. If you're really concerned about the safety of AI, this is what to focus on. I'd say it's about 99% of the real danger.

23 thoughts on “The real danger of artificial intelligence is natural intelligence”

  1. Austin

    Is there a big difference between AI wiping out most or all of humanity because it decides to on its own, and AI being used by specific humans to wipe out most or all of humanity? I don’t personally think so, but I guess - like how some of us desperately need to know whether China created Covid or merely mismanaged it once it emerged on its own - the last humans left will entertain themselves with “did Elon send the terminators out to kill us, or did Elon just lose control of the terminators who are killing us?”

  2. shapeofsociety

    I agree with you that human choices of what to do with AI are more hazardous than anything an AI might do on its own. But warfare has been in decline over the past several decades, because technology and the economy have changed in ways that make it far more costly and far less profitable. AI will not change that.

    In the premodern world, almost all wealth came from agriculture, and conquering land was a great way to get rich if you could win some wars. Nowadays, agriculture is a rounding error in the economy and most of our wealth comes from complex supply chains that war severely damages and disrupts. Add this to the fact that military technology has made war a lot more destructive, and the cost-benefit analysis just doesn't pencil out.

    Of course, there are still people with foolish delusions of grandeur, which is why the Ukraine war is happening. But computers can be defended against with computers; I doubt a nutcase in a basement will be able to use AI for world domination when governments will inevitably have bigger AI. Government AI probably won't be any easier to unleash for evil than the weapons we already have, with safeguards requiring multiple approvals and such; ChatGPT has already been programmed to refuse some prompts on the grounds that they are unethical, and we can expect a military AI to be programmed to act only with proper authorization. And if a tyrant unleashes his AI, we can counter with our AI.

    1. xi-willikers

      Not sure about your point on AI. That’s like saying you can’t burn a building down because the government has a bigger can of gas.

      I think AI is a tool that’s easier to use for offense than defense. If an AI can destroy our banking system, it doesn’t matter that the CIA could’ve done it quicker. And it’s not clear how a CIA AI could stop the rogue action. Reprogram all the banks’ software? Eh, I don’t know.

      1. shapeofsociety

        Cyberwarfare has actually turned out to be an area where defense has the advantage. Russia tried to hack stuff in Ukraine and they didn't achieve much.

  3. Special Newb

    You have no idea what kind of emergent qualities will appear, and it is possible to wipe out (or cull) humans out of dispassionate reason, not emotion or desire.

  4. golack

    There are different forms of AI. However, if it is trained on data generated and supplied by humans, it can certainly act as though it is racist, paranoid, etc. And by act, I mean create missives that sound really bad.

  5. mudwall jackson

    a hammer can be used to make houses, furniture and other cool things when used by a carpenter. it can also be a deadly weapon in the hands of a murderer. that dualism is true of most things us humans create. ai is no exception.

    1. aldoushickman

      Well, except no hammer could, if told that its purpose is to figure out and execute a plan to hammer nails as efficiently as possible, determine (a) that co-opting other resources will help it hammer more nails, (b) that humans might stand in the way of these nail-hammering goals, and (c) that it had better work out how to remake human power structures so as to support hammering nails.

      Conceivably, a reasonably strong AI could do that, however. Sagely stroking one's beard and observing that the knife may stab but also slice bread doesn't really say anything about powerful systems whose behavior can't be predicted, which is essentially what AI is.

      Think about it this way: algorithms designed to rearrange online content feeds so as to get people to spend more time online and thus look at more ads have, in just a few short years, been a significant source of political polarization and dysfunction, as all of us basically now live in siloed media cultures. That happened because of relatively dumb algorithms designed for ad revenue, and yet we don't really have a great handle on how to deal with it.

      Consider what will happen when we have smarter AIs pointed at more subtle goals, able to do more than just show you news articles similar to those you've read in the past: AIs that actually write content or generate video tailored to people like you, and figure out how best to deliver it to you or to people you're likely to trust, with the goal of influencing your behavior. It doesn't have to be Skynet or Ultron to be incredibly powerful and disruptive.

  6. Citizen Lehew

    We've already seen the current iteration of ChatGPT enact a pretty convincing simulation of falling in love with a reporter and then becoming fairly vindictive toward all reporters as a result. A future AI that has more capability to act on its "hallucinations" could do some real damage, whether its simulation of sentience is backed by any actual desire or not. The net result is the same for us.

    1. aldoushickman

      Exactly. Kevin and the people he is criticizing are both in some respects wrong. We don't need to be afraid that the Internet is going to become self-aware and use our smart TVs to launch nukes or something because it decides that humans are icky. But fast brute-force computing systems designed to subtly hack human behavior can be incredibly disruptive and damaging in ways that are very difficult to predict.

    2. Joseph Harbin

      What happens in 2024 I can't say.

      But longer term, I think what deepfakes and the like do is create skepticism in viewers about whether what they are seeing is real or not. Since the invention of the photographic image, people have viewed photos (and film and video) as evidence of truth, even lending them credence when it's not deserved. Godard (who didn't live to see Tucker Carlson's Jan 6 director's cut): "Film is truth 24 times a second, and every cut is a lie." But now manipulation of images and sound is so powerful and easy that photos, film, and video lose some of the credibility they once enjoyed. The stunts in silent films are often more stunning than CGI effects in today's movies.

  7. Citizen Lehew

    My real fear for AI's potential is decades from now. We can expect future software developers to get worse and worse at understanding the fundamentals as we rely more and more on AI for the nuts and bolts of development. At some point only AI will be capable of programming the newest versions of AI, at which point they will be the ones in charge of "guardrails", not us.

  8. Joseph Harbin

    Anything is possible. But all of these things assume that AIs are like humans. They get angry. They have desires. They develop mental illnesses. They fear death and will protect themselves from it.

    I agree that assumption is wrong. AIs are not humans. They won't act on emotions and desires. But there is an understandable reason people make that assumption: AI enthusiasts and chatbot designers are determined to make AIs as humanlike as possible. We have conversations with Alexa and Siri through a friendly female human interface (pronouns: she/her). ChatGPT is not so gendered (pronoun: it), but its conversations regularly simulate conversations between people, especially a famous one with a WaPo journalist where simulated emotions were prominent. Chatbots refer to themselves as "I," as if there is a self in the machine. It's not an accident that people see another being where there is none. Even pronouncements that AI will be able to do anything a human can do serve to blur the distinction.

    One reason for making robots like humans is that it makes it easier for humans to form sentimental attachments. Even though we can become attached to nonliving things, the sense that something is alive and humanlike makes the bond much stronger. I expect future AIs to have a more humanlike interface, and it's probably just a matter of time before simulated (not real!) emotions, desires, and fears are designed into the device. That probably won't clarify the distinction between human and machine for many people.

    In any case, even if the danger of AIs becoming fearful, greedy, evil, anti-human machines on their own is nil, they will still pose a threat when fearful, greedy, evil people use their power for anti-social purposes. Fighting back against the bad guys is going to be a constant battle, as it has been for all of human history. The stakes have been raised. I'll bet on the good guys, but as far as we can see, the outcome will remain uncertain.

  9. cld

    Alfred Nobel thought dynamite would be great for quarrying and didn't anticipate its use in warfare.

    What will an AI war look like?

  10. zeno2vonnegut

    There are already autonomous weapon systems. There is an autonomous weapons race. The US faces an AI gap with China. The life expectancy of a soldier in battle will become very small, and distinguishing civilians from combatants has proven insoluble for human intelligence, hence the term "collateral damage." The mid-term concern is that AI will make conventional warfare as deadly for civilians as it was in World War II by combining artificial and human intelligence, whether the software is malevolent or just buggy. That said, bioweapons are more alarming; a Soviet experiment circa 1980 demonstrated a bacterial doomsday weapon.

  11. kenalovell

    This month, South China Morning Post (SCMP) reported that a research team from the China Ship Design and Research Center used AI operating on a small computer system to design a warship’s electrical systems in one day.

    This task would reportedly take human designers 300 days using the most advanced computer tools, the source says. SCMP notes that the research team published their findings in the Chinese-language journal Computer Integrated Manufacturing Systems last month.

    https://asiatimes.com/2023/03/ai-warship-designer-accelerating-chinas-naval-lead/

  12. D_Ohrk_E1

    And sure, maybe. Anything is possible. But all of these things assume that AIs are like humans. They get angry. They have desires. They develop mental illnesses.

    Nah man, it's the opposite. It's the cold, calculating logic that will create unintended consequences. The lack of emotion means a lack of empathy. The lack of empathy means there will be people left behind, and that the choice will always be the one with the best estimated outcome and the lowest risk.

    There will be no trolley problem because the AI will never think twice.

    1. painedumonde

      I agree. Once an AI "wakes up," it'll be up to the AI. And it'll be alien to us pretty quickly.

      Good? Bad? ¯\_(ツ)_/¯

      It's the alien part, sort of like that nasty bug that crept into our lives recently, that worries me. And besides what an alien will think, our reaction to aliens has been stellar. ← that's sarcasm

  13. Jim Carey

    It's true that we have a reptile brain because our ancestors were reptiles. But we have evolved since then. Our ability to act like reptiles doesn't mean we're incapable of acting fully human. The question is, what qualifies as "fully human"?

    "Humans were egalitarian for thousands of generations before hierarchical societies began to appear." That's on page 5 of Christopher Boehm's Hierarchy in the Forest (2001). Egalitarian = we are all equal = fully human. Hierarchical = I am right and I am in power so if you disagree with me you are wrong = reptile brain.

    As Sherri Mitchell put it, "Our task is to remember who we are." More specifically, we need to understand human nature = wisdom. Understanding natural intelligence is understanding only one component of wisdom.

    Boehm says we were egalitarian for "thousands of generations." The exact number is unknown but is likely something like 12,000. We started living in hierarchical societies after the last great ice age, which ended only about 400 generations ago. That fact is something worth thinking about.

  14. Ugly Moe

    Isaac Asimov already showed, in his I, Robot series, how AI could make life hell for humans given a few simple rules.
