
We are all waiting restlessly for artificial intelligence

A new paper in Nature claims that "disruptive" discoveries in science have been on a downward trend over the past few decades. The authors use a measure of disruptiveness that basically measures whether anyone cares about previous work once the disruptive paper has been published:

The intuition is that if a paper or patent is disruptive, the subsequent work that cites it is less likely to also cite its predecessors; for future researchers, the ideas that went into its production are less relevant. If a paper or patent is consolidating, subsequent work that cites it is also more likely to cite its predecessors; for future researchers, the knowledge upon which the work builds is still (and perhaps more) relevant.
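
For concreteness, here's a rough sketch in Python of how a score built on that intuition could be computed from citation links. The paper IDs and citation sets below are invented for illustration, and the Nature paper's actual formula (their "CD index") may differ in its details:

```python
# Toy CD-style disruptiveness score: look at later papers that cite either the
# focal paper or its references, and ask how many of them ignore the predecessors.
# All paper IDs and citation sets here are made up for illustration.

def cd_score(focal, references, later_citations):
    """focal: id of the focal paper; references: set of ids it cites;
    later_citations: dict mapping each later paper id to the set of ids it cites."""
    only_focal = 0  # cite the focal paper but none of its references (disruptive signal)
    both = 0        # cite the focal paper and at least one reference (consolidating signal)
    only_refs = 0   # cite a reference but not the focal paper

    for cited in later_citations.values():
        cites_focal = focal in cited
        cites_refs = bool(cited & references)
        if cites_focal and not cites_refs:
            only_focal += 1
        elif cites_focal and cites_refs:
            both += 1
        elif cites_refs:
            only_refs += 1

    total = only_focal + both + only_refs
    if total == 0:
        return 0.0
    # Ranges from +1 (later work ignores the predecessors entirely)
    # to -1 (later work keeps citing them alongside the focal paper).
    return (only_focal - both) / total

# Paper "P" cites "A" and "B"; three later papers cite P, one of which also cites A.
later = {"X": {"P"}, "Y": {"P", "A"}, "Z": {"P"}}
print(cd_score("P", {"A", "B"}, later))  # ~0.33, leaning disruptive
```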

I'm . . . not so sure about this. All the various papers and patents that went into CRISPR, for example, may have been ho-hum, but CRISPR itself is pretty disruptive. You could say the same thing about the internet, cell phones, and GPS. The increasing complexity of the world means that it takes a lot of small pieces to construct a single disruptive discovery.

But there's another thing as well: A very few discoveries in human history might be called super-disruptive: discoveries so big that for many years they enable a follow-on surge of ordinarily disruptive discoveries. Here's a list:

Don't take the details too seriously here, but this table lists most of the consensus super-technologies and a few of the follow-on technologies they enabled. After the electric grid was invented, for example, households got air conditioning, refrigerators, TVs, microwaves, stereo systems, dishwashers, electric lighting, washers and dryers, and a vast array of other electrical gadgets.

The discovery of these super-technologies got closer and closer together all the way through 1950, but now progress seems to have stalled. This is because we're still living in the Computer Age, which our timeline suggests should have lasted about 50 years before giving way to a new super-technology. That hasn't happened, which means we're still living on the dregs of an era that's largely played out.

But why hasn't a new super-technology been discovered yet? Because the next super-technology is artificial intelligence, and it turns out that it's really, really hard, even compared to previous gut busters. So we're piddling along with routine improvements and ordinary new inventions while we wait for AI to come along. When that happens, we'll have yet another explosion of innovation.

76 thoughts on “We are all waiting restlessly for artificial intelligence”

  1. B. Norton

    One might add potable water systems/wastewater disposal systems to the disruptive technologies list. They disrupted disease pretty significantly.

  2. Zephyr

    The problem is AI is going to supercharge the already scammy world we live in. Fake term papers, fake emails, fake ads, fake LinkedIn profiles, fake news, faked everything designed to vacuum up all your money.

    1. Blackbeard

      But if, for example, students start using AI to fake term papers, wouldn’t universities just use bigger and better AI to detect the fakes?

  3. Doctor Jay

    I dunno if I even sign off on "artificial intelligence" as being a singular thing. We already have AI library assistants (Google search), AI parking valets (self-driving cars, which are currently fully capable of putting your car into the parking lot, and probably better at it than you are), AI advertising managers (laugh all you want but they are already better at this than humans are).

    Do you want to talk to a computer? Do you want to ask it questions? You can already do those things, with results that work for some and not for others. Do you really want computers that are better to talk to than humans?

    My guess is that a generalized artificial intelligence won't ever exist, because intelligence, as we understand it, is embodied. The "brain in a jar" is just a brain in a jar, not the next step in evolution.

    1. Keith B

      I'm not quite so pessimistic (or is that optimistic?), but I think your basic point is dead on. Human intelligence arises from our experience as we interact with the world, and from brains developed through hundreds of millions of years of evolution to be able to learn from that experience. Computers have none of that. Maybe it's possible to instill enough of a model of how the physical world works and (equally important) of how other minds work to simulate real human intelligence, but it's a much more difficult task than AI researchers seem to think. I'm fairly sure that it can't be done just by using a machine learning algorithm on a huge neural net.

      1. BruceO

        Easy enough to fit out Atlas with stereo vision, binaural hearing, touch, smell, even taste. Let it wander around for a while and talk to it like we talk to our kids as they grow... "That's a cat, be nice to it but be careful so it doesn't bite or scratch you." Power starts to run down, it goes somewhere and has a snack. It needs a deep recharge once a day.

        Its experiences wouldn't match human ones, of course, but it's easy enough to have it learn about the real world by interacting with it.

      2. Jasper_in_Boston

        Human intelligence arises from our experience as we interact with the world, and from brains developed through hundreds of millions of years of evolution to be able to learn from that experience. Computers have none of that.

        We'll not literally recreate "human" intelligence (biologically-based, honed by aeons of natural selection), sure. It will be artificial. My point being, what we call AI doesn't have to be (and won't be) anything like what homo sapiens uses to make sense of the world. It just has to achieve the same (or better) results compared to what our species is able to produce. And it's already there on some things (chess being an obvious example).

        I've tended to be on the "Drum is too optimistic" team with regard to his predicted timeline for AI. But there's no doubt in my mind something very powerful will eventually arrive—he's not wrong about that.

    2. BruceO

      Whenever AGI arrives, which I think is inevitable, it'll be more like we've invented a species of alien and invited it to live among us.

      1. lawnorder

        Quite a while back a science fiction writer (I forget the name) defined an intelligent alien as a being that thinks as well as a human but not like a human. If and when human-level AI is achieved, it seems reasonably certain that it will be alien by that definition.

    1. Jasper_in_Boston

      The internal combustion engine should be on there, too. The changes it effected in human society can hardly be exaggerated. For one thing, its emergence massively stimulated entire other sectors (glass, rubber, trucking, drive-ins, road construction and engineering); prompted gigantic sociological change (commuting, suburbs, sprawl); and led directly to airplanes and, ultimately, the jet age.

  4. painedumonde

    I can't believe the introduction of "derivative trading and other complex financial tools" didn't make the list. Not every invention has to be positive, as morrospy notes.

  5. Zephyr

    The telegraph, radio and television should be on the list. Before telegraphs most news could travel no faster than a horse or a ship.

  6. ttruxell

    I agree on the clean water thing above. How many lives has that saved? Untold millions.

    I'd also add this to the future list: workable fusion energy. It's just as close as AI.

    1. jheartney

      Fusion Energy = General AI in Pie-in-the-sky score?

      LOL, can't really argue that one. Don't expect to see either in my lifetime.

  7. Keith B

    You should add biotechnology to the list. You could trace the origins to Pasteur in the 1800s or even van Leeuwenhoek's microscope, but it's reasonable to say it really came of age around 2000 with the Human Genome Project. That's about as disruptive as anything since the agricultural revolution.

    As for AI, do we really need computers that have a human level of intelligence? We already have programs that can perform many specific tasks better than any human. For instance, Google Maps can give directions to anywhere in the US and many other countries, and even knows about current traffic conditions. I have an app that allows me to photograph a plant and identify what plant it is. There have been programs for at least 70 years that can solve any math problem that has a known method of solution. Collectively they are much more useful than something like ChatGPT, which essentially is a program that's learning to BS convincingly. We already have humans that are experts on BS. I don't see any advantage in developing a machine that can do it better.

      1. Keith B

        You could claim 8,000 years or even longer, if you count crop and animal breeding from the onset of the agricultural revolution, but the scientific investigation of life is much more recent. It couldn't even get started until the realization that living things are composed of cells.

        Everything on Mr. Drum's list has precursors that extend farther back than the dates he provides.

        1. ScentOfViolets

          Selective breeding doesn't require the knowledge that living things are made out of smaller living things, at least, not to my knowledge.

            1. ScentOfViolets

              Blink. Why would you think that? Do you think the Wright Flyer was not an airplane? Do you think a Greek trireme is not a ship? Technology at the application level tends to exist on a continuum, or at least so I think.

    1. BruceO

      Your point is well taken; AI-like capabilities are already plenty good to take work from humans (and there are plenty of other capabilities not identified as AI that are also taking work).

      I'm hoping for an AI-based encheferizer that inputs a video clip and outputs a clip with an encheferized sound track.

  8. Jim Carey

    The problem I have with every AI discussion that I'm aware of is that they're all ignoring what I view as the elephant in the room. Specifically, intelligence, artificial or otherwise, is a tool that serves a specific interest at the expense of other interests. For example, "narcissism" is the name we give to natural intelligence used to resolve conflicts of interest in favor of the self, and "wisdom" is the name we give when it's used to resolve them in favor of the greater good. AI will serve whatever interest a human tells it to serve. My hope is that the human will be wise.

    1. BruceO

      Well, I think you're asking who will instill values into it?

      Right now it's "Some subset of the internet" which isn't what most of us want.

  9. jheartney

    One assumption about AI is that once invented, it'll have unlimited scalability; thus the worry about superintelligent AIs taking over the world. But what if it isn't? What if the best we can do is an AI dog or horse-equivalent? Such an AI would still be hella useful (could at last make vehicle autonomy happen) but wouldn't be a threat to human hegemony.

    1. ScentOfViolets

      If by unlimited scalability you mean AIs having IQs of 4,000 or 10,000 or 1,000,000 (yes, I know, no one serious uses IQ scores; it's just that humans are scaled to 100), you may be on to something. But there's another school of thought that maintains human-level intelligence is pretty close to the highest possible level of intelligence, say, within a factor of two or three. The argument is that simply bolting on more memory -- working, short-term, long-term, etc. -- does not make a person more intelligent. Simply increasing the 'speed of thought' doesn't help either. What the actual magical ingredient(s) are that determine lesser or greater intelligence, no one really knows, and I suspect that past a certain level of organization, concomitant with ever-increasing information-processing power, intelligence and even consciousness no longer have any meaning.

      1. BruceO

        I haven't heard the 'factor of 2 or 3' argument before.

        I wouldn't expect that to be the case; it should be able to have a far more interconnected "brain" than humans. That could presumably yield "better" decisions than humans (whatever that means).

        1. ScentOfViolets

          So the more interconnections in a biological brain or its machine equivalent, the more 'intelligent' it is? I certainly wouldn't cavil at the assertion that for at least some tasks, interconnectivity helps. But I suspect a non-trivial percentage of people would agree that the measure of intelligence is just how many tasks you can perform at a given level of proficiency.

          1. BruceO

            I'm assuming more connections would allow the software to access more data as it makes decisions, so it would make better-informed decisions.

            That wouldn't necessarily improve the quality of the decisions; that would depend on what had been stuffed into the database.

            1. ScentOfViolets

              You don't need more connections to access more data. But that's a hardware issue. I was assuming you meant 'more connections' in the software sense. By which I mean to say, if there is any intelligence at all in a machine, it resides in the algorithms it's running. Not in the hardware (though there are minimum requirements), and not in the particular implementation of those algorithms. There's a wonderful piece by Hofstadter and Dennett taken from their collaboration The Mind's I. The entire book is online and quite readable. But all you need to know for our current conversation is in the Socratic dialogue about a conversation with Einstein's brain.

      2. illilillili

        Simply increasing the speed of thought helps *a lot*. You could in principle run today's car automation software on 1950s computers, but getting an answer in a few eons doesn't let it drive a car in real time.

        1. ScentOfViolets

          So you're claiming that running an algorithm to, say, sort a list of numbers is somehow more intelligent if it's run on a SPARC workstation as opposed to an IBM 370. Well ... I disagree 🙂

        2. Jasper_in_Boston

          Simply increasing the speed of thought helps *a lot*

          Yeah, the obvious example is chess. High-level chess playing is an astounding feat for us bipedal apes (a few of us, anyway). And yet not even the greatest of us is now a match for machines. And AFAIK the gap is still growing. Maybe there's a cap on how much better machines can get than humans at playing chess (it's logical that there would be; after all, it seems nonsensical to postulate AI can become infinitely better than humans at chess). But that gap is clearly going to become huge. And now apply that to every field of human endeavor: designing software; conducting medical research; diagnosing illnesses; figuring out ways to make financial transactions more efficient; driving and flying; industrial engineering; R&D of every variety; translating and interpreting human languages...

          1. ScentOfViolets

            Well, sure, computers outperform humans at the task of playing chess. But they don't play in the human style now, do they? In fact, they use a much dumber brute-force approach, looking ahead more moves than a human is capable of and then trimming forks in the play by a simple weighted evaluation. And since this dumb brute-force approach works, I would most emphatically argue that playing a good game of chess is not a good proxy for intelligence.

  10. cld

    I'd have included something like electro-magnetic theory/relativity/quantum mechanics, all together as one thing, because, unique in history, they're abstract theories of the unseeable that turn out to be real and that can actually be used to do things.

  11. realrobmac

    "So we're piddling along with routine improvements and ordinary new inventions while we wait for AI to come along."

    This strikes me as unlikely.

    And once again, AI is not defined, as it pretty much never is in a discussion about AI. So what, if anything, are we talking about here?

    1. lawnorder

      That's my take. We talk about AI without defining it, which means we don't really know what we're talking about and won't know when it actually comes into existence.

      1. ScentOfViolets

        Okay, how about this definition: artificial intelligence is what the HAL/SAL 9000 series has. This is a perfectly fine definition, just as one can define a set either by specifying some rule for set membership, say, the set of all natural numbers less than five, or by saying the set is composed of the numbers 1, 2, 3, and 4.

    2. ScentOfViolets

      I'm of the "I don't know what intelligence is, precisely, but I know it when I see it" school of thought. That is, when an AI can, say, attend class just like a regular student and learn the same way they do, I'll call it intelligent. Not before.

      1. Jim Carey

        Proposed definition of natural intelligence: the ability to close a gap between an undesirable current state and a desirable future state if and when the necessary process is cognitive as opposed to something else (physical, etc.). That would mean that AI is the use of technology to replicate the natural version. How'd I do?

      2. illilillili

        Intelligence is the ability to accurately predict the future. The more accurate, the more intelligent. This includes finding and recognizing patterns useful for predicting the future.

        1. Jim Carey

          The ability to predict the future is the how. Achieving a desirable outcome is the why. Having a "how" without a "why" is like knowing how to drive a car and never going anywhere.

  12. DaBunny

    What the heck is the "Years to develop" column? Is that how many years from first use to full development? So steam engines were fully developed by 1950? Sure. The electric grid has maybe 80 years before it's *really* built out? Eh, maybe so. Computers were there by 2000? I guess so...maybe. But some of those others are just insane.

    Monetary tech was fully developed around the time of Jesus? Are you joking? The printing press won't be fully developed for another 900 years?? WTF?! And agriculture has another 80,000 years before it's really developed? That...that's gotta be a typo, right? Right??

  13. zaphod

    Kevin's on his AI fantasy ride again.

    What I know is that trying to communicate with companies and running their AI gauntlets has coarsened the interaction. It triples the time needed to make a simple inquiry, and makes the experience most unenjoyable. If this will be the norm for AI in other capacities, God help us.

    The other day, Kevin rightly feared that the "lizard mind" in humans was a real danger for the future. Will AI have a lizard mind? Since the lizard mind is likely the part of the brain that is responsible for emotions, it is not all bad, because not all emotions are bad. Quite the contrary.

    So build your AIs and they will have all the conscious experience of a can of tuna fish. Which is OK with me. Just don't expect them to attain or explain the mystery of consciousness. And don't expect them to solve any of the problems attached to human experience. Because they literally will have no knowledge of what that is.

    Now, quantum computers I'm not so sure about.

    1. bigcrouton

      AI will absolutely have knowledge of human consciousness and experience because these things are written in human literature of all types. Just reading Crime and Punishment should give it plenty to work with.

      1. zaphod

        So, who will decide just what AI reads? I can see some real scary decision making output depending on who programs it. Thanks for something else to worry about.

        More generally, it will get a picture of human experience based on selective past portrayals of human experience. Garbage in, garbage out. I'm not aware that "Crime and Punishment" solved any problems concerning the human propensity for evil. I do not believe that algorithms are capable of ameliorating human behavior. We're barking up the wrong tree.

  14. BruceO

    I'm reminded of "There's Plenty of Room at the Bottom" by Richard Feynman where he discusses manipulating individual atoms. It was a speech given in 1959.

    Interestingly, we've basically hit that boundary in electronics and materials science, and as you look around at most of science and technology we're pretty close to the boundaries of what is knowable. There are plenty of discoveries yet to be made, to be sure, and some of them will probably dramatically change human life. But it's easy to see why big discoveries have become increasingly scarce.

    1. illilillili

      We aren't even close. The vision that Drexler presents of 3D atom manipulation is something that biology does, but that we don't yet do at any scale larger than a bacterium.

      1. BruceO

        Drexler's vision is pretty outré and so far it doesn't seem to have yielded much in the way of tangible results. There are a lot of potential, or even likely, limitations posed by physics that could preclude the realization of using billions of little robot arms to pick and place individual atoms, along with the other atomic-scale machinery to shuttle the completed sub-assemblies through the steps of the manufacturing process.

        But we're working just above that level with 2d materials not more than a few atoms thick, nanotubes, and graphene. We're still learning a lot about those levels, so they may provide an alternative path to accomplishing what Drexler proposed.

    2. KenSchulz

      Actually, I am expecting some pretty disruptive innovation in materials science. Some more progress in carbon nanotubes or graphene macrostructures and we could have a space elevator. That would drastically reduce the cost of getting stuff into Earth orbit. We could build massive orbiting solar arrays and send uninterrupted sustainable power down. We could build really large, capable interplanetary ships to mine asteroids, explore the Solar System and become a truly spacefaring species. Advanced materials and nanomachinery could enable us to build much more efficient, durable, even self-maintaining structures on Earth.

  15. illilillili

    I'd add vaccines to the list.

    And I think "digital computers" needs to be broken up somehow. Saying that room-sized computers with all the power of today's handheld calculators were super-disruptive seems a bit of a stretch. Feynman was programming digital computers in the 40s, when the computer was a roomful of women with paper and pencils. https://ahf.nuclearmuseum.org/ahf/history/human-computers-los-alamos/

    The various generations of computing -- mainframe, mini, workstation, personal, phone, cloud -- were each fairly disruptive. And the underlying technology of patterning atoms in 2d on a wafer is also fairly interesting.

    The World Wide Web was super-disruptive and occurs at about the right point in time: say 1990, about 40 years after the previous super-disruption.

    The iPhone was super-disruptive. We very quickly stopped using landlines and felt a need for our tweens, if not our 8-year-olds, to have a phone.

    1. BruceO

      The integrated circuit would be an appropriate place to break it.

      Discrete components, even semiconductors, aren't scalable. The IC changed that.

      The different "generations" of computer aren't really that distinguishable until you get to the Apple II and PC, which is when they started becoming something an individual (consultant or homeowner) or small business could own.

  16. Zephyr

    Every time I read something about the wondrous future of AI I think of all the crappy software and hardware we have to deal with that makes life worse. Every single time I am forced to use the self checkout line something fails and it takes a human to make it work. Use the stupid touchscreen kiosk to order at McDonalds and it takes five times as long as telling a human clerk. The robot vacuum is constantly getting stuck. The notifications on my phone simply do not work. My laptop just started refusing to charge. What tortures will AI bring us?

    1. zaphod

      Thanks for the warnings about self-checkout lines. I have so far saved myself from that inconvenience. But the reason I avoid them is that they eliminate human employment.

      BUT, maybe if it takes an army of humans to correct AI errors, then employment loss is cancelled out, and we are just left to deal with the inconvenience.

      At the end of one of Douglas Adams' novels, a pilgrim is allowed to view God's final message to mankind. It was "We Apologize for the Inconvenience".

  17. Pittsburgh Mike

    This is all nonsense. We're only starting to see the results of pervasive computation; the iPhone is perhaps 13 years old.

    Similarly, biological inventions are dependent upon computation, and are *just* starting to show promise, as Kevin should obviously know, given his pending CAR-T treatment.

    We don't have a clue which inventions are going to have legs over the next 50 years. In 1972, none of the advances in biochemistry were even visible; instead people were predicting massive famine in the 80s. The ARPAnet was 3 years old and only used by a few DoD contractors at some research universities.

    As for general AI, it's still a joke. You have some good pattern matchers with neural networks, though you can't figure out why they make any decisions, and a lot of them, such as resume scanners, probably just rediscover implicit bias.

    But large language models, like ChatGPT, are pathetic, worse than a joke. They sound good, but their lack of understanding just means they're frequently spouting nonsense, and they don't even know when they're making s**t up.

  18. Bob Cline

    We're living in an era with an unprecedented rate of technological advancement. Consider the changes you've seen in your lifetime. Where do people come up with this stuff?

  19. kaleberg

    That "disruption" metric seems to be explicitly designed catch incremental improvements. Suppose someone finds the value of some physical constant to three digits. Subsequent papers will refer to that one. Then someone finds the value to four digits. The four digit paper references the three digit paper, but subsequent papers reference the four digit paper. Wow, is that disruptive. Substitute slightly better synthesis, slightly improved algorithm or what you will, and you'll see all kinds of slightly disruptive papers. Yawn.

    P.S. Why AI? It seems pretty incremental and our current approach doesn't promise much better yet. Why not a battery, power or materials breakthrough?

    1. ScentOfViolets

      For some reason I can't fathom, materials science doesn't get the love its siblings enjoy. Even though materials science is a prime driver that advances the others. Didn't used to be that way. Anybody remember Anerek as the wonder substance of tomorrow? What about Cavorite 😉 A bit more plausible would be advances in the fabrication of the latest carbon allotropes; supercaps, supergates, superconductivity, here I come!

  20. Goosedat

    Replicating the nervous system with global satellite communications was the great disrupting event humankind is still experiencing, and it is interesting that this medium is still barely recognized as disruptive. Replicating consciousness will so thoroughly merge figure/content and ground/environment that the medium of AI may never be recognized once artificial awareness of the set of all sets becomes operable. This medium will oversee and produce all stimulus of any environment for any observer to achieve a harmonious outcome determined specifically for each individual, who will never be able to determine any distinction between that which is perceived and that which is blocked out in order to focus attention. The production of subjectivity will become completely mechanized, which is what the critics of AI warn about.

  21. Solar

    I understand that AI is a pet topic of yours, Kevin, but what exactly does it have to do with the Nature article? Also, it seems you may have misunderstood the point of it. The article isn't discussing or measuring disruptiveness by the impact the technology had on society, but by how different it is from the preceding technology.

    For the authors, what they are measuring is how often a new development or technology completely turns the previous understanding in that area on its head, to the point that prior knowledge becomes less relevant for the future, versus when the development is the result of gradual smaller increments building on previous knowledge.
