Poll: Is AI just another overhyped scam? Or the bedrock of our future?

The seemingly endless hype of "artificial intelligence" has, naturally, produced plenty of obvious failures. I asked my car to call "Karen" the other day and instead it called "Mother Jones." Huh? Then I asked it to call "Home" and it asked if I wanted to call 611. This is typical performance.

On the other hand, the vast server farms of T-Mobile do a remarkably good job of transcribing messages left on my voice mail. It's so accurate it's hard to believe sometimes.

When you hear about this kind of stuff, what's your reaction?

  • AI is just another overhyped scam. It's the dotcom boom of the '90s all over again.
  • The AI snarkers are like the folks who mocked cars as toys in 1900. They were too blinkered to understand what would happen as the technology improved.

Which are you?

40 thoughts on “Poll: Is AI just another overhyped scam? Or the bedrock of our future?”

  1. DTI

    I dunno. I studied both the ethics and technology of AI back in the 1980s, when we were asking the same questions. (Though there's now no concern about computers gaining consciousness and *autonomously* taking over the world, the way there was back then. What we weren't worried about was computer *owners* like Zuckerberg, Bezos, etc. losing their consciences and taking over the world. Which only goes to show.)

    Anyway, AI has already crept in over the last 40 years. Most electronic thermostats now use perfectly legitimate AI to manage our furnaces. Fancy rice cookers use it to make perfect rice. And most cameras use AI to keep our drunken selfies in focus and color balanced.
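
    Just to make "appliance AI" concrete, here's a toy sketch of the fuzzy-logic-flavored control a rice cooker or thermostat might run. It's my own illustration in Python; the function and all the numbers are invented, not pulled from any real firmware:

        # Toy fuzzy-style controller: blend "how far below target" and
        # "how fast we're cooling" into a heater duty cycle. Invented numbers.
        def fuzzy_heat_level(temp_error, temp_trend):
            cold = max(0.0, min(1.0, temp_error / 5.0))      # degrees below target
            cooling = max(0.0, min(1.0, -temp_trend / 0.5))  # degrees/min falling
            return min(1.0, 0.7 * cold + 0.3 * cooling)      # blended duty cycle

        # 3 degrees below target and dropping 0.2 deg/min:
        print(fuzzy_heat_level(3.0, -0.2))  # ~0.54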

    TikTok and Facebook use it to make sure we're seeing content we really want to see. (And, less successfully, to show us ads they think we want to see.)

    Oh, that's not "really" AI? Um, yeah, it is. Or was. Saying otherwise is moving the goal posts.

    Point being that while it's not an overhyped scam, it's also probably going to *continue* to creep over us rather than roar over us like a freight train unexpectedly coming around a blind curve.

    1. Brett

      Those are all examples of what could be called AI, but they're really just purpose-built algorithms that may have some very rudimentary ability to make decisions based on correlation. Mostly, though, I think they're algorithms that engineers have spent thousands of hours crafting by hand.

      Kevin's hobbyhorse about AI is more that self-learning algorithms will become smarter than humans and eliminate all knowledge work. If that can ever happen, I'm not worried about it happening in my lifetime.

      AI is such a vague term that I think everyone has something different in mind.

      1. wvmcl2

        Exactly - none of those things are true artificial intelligence, just computer chips that can do certain specific things faster and with better memory than humans. None of it has the massive complexity of human cognition. Those sensors in toasters and cookers are just glorified thermostats - a technology that has been around for a century. And even chess programs are not true AI - just highly developed algorithms dedicated to a specific task. Calling these things AI is hype.

        True AI, if it ever comes, is decades away.

  2. DFPaul

    Get an iPhone (horrors, I know). "Call so and so" is remarkably accurate, assuming you have the names entered correctly in your address book.

  3. cld

    AI is a thing that goes on behind the scenes for most people, so I think a lot of people who think it's hype simply never have occasion to think about it in practical terms.

    I had been thinking: who on Earth would want to be able to turn off their lights with their phone? How lazy could you possibly be?

    Until yesterday, when I had to get out of bed to turn them off even though I was holding my phone. Now I need one of these gizmos.

    Projecting from this to AI, I'd say everyone will be pretty much on board with it the minute it obviously does something for them, the equivalent of about 1910 for autos, which will be... I don't know. Probably when your cereal bowl asks if you'd like more sugar.

  4. kenalovell

    The Feedly news aggregator has been running an AI project for months, teaching "Leo" to tag the content of news stories. It's comedy gold. Leo actually seems to be getting artificially dumber. Example from today:

    Extremist Michelle Malkin Gets Banned By Airbnb
    Is this article about Politics?
    YES NO

    NO

    Thanks! Your feedback helps make Leo smarter. Leo was 63% confident this article was about Politics.

    Even better:

    Italian Catholic Military Chaplain Condemns Anti-LGBT Archbishop For Spreading Deranged Anti-Vax Claims
    Is this article about Geopolitics?
    YES NO

    1. name99

      Unsure exactly how "the dumbness of crowds" (and the dumbness of using crowds to try to achieve ANYTHING that touches on the political) has relevance to AI...

  5. Justin

    Like most technological shifts, the impacts of AI are and will be overhyped in the short term, underestimated in the long term.

    (As opposed to blockchain and cryptographic currencies, which are overhyped in any time horizon you want to choose.)

  6. Blaine Osepchuk

    I agree with all the comments so far. AI, in its various forms, is already huge. And everyone in a modern society is benefiting from it, often without realizing that AI or machine learning is occurring behind the scenes.

    Is there hype? Of course. But it's also counterbalanced by all the AI that we don't call AI any more. We keep moving the goal posts, as one commenter wrote.

    1. golack

      Define benefit 😉

      From mapping routes to translation--helpful. Managing workers' schedules--hurtful.
      Facial recognition--creepy?

      It does depend on how it's used and how it has been optimized.

  7. skeptonomist

    Most of what has been done so far is not really "intelligence" on the part of machines, it's just elaborate programming by humans. Computers are good at doing repetitive tasks very fast, and once they "learn" how to do something right, i.e. are correctly programmed, they don't forget or make mistakes like humans do.

    The tasks that computers have been able to really think out for themselves have been pretty limited so far. When you ask Alexa to do this or that, it's a matter of your request having been anticipated by human programmers.
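
    Schematically, it's something like this -- a made-up sketch of the idea, not how any real assistant is actually coded:

        # "Anticipated by human programmers": a hand-written table of intents.
        # Nothing here was learned; a person typed in every case in advance.
        INTENTS = {
            "weather": lambda: "It's 72 and sunny.",
            "timer": lambda: "Timer set for 10 minutes.",
        }

        def handle(request):
            for keyword, action in INTENTS.items():
                if keyword in request.lower():
                    return action()
            return "Sorry, I didn't get that."  # anything unanticipated fails

        print(handle("What's the weather like?"))  # It's 72 and sunny.
        print(handle("Write me a sonnet"))         # Sorry, I didn't get that.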

    1. dausuul

      But the ability to understand your speech in the first place is AI.

      Which is typical. The things humans think are easy, like walking and talking, are capabilities we evolved over millions of years, and we are really good at them; bringing computers up to our standard is incredibly hard and was pretty much impossible before machine learning.

      A lot of the things we think are hard, like playing chess, are things we're really bad at, so it doesn't take much for computers to outdo us.

  8. Andrew

    It's useful to differentiate between what we now call 'AI' and the goal of AGI - artificial general intelligence. Computationally, we can now handle a large amount of human-generated input to generate human-like output via neural networks. We have not created a goal-directed AI, where a program could take an inexact description of a task, generalize it, create a new set of networks, and join them together to 'learn' how to solve the problem. We are even further from the final goal of a self-directed AI, where the program can actively generate new tasks (e.g. eradicating humanity) on its own without human intervention. It's not clear at this point whether the current techniques will scale up to an AGI or whether we'll have to approach it differently, using NNs as the building blocks.

    1. KenSchulz

      Scientific psychology (I’m a perp) is well into its second century, and we know very little about human cognition, in comparison to what we don’t know. Psychologists even have varying opinions on whether ‘general intelligence’ is a thing. Not that useful things aren’t being accomplished with advanced programming techniques, things that formerly required human intelligence. But given the difference between computer hardware and software and brain wetware, it’s pure speculation to apply the same term to both. Heavier-than-air flight was accomplished without flapping wings, and we have only recently gained understanding of bird flight - by studying birds, not airplanes.

  9. MontyTheClipArtMongoose

    Based on MoJo's recent reporting on the Biden-Harris regime, I think AI is recognizing those fucks as the asking to speak managerially class.

  10. iamr4man

    >> AI is just another overhyped scam. It's the dotcom boom of the '90s all over again.<<

    I don't think it's an overhyped scam, but I do think it will be pretty much like the dotcom boom. Lots of people trying to get into it for a quick buck. Lots of failures. Lots of too-much-too-soon stuff. But eventually things will shake out. Perhaps not as quickly as some people think, but 20 years from now it will be everywhere.

    I wonder what would have happened to Webvan if Covid had hit when it was starting?

  11. lithiumgirl

    It's here already, and not always used wisely. Read Weapons of Math Destruction by Cathy O'Neil for some troubling examples of its misuse.

  12. xi-willikers

    People don't really understand what AI is when they criticize it. For context, I did deep learning research very recently in college.

    Most of the recent advances (10-15 years) have come from really big deep learning models: things that recognize text or images very well but require huge datasets. However, this is not intelligence by any stretch of the imagination. They are basically just nonlinear functions that we have used brute-force computing power to make very good at specific tasks. You're not teaching an AI the alphabet and working your way up; it just takes input and spits out output (see the sketch below).

    This sort of explains why some AI is great and some bad: its skill scales directly with your computing power, dataset size, time investment, etc., all of which cost money. A car company half-asses its AI because it makes cars for a living, not natural language processing models.

    Modern AI is more of a powerful toolbox than anything else. We haven't learned anything of substance about intelligence, just little tricks and a good way to apply overwhelming computing power to mimic human tasks.
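
    To see what I mean by "just nonlinear functions," here's a tiny deep net written out in plain NumPy. The weights are random and made up; this is purely an illustration of the structure, not a trained model:

        import numpy as np

        # A "deep" model is nested nonlinear functions: f(W3 @ f(W2 @ f(W1 @ x))).
        # Training is a brute-force search for good W's; the structure is just this.
        rng = np.random.default_rng(0)
        W1 = rng.normal(size=(8, 4))
        W2 = rng.normal(size=(8, 8))
        W3 = rng.normal(size=(1, 8))

        def relu(v):
            return np.maximum(0.0, v)  # the nonlinearity

        def net(x):
            return W3 @ relu(W2 @ relu(W1 @ x))  # input in, output out

        print(net(np.array([0.5, -1.0, 2.0, 0.1])))  # one number, no "understanding"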

    1. dausuul

      The AIs of today are essentially insects. We evolve them to perform certain tasks, which they then do very well, but they don't innovate or devise novel solutions in the moment--they know only what evolution has taught them.

      But... insects are far beyond what we had before. Insects can do incredible things. And the principles by which insects evolved produced us too. Insectlike AI is a step toward humanlike AI, though there are many more to go.

  13. tinfoil

    Wait, Kevin, what about your "robot overlords" series, with the Lake Michigan analogy?! (For those not familiar with it: https://www.motherjones.com/media/2013/05/robots-artificial-intelligence-jobs-automation/) FWIW, AI has gone through several boom-bust cycles since the 1960s. Back in the 90s a professor said "maybe it's just the hardware," and that seems to have been very prescient (though we've made some software advances too, as the previous commenter points out). The "remaining" question for me is whether there really is a "synergy" tipping point, when all the small accomplishments add up to more than their sum.

  14. Salamander

    AI is like blockchain, or Internet voting. The folks promoting it are ignorant of what it even means and how it works. Or, more significantly, how it DOESN'T work. Lots of grifters and con men are earning good livings from those three, however.

  15. pjcamp1905

    Really? A binary choice?

    First you need to specify what you mean by AI. You're talking about voice recognition but you say AI and people hear androids. At the voice recognition level, it is already here, flawed and improvable, but essentially functional. At the sentient android level, it is a long way off but not infinitely far.

    Define terms and provide a continuous scale with an uncertainty estimate and then I'll play. Of course, I don't really know what you expect to learn from a lot of people who know little to nothing about the field.

  16. Bobber

    "On the other hand, the vast server farms of T-Mobile do a remarkably good job of transcribing messages left on my voice mail. It's so accurate it's hard to believe sometimes."

    Voice mail on iPhones is transcribed by Apple, not the carrier. I would presume the carrier is not the transcriber on Android, either.

    By the way, I didn't answer your poll because a) it required me to log in to Google, and b) my answer is "somewhere in between".

    1. Chondrite23

      Regarding voice mail transcription, look at the "Chinese room thought experiment." There is a huge debate about whether a program that produces intelligible results is intelligent.

      I'm in the camp that AI, or Machine Learning, is sophisticated pattern matching. Nothing wrong with that. I love that I can talk to Siri and get useful information. I don't think that Siri is a real intelligence.

      Whatever we call it, AI or ML, will improve and be very useful. In some cases it may be abused, like most other tools.

  17. D_Ohrk_E1

    You're kidding right?

    The dot-com (tech) bubble was a capital bubble driven by shysters and FOMO greed, not an indictment of the dot-com -- the internet -- future. Out from the rubble of the dot-com bubble came WiFi, Bluetooth, SaaS, the death of the brick-and-mortar mall, and ubiquitous streaming services. The mockery of internet malls has given way to Amazon's dominance around the world. In 1998, T1 lines were king. 20 years later, WTF is a T1, amirite?

    The problem I have with naysayers is that they don't understand that the expertise for all of these advanced technologies doesn't appear out of thin air. It takes years to learn and build that expertise, and without investment it will move to where the capital exists to support it. That's China.

    It serves no purpose to be a naysayer. Fusion may not happen tomorrow but it will happen. AI may not take over your job today, but it will happen. Quantum computing may not be practical today, but it will happen. In-between then and now, a lot of research and knowledge needs to be gained. Don't discourage it.

  18. golack

    I read recently that they have an AI building an AI...
    Still a number of chip generations away before any of them start to mimic Star Trek's computer. Now if the processing power that goes into bitcoin mining were being used to develop AI...
    How about a nice game of quantum computing?

  19. rick_jones

    >>The seemingly endless hype of "artificial intelligence" has, naturally, produced plenty of obvious failures. I asked my car to call "Karen" the other day and instead it called "Mother Jones." Huh? Then I asked it to call "Home" and it asked if I wanted to call 611. This is typical performance.<<

    Typical for you, perhaps. Whenever I ask my car, though really I'm asking my phone, to call so-and-so or for directions to such-and-such, it works just fine.
    Perhaps your car is wiser than you acknowledge. After all, aren't we supposed to shun Karens? …

  20. name99

    ' I asked my car to call "Karen" the other day and instead it called "Mother Jones." '

    How old is your car? And did you ask your car, or your phone?
    Part of the issue here is precisely that things are changing so rapidly. So if you are upset at the voice recognition performance of your car, well, damn, your car is, what, five years old, running a SW package (written by non-experts...) that is 3+ years older still and (never?) updated. It's quite likely using voice recognition technology that doesn't even count as AI by modern standards (i.e. no use of neural networks).

    Compare to the behavior of your phone. (And even there is your phone less than a year old? The HW in your phone really does change notably every year.)
    Meanwhile we have AI performing well in a variety of tasks that can be categorized as "non-linear pattern recognition" or "extreme optimization". This includes, for example, EDA (Electronic Design Automation). No, an AI will not design Apple's next chip. But AI techniques can (and do) make EDA tools run 10x faster, which makes it a lot easier for Apple to design that next chip.

    Meanwhile in a different space we have things like this:
    https://twitter.com/aza/status/1489312847016325120?s=11

    If you interpret the question "is AI overhyped" as "are humans obsolete," you will get one answer. If you interpret the question as "are computers on the brink of being able to bring a whole new suite of SW techniques, running on very different sorts of HW, to solve a whole lot of very different problems from what we thought of as computing 10 years ago," well, then you get a very different answer...

  21. DonRolph

    And these are the only options?

    AI is typically overhyped, as is much of the high-tech world these days.

    AI is also a critical technology which is increasingly important and critical to competitiveness.

    Of course it might help make the discussion more precise if you defined AI more precisely! 🙂

  22. bcady

    We may well hit a limit on AI. We did in my field of video encoding. The standard has been mp4 compression, but there has been a step above that for years called mp5. The reason it never became the standard is that the amount of energy and computing power needed to encode/decode the video outstripped the value gained by the compression. And so mp4 remains the standard.

    Something like that could happen to AI: we get close to completing tasks, but the final few steps take massive amounts of computing power (and massive amounts of energy) to implement, so it won't be worth it. When the costs are higher than simply hiring someone to do the work, AI will stop progressing.
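
    As a back-of-the-envelope version of that break-even logic, here's a sketch in Python. Every number is invented purely for illustration:

        # If closing the last few percent of quality costs more compute than
        # a human costs, automation stalls there. All numbers are made up.
        human_cost_per_task = 2.00   # dollars for a person to do it
        ai_cost_today = 0.10         # dollars of compute at current quality
        ai_cost_final_gap = 50.00    # compute cost to fully match the human

        print(ai_cost_today < human_cost_per_task)      # True: automate
        print(ai_cost_final_gap < human_cost_per_task)  # False: keep the human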

    1. name99

      The above "history of video compression" is utter nonsense.

      - The names transitioned from MPEG, MPEG2, MPEG4 to h.264, h.265, and h.266 for essentially political reasons. You will get different explanations of this depending on whom you ask; my version of the story is that the MPEG4 process became so politicized, so bogged-down in BS surrounding the dinosaur industries (ie telcos and broadcast) that the computer industries said "fsck this" and moved sideways to an alternative standards track which was already computer friendly. (The earlier h.261 and h.263 standards were for video conferencing over the internet.)

      - In terms of technology, again, things did not stand still.
      h.264 was a set of minor tweaks to MPEG4 (most people regard them as essentially identical), but it was followed by h.265, which, apart from offering additional functionality, provided a good-enough "same quality at half the bit rate." h.265 is about to be replaced (over the next few years) by h.266, which likewise offers a good-enough "same quality at half the bit rate."
      (In the case of both 265 and 266, the *initial* reviews do not suggest anything quite as good as "same quality at half the bit rate," because the specs offer a large number of tools and it takes time to figure out the best way to use them. What happens then is that, over the next three years or so, the bit rates of all the major SW and HW encoders improve by 10 to 15% at equivalent quality, as everyone learns better tradeoffs.)

      If you really are in the field of video encoding, you might want to look at a technical summary of both h.265 and h.266. At the very highest levels it's still the same techniques as MPEG4/264, but it's kinda amazing how much successive refinement has been able to squeeze out of these ideas. In particular we still haven't needed to move to object-based encoding, even though twenty years ago (around when h.264 came out) it was widely expected that that had to be the next step.

Comments are closed.