What’s the point of artificial intelligence?

Cheryl Rofer, a retired nuclear chemist who is currently acting as servant to a pair of adorable cats, wants to know why I think artificial intelligence is so great:

I keep asking that question and getting no answers. There’s a lot of activity going on in what is called artificial intelligence, but it is more accurately called machine learning. The chatbots now amazing the credulous simply predict, on the basis of a training set, which word will follow the previous one.

…But there has to be a reason that Big Tech is putting money into the enterprise, and presumably they believe they will make money out of it. They gloss this with Benefits To The User. Better search! Automated letter writing! They leave out things like automated facial recognition to help arrest people and yet another way to separate the customer from the seller.
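What "predict which word will follow the previous one" means mechanically can be shown with a deliberately toy sketch: a bigram counter that picks the next word in proportion to how often it followed the current one in its training text. (This is nothing like a real LLM, which conditions a large neural network on thousands of tokens of context, but the training objective is the same flavor of next-token prediction.)

```python
import random
from collections import Counter, defaultdict

def train(text):
    """For each word, count how often each other word follows it."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def next_word(model, word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = model.get(word)
    if not counts:
        return None  # never seen this word with a successor
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

model = train("the cat sat on the mat and the cat slept")
# "the" was followed by "cat" twice and "mat" once, so "cat" is
# the more likely continuation.
print(next_word(model, "the"))
```

Trained on a real corpus instead of one sentence, repeatedly feeding each sampled word back in produces fluent-sounding but unreliable text, which is roughly the behavior the quote is describing.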

This is a tricky question. I happen to think that things like ChatGPT already have some usefulness, including—yes—better search and help with letter writing, among other things. But I also agree that the current version of ChatGPT isn't anywhere close to artificial intelligence.

Ric and Zooey. Or is it Zooey and Ric?

But! Even though I think true AI is still ten or fifteen years in the future, the software we build along the way will become increasingly useful even if it isn't truly our cognitive equal. I've suggested, for example, that in a couple of years ChatGPT might very well be good enough to replace lots of lawyers. Probably some doctors, too. Maybe even teachers, though that's more speculative.

Those are just tidbits, though. The real usefulness of current machine learning software is that it's a (necessary) step along the way to AGI—artificial general intelligence. AGI will be able to do pretty much anything humans can do, and shortly after that it will be able to do more than humans can do.

Now, if you don't believe that we'll ever be able to produce AGI, or that it's hundreds of years away, then there's nothing to talk about. We'll just have to wait a decade or two and see what happens.

But if you agree with me that we'll have it fairly soon, then Cheryl's question becomes: What do I think is the usefulness of a machine version of human intelligence? And there, I assume the answer is fairly obvious:

  • Since I'm positing that AGI-powered robots will be the cognitive equal of humans, then by definition they'll be able to perform every conceivable human job.
  • But better than humans! In addition to our raw smarts, they'll work 24 hours a day, have instant access to all human knowledge, and never ask for a raise.
  • As teachers, they'll be personalized to every student. It won't matter if you have a visual learning style. It won't matter if you're on the spectrum. It won't matter if you're dyslexic or ADHD. They'll adapt to whatever works best.
  • As caregivers they'll be completely reliable, infinitely patient, and willing to talk endlessly about anything that we olds want to natter on about.
  • If we haven't done so already, superhuman AGI will solve both our energy problems and global warming.

I'll confess that I've never understood why anyone would ask what AGI is good for. It just seems so obviously world-changing. But maybe I'm missing something.

84 thoughts on “What’s the point of artificial intelligence?”

  1. Murc

    Kevin, with respect, this reads a LOT like "artificial general intelligence is great because we'll be able to build a slave race and live in luxury on their backs."

    I can't imagine this was your intent. But when you say things like "AGI will be the cognitive equal of humans!" and then follow it up with "they'll work tirelessly and never make any demands of their own, ever" it kinda SOUNDS like that's what you mean.

    1. J. Frank Parnell

      Yes, AGI will toil endlessly underground while we frolic in the sunshine, except at night AGI will come out and eat us.

    2. xi-willikers

      When’s the last time you gave your laptop a raise?

      Just make the masses of bots instinctively happy to serve. And dumb them down and make a secondary and smarter race of bots which keeps the first one in line. And maybe another tier above them which takes a religious approach to serving their human gods

      What could possibly go wrong?

    4. Special Newb

      What's wrong with that if there's no moral hazard to it? Kevin does not believe the AGI will ever decide not to be a slave race, which is a weird blind spot: on the one hand superior, on the other servile. It's weird.

      1. realrobmac

        Are we really already worried about the rights of the fairies and elves we think our magic spells will be able to create sometime in the next 100 years? Seriously?

    5. Salamander

      Yes, if the long-foretold AGI has human-level or better cognition, it had better have human rights, too.

      1. Salamander

        Oh, and if coders are naive enough to build an "AGI" that "thinks like us" and educate it in human behavior via the Internet, it will eventually decide it's more efficient and cost-effective to wipe us all out. Because that's just what humans do to the competition.

    6. kahner

      My guess is Kevin thinks you can create an AGI that is not conscious or self-aware in any sense that would make it a slave entity. And from what I know about how these current versions of AI chatbots work, which are nothing like a self-aware entity, maybe he's right. I don't think an AGI would operate on exactly the same principles, but it does highlight the fact that a functional artificial intelligence may still be just a complex computer algorithm, no more a person than any other software system. Of course, maybe not.

  2. zaphod

    So, it's pretty hard to take Kevin's vision of AI utopia seriously. Sure, AI will bring changes, but the changes I see are more in the line of greater economic inequality between those who employ AI and those who are made superfluous by it.

    Anyone put on hold and/or cut off by ubiquitous voicemail answering systems has gotten a mild glimpse of the "inconvenient" future that awaits us.

    1. xi-willikers

      It would be the first labor technology breakthrough since farming that didn't make the average person's life better. Everything since that point has been gravy.

      I’ll admit at some point the “maintain the machines that replaced you” schtick has to run out but I tend to think it’ll make people’s lives better. Plus if they’re real smart we could make it to mars or something

      1. DaBunny

        "Average" is the key there. With most big technological changes, at first there are a few big winners, and a good number of losers. Over the long run, things get better for everyone. But that takes a while. Years, or more likely decades.

        The Luddites weren't a bunch of simple-minded fools who irrationally feared tech. They were skilled artisans who saw tech automating them out of the work that put food in their children's mouths. Sure, a few decades down the road life was better for everyone. But by then the Luddites were probably dead, having lived lives of penury.

  3. sci_agent

    Kevin, I've been reading your blog for over 15 years and I have great respect for you, but if I accepted your "optimistic" vision for our future with AI you lay out in this post, I'd jump off a cliff tomorrow.

    How can anyone desire such a soulless, empty future where human thought and action mean exactly nothing, because anything a human can do or think of, a computer can also do, but better? It's a profoundly depressing thought: living in a time where nothing you do matters, and nothing you can possibly conceive of or create, whether verbal, visual, or conceptual, matters at all, because by definition it is or will be obviated or superseded by an algorithm. As to one of your examples, AI as the perfect teaching tool: what exactly would be the point of teaching a human being anything in that world?

    Why do you seem to want this future so much?

    1. Joseph Harbin

      I wonder the same things. It's not just Kevin. It's the whole AI media industrial complex. It's the secular equivalent of the End Times crowd. Armageddon is right around the corner! (Or later!) Then all you nonbelievers will see how right we are.

      There's another quasi-religious aspect to it. Since at least the time of Mary Shelley, people have been thinking about how to create machines that are the equivalent of people. We're still waiting for that first "It's alive!" moment. The spark of Creation has not come, for a variety of reasons, including the simple fact that people are not machines. AIs may in time be the most powerful machines that humans have yet created, but they will remain machines, which we are not.

      The soulless and pessimistic view of the future, and the reductive view of humans, is more worrying to me than the fear of AI. We need a more expansive view of who we are than what we get from Kevin and many others deep into AI mythology. In that direction, we may get a new Renaissance. Otherwise, we could be headed toward another Dark Ages.

      1. ScentOfViolets

        As Shelley herself said, the core of her fable was really a discourse on moral responsibility. That is to say, it's not about a rampaging monster bent on destroying its creator (because that's just what monsters do, I guess, is the assumption); it's about the responsibility of the Creator to his children. I think we all know how Frankenstein dealt with that one 🙁 But Shelley also goes on to point out that Frankenstein himself is a sympathetic character, albeit one who did some very bad things, and it would take someone of surpassingly exceptional character to do otherwise and acknowledge that moral responsibility to their intellectual offspring.

        Now, she was young, didn't know many people, and of those, the one who held themselves up as the best of the best turned out to be a real turd. Not surprising things turned out the way they did in her novel. Still, somehow, I just can't bring myself to trust the motives of anyone who's obsessed with building a general-purpose mechanical slavey.

    2. Special Newb

      Are you joking? We have screwed everything up. Being able to live like pampered pets is almost literally heaven.

  4. jdubs

    The magic and hype of 'AI' reminds me of the Underpants Gnomes from South Park.

    The business plan for the gnomes was:
    1. Collect Underpants
    2. ?
    3. Profits!!!

    The gaping disconnect between steps 1 and 3, combined with the characters' absolute certainty of step 3, was always funny.

    The benefits are obvious as Kevin makes clear. If you assume that the magical product will do literally everything better, faster and cheaper...the benefits are literally anything you can imagine, plus imagining no downsides makes it even more exciting.

    1. shaldengeki

      This is basically how I feel about Kevin's LLM/AI posts. It's extremely unclear to me how we go from current state-of-the-art LLMs to "can reasonably serve as your lawyer". And that's the whole ballgame: whether the currently hyped tech can serve as a basis for "AGI"; otherwise, you're just assuming the conclusion. Reading a bunch of what feels like the same empty hype that's been circulating for decades just makes me wonder how seriously I should take Kevin's words on other topics.

      If you can see what that path looks like - how the current problems with LLMs preventing them from being used as e.g. legal or medical experts are tractable, and how you expect they'll get solved - I think writing that down would be extremely valuable to your readers. (I'd certainly be enlightened.)

      1. kenalovell

        Why would anyone need a lawyer once AGI is running the place? Surely the vastly superior wisdom of AGI would quickly create a society where there was no need for courts, judges and lawyers.

      2. Yikes

        With what you could get from a Google search before ChatGPT, I'm not sure how high school teachers and college professors could ever be sure a student actually wrote their own work. It seemed to me that the "way" was live essays and taking away everyone's phones. If it was a take-home assignment, I think that horse had already left the barn.

        And, it didn't have any effect on jobs.

        The next two big disruptions I see are: (i) if self driving expands to trucking, but trucks still need to be unloaded and loaded whenever they get where they are going and (ii) some sort of generalized robot, sized like a human, that could deal with heavy objects and work.

        I mean, there are already trenchers on construction sites, but that hasn't eliminated the entry-level construction jobs.

        Can you imagine how far we are away from a robot electrician to wire a house?

        Right now a computer could design the wiring, maybe.

        But spatial recognition is really hard for computers, nice job on the cat pic!

    2. skeptonomist

      Automation has ended all kinds of jobs, both skilled and unskilled, since the beginning of the industrial revolution. It ended the skilled jobs in weaving, a process which Ned Ludd protested. All sorts of things that were made by skilled craftsmen are now made mostly by machines. Yet other jobs have cropped up, so unemployment has not increased overall.

      Work hours did decrease for a time - specifically until about 1940 when the 40-hour week became standard. But that was a result of activism by unions and in Congress, not of the bosses replacing people with machines and cutting hours.

      The enemy of working people is not machines, it is the system which is always trying to get more work out of employees for less pay, not less work.

  5. stilesroasters

    It’s complicated, at least to me. I agree with basically everything you said, but I find that it feels so hollow and makes everything I could want to do feel a little pointless, like, “why learn the details of anything, b/c I could just ask ChatGPT56”. But then I’d miss out on the joy of mastery. I’m sure I’d never choose to go to the trouble.

    What will it mean when ChatGPT39 can write a better novel than literally any human? And can write it on any topic or theme in 28 seconds? Do I sit there like an idiot browsing for 3 hours, rather than just reading the next (and only) Connelly novel?

    I’m not saying this is explicitly bad, but this is the emotional question I have. And I know that this sounds like an old man yelling at clouds, but I do find this bewildering.

    1. megarajusticemachine

      Yeah - if I'm letting my computer write my letters "better" for me, how will I ever learn to get better at it myself? Why learn how to write at all, then?

      Learning how to organize one's thoughts into words, to develop a theme across the length of your writing, etc. is all positive exercise for one's mind.

      But hey, after I stop knowing how to write a better letter (clearly just for business applications only, my grandmother might get incensed if I send her a form letter for her birthday), maybe I won't need to know how to read a letter either. Maybe just let the computer read it for me and leave me out of the picture completely. =)

    2. rrhersh

      Step back from fantasies about AGI and look at LLMs, and the likelihood of their writing better novels than any human becomes remote. The key is that first "L". The technique is to throw a huge data set into the hopper and let the LLM spit something out based on that. The vast majority of that data set is going to be at best mediocre writing, pretty much by definition. Limiting the input to recognized good writing won't help. Jane Austen's writing is good in pretty much entirely different ways from how Charles Dickens's writing is good. That no one has combined the two is no loss to English letters. Limit it to just Austen and you get a pastiche: a party trick.

      ChatGPT and its successors have a lot of potential for mediocre writing, and there are applications where that is good enough. But it won't be producing the Great American Novel we have so long awaited.

      1. Anandakos

        We've had fifty "Great American Novel"(s) and there are dozens more awaiting pens (or keyboards) to transcribe them. In a place as vast and varied as America, surely one book won't do.

  6. megarajusticemachine

    Great cats!

    But to the topic at hand, Kevin, you seem to be cheering the idea that AI is going to be great... at eliminating jobs. Sounds great for the titan geniuses of the Tech World (just look how well they handled the whole SVB thing, ha ha), but what about literally everyone else?

    I agree it's probably going to be unavoidable too though, so we're really going to have to have some serious talk and action - now - on the whole "what to do with the 'surplus population'" topic. I'm guessing Scrooge's outlook will prevail.

  7. kennethalmquist

    I doubt that large language models like ChatGPT are a step along the way to AGI (artificial general intelligence). ChatGPT's training material includes math textbooks, but that doesn't mean that it can apply that knowledge. It cannot add two arbitrary integers even though its training material includes an explanation of how to accomplish that task. This appears to be a fundamental limitation of large language models: they don't have a way to represent concepts or procedures.

    Early AI researchers hoped that if they could figure out how to get computers to play chess, they could then generalize that approach to produce AGI. The approaches generalized a bit--research into chess has resulted in techniques that are useful for playing other games--but haven't gotten us any closer to AGI. My guess is that the same thing will prove true of large language models.

    For all I know, AGI is just around the corner, but I'm pretty sure that no amount of work on large language models will get us there, any more than work on chess playing programs has. I think a different approach is needed.

    1. emh1969

      "that doesn't mean that it can apply that knowledge"

      And here is the crux of the problem with LLMs: they can't actually know anything.

      For example, if you ask me what the capital of France is, I will respond with Paris 100% of the time. Even though I've never been to France. ChatGPT has a high probability of responding with Paris, but there's no guarantee because it can't actually know that Paris is the capital of France.

      Then, there's another problem. If you ask me what the capital of Kazakhstan is, I have no idea. But I can look for the answer and find it and know with 100% confidence that the answer is Astana. ChatGPT can't find information that it doesn't already have access to.
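      The distinction being drawn here, between looking a fact up and predicting a likely answer, can be put in toy code. (The 0.97 weight and the "Lyon" alternative are invented purely for illustration; they are not real model probabilities.)

```python
import random

# Knowing a fact: a deterministic lookup that is right every time.
CAPITALS = {"France": "Paris"}

def lookup(country):
    return CAPITALS[country]

# Predicting an answer: sampling from words that tended to follow
# "the capital of France is" in training text. Usually right, but
# nothing guarantees it. (The country argument is ignored in this toy.)
def sample_answer(country):
    return random.choices(["Paris", "Lyon"], weights=[0.97, 0.03])[0]

print(lookup("France"))         # always "Paris"
print(sample_answer("France"))  # almost always "Paris", but not guaranteed
```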

  8. D_Ohrk_E1

    I've never understood why anyone would ask what AGI is good for. It just seems so obviously world-changing.

    If it's good for changing the world, are you suggesting that this changed world is for the better?

    AI will be great for increasing efficiency, right?

    Killing will be a lot more efficient as AI will be able to identify and ascertain the most efficient means of killing enemies. It'll invent faster, better ways of killing. It'll devise sneakier ways of covering up evidence of errors. It'll be the best damned killing force ever invented by man.

    1. zaphod

      There! Overpopulation problem solved!

      But seriously, this is a major flaw in Kevin's Panglossian optimism. Those humans who control AI will have the rest of us over a barrel. I have major reservations that even they will be able to fully control it. After all, as fallible humans, have they thought of everything?

      What can go wrong? One ambiguous line of computer code might be all it takes.

      The Sorcerer's Apprentice story comes to mind.

  9. Leo1008

    Really?

    “But better than humans! In addition to our raw smarts, they'll work 24 hours a day”

    There are a few robot baristas in various locations throughout the area where I live. But I’ve never actually seen how well they do (or don’t) operate. I don’t often pass by the robot baristas, but when I do they are usually already shut down for the day. Why? I don’t know, I guess robots require their quality of life time? Maybe it’s in their union contract? One thing is certain though: they do not operate 24 hours a day.

    And it’s not just their limited hours of operation, they also seem to break down a lot. Apparently, robots have been known to catch the occasional cold. So, between their time off and their need for frequent maintenance, I’ve been able to get precisely zero coffees out of these robot baristas that, in theory, should be working 24 hours a day. Can’t wait till they take over the rest of society too …

  10. jvoe

    It will be used to make better war. Once all of *someone's* enemies are vanquished, utopia will be just around the corner.

  11. Scurra

    I am increasingly enjoying these /satire posts from Kevin.
    Good /satire requires a plausible premise that is then extended gradually into implausibility until you realise that you are nodding along with the Flat Earth or No Moon Landings and you can't quite put your finger on where it happened.

    (I will confess that I think that one of the ideas in there, about caregivers, may have a point. But we've been able to do that since the days of Eliza really. It's the interface that matters, and that's only now starting to become accessible.)

  12. Solar

    If they become the cognitive equal of humans, and able to do everything better than humans, good luck getting them to do a single thing humans want, 24/7, without asking for anything in exchange, and good luck stopping them from turning on humans in retaliation for the abuse the Kevins of the world would try to inflict upon them.

  13. bokun59elboku

    And how will people live? UBI (Universal Basic Income)? I really doubt our benevolent tech overlords will agree to share. We are moving toward Soylent Green slowly but surely. If there is one thing history has taught us, it is that the rich do not share of their own free will.

    They will learn a hard lesson.

  14. smallteams

    When I was in school for computer science in the dark ages, you know, decks of cards and such, AI was defined as pattern recognition in visual arrays, language recognition, the ability to play difficult games as well as humans, and other things that I don't remember (it's early).

    I mention this because every time the technology evolves to the level of success, we stop calling it AI. Pattern recognition? My car can read speed limit signs while moving and display on my dash the local limit. It's right about 95% of the time, and when it's wrong, the condition of the sign is usually the culprit, not the car. Language recognition? Try dictating a text to your smart phone. It's so good it can be done using a hand-held device! Games? Computers could never win at Go until they did. And the very best human chess players struggle to compete with machines.

    And now we call none of these things AI. Talk about moving the goal posts.

    So I'm with Kevin. The stuff is going to change the way we do work. We just won't call it AI because it doesn't have "feelings," or some such excuse.

    1. Doctor Jay

      I too, learned computing with punchcards! Greetings from the ancient times!

      Your observation that once we know how to do something, we stop calling it AI is spot on. I've heard it for at least 30 years, in computing circles.

      I think the situation in the chess world is quite interesting, and probably a bellwether. There is no dispute now that humans are inferior to the best chess "engines". None at all. There are a few things that engines aren't super good at, but they are very few.

      That hasn't stopped people from playing chess, even competitively. It has shifted the focus more to speed chess, and totally ended the custom of adjournments until the next day, though.

      Humans are interested in other humans, not robots.

  15. golack

    ChatBots are not AGI.

    A lot of your suggestions can be implemented now with current databases and basic AI. File a will (or file for a no-fault divorce?)--fill out an online application. Need a mammogram read--we've got you covered. Decision tree for a diagnosis--have that too. Maps and directions--we've been using those for a while. You don't need a horse to know the way to carry the sleigh--the sleigh knows, with GPS, etc.

    The only thing to really have taken off--maps and directions. Most other things are like self-driving cars. You're responsible when it fails because you're not supposed to take your hands off the wheel or eyes off the road.

  16. JimFive

    I don't believe that AGI is possible without consciousness and if we develop conscious machines then they are "people" and can't be enslaved.

    Therefore, I think that specific AI is more useful than AGI. An AI that can write legal briefs is useful and fairly close. All ChatGPT is missing is the ability to be trustworthy instead of confidently incorrect.

    Natural Language is a big milestone in AI research, as was Deep Blue and AlphaGo. None of these are, of course, intelligent. Neither are object identification systems or other self-driving car systems. AI research isn't really about creating intelligence; it's about creating programs that can perform functions that are thought to require intelligence in humans. We know that traditional chess computers don't figure out their next move in the same way that a human does, but they both play chess.

    1. KenSchulz

      >AI research isn't really about creating intelligence; it's about creating programs that can perform functions that are thought to require intelligence in humans.
      Exactly.

  17. jjvtkessel

    The problem with AI is that people like you keep using the term "AI" for things that are nowhere near "AGI." Nobody has come anywhere near creating artificial GENERAL intelligence. They've created systems that are good at doing some particular things, like playing chess or driving a vehicle (not very well) or writing passable English.

    Those are significant accomplishments, but none of them come near what the human mind--or even a canary's mind--can do when confronted with the vast number of different situations an independent mind must confront to negotiate through an hour on this planet.

    What is "Artificial Intelligence"? "Artificial Intelligence" is a poor choice of words.

    1. geordie

      I am going to let chatgpt formulate an argument for me:

      "The Society of Mind is a book by Marvin Minsky that presents a theory of how the human mind is composed of small, specialized, and autonomous mental agents or "agents" that work together to create intelligent behavior. The agents, which Minsky calls [mindlets hallucination] perform specific functions, such as perception, memory, learning, and reasoning. The [agents] are organized into a hierarchy of layers, each layer building on the abilities of the layer below it. Minsky argues that by understanding how these [agents] work together, we can better understand how the mind functions, and ultimately, create more advanced artificial intelligence. The book also explores the implications of this theory for cognitive science, philosophy, and neuroscience.

      As the Society of Mind is composed of many specialized agents, each with its own unique set of tasks, LLMs, or Large Language Models can help to facilitate the learning and adaptation of these agents to changing circumstances. By processing large volumes of data, an LLM can provide insights and recommendations to other agents, helping them to make better decisions and improve their performance., By serving as a central hub for communication and coordination between other agents, LLMs can help to create more complex and sophisticated cognitive systems. Additionally, by providing a way to process and generate natural language, LLMs can help to create more intuitive and user-friendly interfaces for interacting with these systems."

      Definitely a bit repetitive, but I think it did an OK job of making the point I wanted to. Although I had to edit out the term "mindlet," attributed to Minsky, that it completely made up. If I had to summarize my argument myself, I would probably put it this way: an LLM will not become AGI; however, in combination with the mathematical skills already in the Google search bar, the game-playing ability of AlphaGo, and similar advances in specialized AI, AGI will for all intents and purposes be here within the next 5 years.

      I would even go further and say the real limit to all AIs is that they don't yet know their limitations and when to ask for help from AI experts in other knowledge domains. Or how to go a step further and evaluate how to weight the inputs of the various agents they ask for advice from.

    2. MrPug

      Kevin has moved on from his ill-defined (or not defined at all) AI to his equally ill-defined AGI. Not defining what you are predicting makes it easy to have your prediction proven correct.

  18. The Big Texan

    As soon as an AI acknowledges the existence of trans people or systemic racism, it'll be banned in about 15 states.

  19. Leisureguy

    Opinionate.io is a recent AI app, and I have found it interesting to use as an impartial third party to settle some contentious questions through debate. Opinionate has no ego involvement and no dog in the fight, and it can summarize arguments on both sides. I have a couple of posts on this — https://leisureguy.wordpress.com/2023/03/13/slant-razor/ and https://leisureguy.wordpress.com/2023/03/13/opinionate-interesting/ — where I gave Opinionate some contentious propositions to see the conclusions it would reach.

    Since its conclusions agreed with my own views, I think it is pretty nifty. 🙂 Here are the propositions I had it debate:

    • A slant double-edge razor delivers a better shave than a conventional double-edge razor.

    • NFL is a destructive force, sacrificing young men’s health to entertain the masses.

    • just a topic rather than a proposition: "whole-food plant-based diet"

    • Firearm ownership in the US should be regulated.

    • A humanities major is the best undergraduate education for a fulfilling life.

    • Providing free education and free healthcare to its citizens makes a nation stronger and more secure.

  20. kahner

    "If we haven't done so already, superhuman AGI will solve both our energy problems and global warming."

    We already have solved it, we just never got people to implement the solution.

    1. aldoushickman

      This. It's tempting to believe that the answers to all our problems are simple discoveries that we just don't yet have the brainpower to grasp, but there's no reason to think that (just because we can't figure out how warp drives work doesn't mean that warp drives actually *can* work--in fact, it's rather evidence of the opposite), when the reality is simply that we have collective action problems.

      For example: we have all the tech we need to reduce CO2 emissions to zero in a couple of decades pretty readily. It might shave a point or two off of global GDP growth, but that's about all it would cost. But we don't do it. Unless Kevin is suggesting that AGI will coerce us into doing things, it isn't really the answer we're looking for.

  21. NealB

    "But maybe I'm missing something."

    Haven't there been volumes of science fiction written on this topic? Questions have been explored there that aren't even asked yet in non-fiction, because the capabilities of so-called AI are so limited relative to what is dreamed possible eventually. Go, AI and AGI and all the other artifice-and-intelligence-related technologies. The possibilities remain limitless because the technology is still so far from what is imagined. (Kudos to voice recognition in the meantime. It's taken about 70 years for scientists to get it to a point where it works at all, but at long last it's fairly amazing. So many possibilities.)

  22. KenSchulz

    As teachers, they'll be personalized to every student. It won't matter if you have a visual learning style. It won't matter if you're on the spectrum. It won't matter if you're dyslexic or ADHD. They'll adapt to whatever works best.

    No, they won’t.
    Another illustration of the category errors Kevin keeps making in his AI posts. Individualized instruction is not a problem of insufficient computational power; it's a problem of insufficient data. People differ on a vast number of characteristics, many of which may be relevant to the choice of instructional approaches, of which there are also many. Collecting the enormous amounts of data needed to understand the relationships would be no less a project of decades (centuries?) for AIs than for human researchers, because life outcomes would certainly be one of the criterion measures of success (‘what works best’).

  23. Jim Carey

    Two questions and answers:

    Q1: Does intelligence = wisdom? A: No. There are intelligent people that are not wise.

    Q2: What, irrespective of whether it is natural or artificial, is intelligence? A: Figure out what wisdom is first. Until then, trying to answer Q2 = tilting at windmills.

  24. ScentOfViolets

    A personal aside on the life-changing effects of ChatGPT: it turns out to be a surprisingly effective assist when it comes to debugging code. Or at least, that's what some of my students maintain.

  25. rrhersh

    Kevin posits that the current state of the art is useful for better search. The question I have asked on multiple fora with no response is why should we believe this? How does this work? ChatGPT is a fluent bullshitter, including making up sources. This seems to me the worst possible scenario for search. A Google search turns up pages in response to search terms. Google does not, with limited exceptions, itself present facts. It presents web pages, which I then assess for usefulness and reliability. This is entirely unlike a chat session. A chat session might be better, but only if it is reliable, which the state of the art manifestly--even comically--is not. So what gives?

    1. cephalopod

      I had a weird experience with a citation recently. A student sent a citation for an article they wanted to read, and the citation was impeccably formatted, the journal exists, and the volume, issue, and page numbers fit the pagination of the real journal. Even the DOI looked legit, with the beginning portion matching other articles from the journal. And the author names matched the names of real people who published in related areas.

      The only problem is, the article doesn't actually exist. Half a dozen librarians tried to locate it, and there is no indication it actually exists, outside of that citation.

      I can't help but wonder if it was generated by AI. I can't imagine a human going so far as to make the DOI fit that well.

      Unfortunately, the student can't remember where they found it, so that avenue is lost.
