AI allows woman with ALS to speak again

A paper released today documents one of the most impressive uses of AI language models yet. A woman with ALS who was physically unable to speak—but still had a normally functioning language center in her brain—was fitted with electrodes that fed their output into a large language model similar to ChatGPT. The system was able to produce sentences at 62 words per minute with a 76% accuracy rate.
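
The paper's headline figures are a speaking rate (62 words per minute) and a word accuracy, which is the complement of the word error rate. As a rough illustration of how word error rate is conventionally computed—word-level Levenshtein edit distance divided by reference length—here is a minimal sketch; the example sentences are invented, not from the paper:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

print(word_error_rate("the quick brown fox", "a quick fox"))  # 0.5
```

By this metric, a 76% accuracy corresponds to roughly a 24% word error rate—about one word in four decoded incorrectly.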

Accuracy went down with larger vocabulary sizes, but improved with more electrodes and more training.

Obviously this is just the beginning. With more electrodes and faster processing, this BCI should be able to produce virtually normal, error-free speech. Needless to say, it could also be used by people with normal speaking capability—for example, to produce writing without ever saying a word. Do it in reverse and eventually two people might be able to communicate telepathically.

13 thoughts on “AI allows woman with ALS to speak again”

  1. economist23

    "Needless to say, it could also be used by people with normal speaking capability—for example, to produce writing without ever saying a word."

    And once the tech becomes cheap and easy to use, it will take about 5 minutes before you have to opt out of them reading your mind and sending you telepathic ads.

    1. Ken Rhodes

      Just don't attach all those electrodes. One of the few elements of privacy where you have to opt in to allow the invasion.

  2. Ken Rhodes

    A very sad sidelight learned while reading this amazing news:

    At the end of the Nature paper there is a section of Author Information, listing the participants and their affiliations. One name stands out in particular, Dr. Krishna V. Shenoy. He has quite a roster of affiliations:
    --Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
    --Department of Electrical Engineering, Stanford University, Stanford, CA, USA
    --Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
    --Department of Neurobiology, Stanford University, Stanford, CA, USA
    --Department of Bioengineering, Stanford University, Stanford, CA, USA
    --Bio-X Program, Stanford University, Stanford, CA, USA

    In November, 2022, Dr. Shenoy was elected a Fellow of the IEEE, which is quite an honor.
    Two months later, Dr. Shenoy died at the age of 54 from pancreatic cancer.

    I am 80 years old. I take reasonably good care of myself, and I still work full time. But a large element of luck is involved. Life is so unfair!

  3. birdbrain

    The electrodes in question are beds of silicon nails surgically implanted directly into cerebral cortex. Do you think this is usable by "normal people"?

    As for "more electrodes means more accuracy," that's also woefully simplistic. (Brain tissue reacts to the trauma of implantation. More electrodes, and denser electrodes, mean more trauma. Scar tissue develops over time. Language areas aren't huge, so there's a limit to how many electrodes can be in there at once. Implants are easy vectors for infection. Etc etc etc.)

    This is cool (though not even close to "the beginning" - this work's been ongoing for two decades by now), and flashy, as BCI work often is. But there's an immense amount of complexity elided by the breathless reporting on it, and you'd do well to take any claims for near-future real-world relevance with a big dose of salt.

    1. bluegreysun

      That was what I wondered: are the electrodes on the scalp (like a common EEG) or are they implanted? Maybe easier to justify in someone with a terminal (probably) illness with a 2 yr. prognosis.

      But! - what if this more precise mapping of neurons, or firing patterns or whatever, can be used to help train some EEG cap of electrodes, (external on the scalp). Maybe? Maybe not? (I’d guess *probably not* but I know nothing).

      1. birdbrain

        Yes, working with neurology patients (who can give informed consent) is standard with human BCI work.

        That's a sensible question to ask, and people are trying, but as you guessed, it's pretty hard. EEG has very low spatial resolution (i.e., a standard high-density cap is only 250ish electrodes for the entire head), so there's only so much a neural network could do to improve on things, even with implanted electrode arrays to compare the EEG against.

        Getting signals out of the skull is tough!

  4. pjcamp1905

    I can't tell if you're serious or not. More of the same solves the problem? Doesn't work yet? Make it bigger. Of course, you're operating on the assumption that back propagation solves all problems, a claim for which there is no evidence in favor and quite a bit of evidence against.

    On another note, this is like pheromones to Elon Musk, and anyone who lets that joker stick wires in their brain is out of their ever loving mind.

  5. kaleberg

    Given the plasticity of the human brain, the big breakthrough is probably the machine learning to interpret the brain signals. This would even be amazing technology if there were a limited vocabulary that could be reliably invoked in response to internal neural activity. If you've ever studied stroke recovery, it often involves a lot of relearning, using different neural circuits to do familiar things.

    One of my favorite stories involved a guy who couldn't control his speech directly but could read text aloud. He learned to imagine printed text saying what he wanted to say on the forehead of the person he was speaking to, then read that aloud. My guess is that the LLM component will be somewhat useful, if only as a means of determining a vocabulary of useful words and phrases, but the real breakthrough will be the neural interpretation.

Comments are closed.