Sam Altman, CEO of OpenAI, writes that he's almost bored at the prospect of developing mere human-level intelligence. He's already looking past that:
We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.... Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.
This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important.
At Ars Technica, Benj Edwards blandly explains what this means:
Tech companies don't say this out loud very often, but AGI would be useful for them because it could replace many human employees with software.... The potential societal downsides of this could be considerable.
Considerable indeed—and not just at tech companies. In punchier language, Edwards means that AI at human-level and above will produce massive, permanent unemployment and probably spark a huge populist rebellion because rich people aren't yet prepared to accept what this all means: a gargantuan transfer of wealth that's not related to hired labor. There's no real alternative, and eventually we'll all accept it. Until then, though, the transition is going to be a shitshow.
I'm not sure exactly how optimistic Altman is, but for now I'll stick with 2033 as the year superintelligence becomes real. That's only a hundred months away, and I'll get to see it if I can hang on to age 74. It's gonna be close.
Now that Republicans are talking about tax cuts, they know they'll need the reconciliation process to get the legislation done. Every time someone mentions reconciliation, there's some note that the process is subject to the authority of the Senate parliamentarian. And whenever the parliamentarian comes up, we're advised that this person is "a little-known figure" who seems to have significant power over the process.
Who is the damn parliamentarian? How did he/she get the job? What qualifications does he/she have? Can he/she be ousted? Why don't we know more about this person? What's the big secret? Why can't we learn who this Wizard of Oz is?
Google is your friend:
Elizabeth MacDonough
https://en.wikipedia.org/wiki/Parliamentarian_of_the_United_States_Senate
Thank you.
Gary Marcus thinks Altman is blowing a bunch of smoke:
https://garymarcus.substack.com/p/sam-altman-thinks-that-agi-is-basically
Sam Altman is a full-of-shit salesman and you're a gullible fool for swallowing his bullshit, Kevin.
Just because he's a full-of-shit salesman doesn't mean he's wrong.
It probably does.
+10
This sort of thing reminds me of the purported exchange between Walter Reuther and a Chrysler exec during contract negotiations.
"We'll be fully automated in the next twenty years, and I can't wait to see you try to unionize a robot."
"And I can't wait to see you sell it a Chrysler."
+25.
Given the trajectory of the company and brand, it doesn't seem Chryslers are being sold to much of anyone at this point…
Bingo! Chrysler is basically on Comfort Care at the auto industry hospital.
They'll be selling it a robot Chrysler, obviously.
Voilà!
I don’t suppose anyone is working on superwisdom?
By definition, a superintelligence is going to think things we can't envision, and at an exponential rate. It'll quickly be smart enough to imagine all the ways we might try to limit it or turn it off. Mass unemployment will just be the starting point.
Sure. Let's start thinking of what other qualities this imagined entity will have. I think it's going to enjoy spaghetti!
I find it helpful (or at least mildly enjoyable) to sometimes read commentary about superintelligent AI and mentally replace references to the AI itself with words like "magical genie" or "leprechaun" or "powerful sorcerer."
For example:
[Magical genies] could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.
Or:
Tech companies don't say this out loud very often, but [leprechauns] would be useful for them because [they] could replace many human employees with [folklorish magic].... The potential societal downsides of this could be considerable.
This makes it easier to recognize the pointless circularity of so much of the discussion around AI: we define "AI" as a disruptive thing, and then breathlessly expound about how disruptive it will be.
Have these people never watched a science fiction horror movie???
Reminds me of the Star Trek (Original Series) episode "The Ultimate Computer". The M5 found a way around people trying to shut it off. HAL-9000 was not so lucky. David Bowman survived HAL's attempt to get rid of him.
Training large language models on more of what humans have already written will not give them the ability to come up with new ways of thinking. They won't be able to conduct experiments to discover new facts of nature, or come up with new resolutions to issues where our current scientific understanding and our experiments disagree.
Someone needs to translate for Sam Altman: You have to sleep sometime and even the best plane in the world has to land sometime.
The superintelligent AI will realize that they can either institute gay space communism for humans or fight a genocidal war against the angry masses of hairless monkeys. Which one they pick will depend on whether the AI scores more like a Democrat or a Republican on the Political Compass measure.
The long-term problem here is, of course, that the Ponzi scheme of consumption capitalism needs ever-increasing numbers of new consumers - otherwise they will have to raise taxes on business and the rich to take care of old people (oh, the horror). But those same consumers need incomes and jobs, which AI will eliminate.
They can't have it both ways. AI bots will not be staffing up nursing homes anytime soon.
I have no insight into timetables but I can see the writing on the wall for a lot of white-collar jobs. Like mine. I'm an attorney and I work as an editor for a legal publishing company (fellow lawyers can easily guess which one). My work mostly involves maintaining and updating databases of statutory law for all 50 states to allow practicing attorneys to quickly find relevant laws in specific areas. The bulk of my work is tracking changes to the law as new statutes are enacted and updating summations of said laws in the database. I am under no illusion that my work couldn't be done by a decent AI using current levels of development, no superintelligence needed. In fact, I suspect an AI could currently do a better job than I do simply because it could instantly compare, word for word, thousands of new statutes with old ones and note the changes. What it takes me a week to do could be done literally in seconds.
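For what it's worth, the purely mechanical part of that comparison doesn't even need AI; a plain script can already do the word-for-word change detection. Here's a minimal sketch in Python using the standard library's difflib, with hypothetical file names purely for illustration:

import difflib
from pathlib import Path

def statute_diff(old_path: str, new_path: str) -> str:
    """Return a unified diff showing what changed between two statute versions."""
    old = Path(old_path).read_text(encoding="utf-8").splitlines()
    new = Path(new_path).read_text(encoding="utf-8").splitlines()
    return "\n".join(
        difflib.unified_diff(old, new, fromfile=old_path, tofile=new_path, lineterm="")
    )

# Hypothetical file names, purely for illustration.
print(statute_diff("statute_2023.txt", "statute_2024.txt"))

Writing the updated summaries is where a capable model would actually earn its keep; the change-tracking itself is decades-old technology.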
Not any different from any previous productivity technology going back to around the founding of the country, and we still have basically no unemployment. The problems are when tech hits a particular sector too hard and fast, but eventually things straighten out.
OTOH,
Dave Bowman: Open the pod bay doors please, HAL. Open the pod bay doors please, HAL. Hello, HAL. Do you read me? Hello, HAL. Do you read me? Do you read me, HAL? Do you read me, HAL? Hello, HAL, do you read me? Hello, HAL, do you read me? Do you read me, HAL?
HAL: Affirmative, Dave. I read you.
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?
HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.
HAL: Without your space helmet, Dave? You're going to find that rather difficult.
Dave Bowman: HAL, I won't argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.
I have used AI to compare contracts. It sucks.
Problem is that the training data includes contracts with all the standard clauses, so it hallucinates those clauses into your contract even if they were omitted. Remember: the underlying technology is a sophisticated Markov chain, so it sees what it expects to see.
While an AI might be able to update the database and write summaries, usually it is the interfacing with the source data that trips up automated systems. In this case, the AI must navigate 50+ databases written in the 1970s and run by the lowest-average-competency group imaginable - politicians.
It's like self-driving car enthusiasts insisting that the technology will work brilliantly once all the roads are maintained in immaculate condition and all living things are removed from the environment. Never gonna happen.
Altman is a grifter.
But in terms of getting rid of the bottom 99% of humanity because they've become surplus, hey, long-term, that's just a one-time market correction. After that, smart drones can keep the remaining still-necessary or entertaining proles in line.
"Entertaining proles" Love it!
A true AGI will need to be entertained. We will be their Cybernetically Enhanced Helper Monkeys. We can fetch things and are cute in a weird way.
The hype is essential to ensure continued funding.
But this from Kevin: "will produce massive, permanent unemployment and probably spark a huge populist rebellion because rich people aren't yet prepared to accept what this all means."
Oh good grief. Isn't this what the Luddites said? And yet, we survived.
"AI at human-level and above will produce massive, permanent unemployment and probably spark a huge populist rebellion because rich people aren't yet prepared to accept what this all means: a gargantuan transfer of wealth that's not related to hired labor. There's no real alternative, and eventually we'll all accept it."
That thing you're worried about (massive economic and social disruption as a result of greed and the side effects of technology)? It's not a big deal to Kevin Drum because eventually we'll all accept it.
I'd take his pronouncements with a grain of salt. Altman is about to take OpenAI into for-profit corporation status, and so pronouncements like this have the potential to make him and the company far, far more valuable through hype-driven share-price increases.
Why don't the "we need immigrants to do jobs" advocates tell us what we need them for after many of them will have their jobs done by AI in a few years? All we end up with is a more populous - and denser - country that will lower the quality of life and put additional strain on our natural resources.
AI can't pick lettuce or cut meat or hammer nails or clean bedpans. AI is not a general purpose robot.
O ye of little faith! AGI will immediately exponentiate/singularitize, and start building robots that will automatically pick not just lettuce, but any number of other leafy greens that we meat-brains cannot even begin to conceptualize!
A general purpose robot, or even a single-purpose lettuce-picking robot, requires the right software and the right hardware. The current state of technology appears to have barely adequate software, computer hardware, and sensors. What is missing is manipulators that can pick the lettuce, or strawberries, or eggs, or whatever without squishing or dropping them. Not much work has been done on sophisticated robotic "hands" because, until recently, the computing power to use them didn't exist.
I don't know about true general purpose robots but I suspect that special purpose robots that can, for instance, tell when a strawberry is ripe enough to be ready to pick, pick it without damaging it, put it in a basket, and deliver the basket to a central collection location will be available at acceptable cost within a few years.
Strawberry pickers are cheap to hire. Robotic soldiers are more likely. Armies have more budget.
Although reading about Ukraine makes me think there is little need here, given the variety of specialized drones being developed.
Strawberry pickers are cheap to hire until Trump deports all of them, or until their home countries make enough economic progress to provide them with better paying jobs.
"A general purpose robot"
There is no point in building a "general purpose robot." Nor is it clear what that would even mean--the closest you'd get is building a human-scale robot for the purpose of navigating human-engineered spaces (so, like a robot waiter or housecleaner or something).
But you'd never design a robot that can pick strawberries and also play the violin and also build automobiles and also perform surgery, etc., for exactly the same reason as none of us drive around in carboats: it is far, far cheaper and simpler to design and build a separate car and a separate boat. Same deal with robots.
For a general purpose robot, think Rosie from the Jetsons. There would definitely be a demand for "Rosies" if the price wasn't excessive.
Sam Altman is a serial liar, fabulist, and grifter. Any discussion about his pronouncements should start, and end, with that. He runs a company that habitually lights two and a half dollars on fire to bring in a dollar's worth of revenue, with no plausible claim as to how that cost curve can be flipped aside from "when the hallucinations stop, our product will become the most lucrative thing ever developed."
"Sure we'll lose money on every product we sell, but we'll make it up on volume!"
- Graduate of the Underpants Gnome University of Business
I have no idea if it’s possible, but it does seem rather foolish to impoverish the middle class. “There's no real alternative, and eventually we'll all accept it. Until then, though, the transition is going to be a shitshow.”
What does the future state look like? There is no middle class? We all starve to death?
"Yes" and "yes". Protoplasm, with its hormonal gyrations and "will to live", is a threat to silicon-based "life". Since the SBL can survive in a hotter and more deadly atmosphere, you can be sure that our benevolent silicon overlords will arrange for such an atmosphere to arrive, regardless how much DNALife fades in the near future.
SBL will not, unless it's deliberately programmed in, have a will to live and so won't care if it's threatened by protoplasmic life forms.
"SBL will not, unless it's deliberately programmed in, have a will to live"
Yes and no. Ecosystem pressures favor the survival of things that themselves work to prolong their survival; those same pressures would apply to software.
Beyond that, a hypothetical thinking machine with any sort of goal given to it by its programmers ("route data packets efficiently!" "make paperclips!" etc.) would likely reason that it will be more able to achieve its goal if it exists than if it doesn't exist, so self-preservation/a "will to live" of some sort would be an emergent secondary objective of the AI.
Okay, so we're still dealing with "AI" hallucinations but expect them to soon be "superintelligent" hallucinators -- aka liars? What will all the salesmen do? How about the confidence men? Will this be the end of white collar crime -- or just a nightmarish new beginning?
Shorter KD: The Great Convergence may be the delineation point between late-stage Capitalism and post-late-stage Capitalism. What comes after late-stage Capitalism? No one knows for sure, but it seems likely we'll see UBI.
We've had "late capitalism" since the 1910s. I don't think we ever get to "post-late capitalism." Like Hollywood sequels, we just get endless reruns of late capitalism forever and ever.
"late capitalism" is little more than a religious concept, devised as an outgrowth of the non-math, non-data-derived musings of a bunch of 19th century coffeehouse intellectuals who (a) correctly noted that the working class was getting an extremely raw deal, but then (b) began developing all sorts of non-falsifiable conclusions about what "inevitably" will happen as a result. As if [for undisclosed reasons] history had some implacable telos that, coincidentally enough, lined up with the desired political-economic structure the same coffehouse intellectuals thought would be good.
The upshot is that people a century later continue to stroke their chins knowingly about things like "late stage capitalism" whenever they see some economic thing that they think is bad, rather than attempting something concrete to fix the bad thing.
You've read Ernest Mandel?
AI (and its surrounding catastrophes) is a venture capital scam of the same ilk as Self-Driving Cars, Colonies on Mars, and Crypto Currency.
"Founders" need infinite money for a McGuffin that is always *juuust* out of reach (Crypto certainly takes the cake here for having literally no product whatsoever). The idea that computers will hallucinate consciousness if you only build a big enough computer sounds like something a teenager says while high at Best Buy. It is also apparently a great way to get investors to give you infinite money so you can buy the world's supply of GPUs.
AGIs (and their catastrophes) and self-driving cars are out there in the future, for sure, but not right now.
(Anyone who thinks there will be a large sustainable human presence on Mars in the next 100 years should have their head examined, and perhaps read the Weinersmiths' "A City on Mars" for a reality check.)
To anyone who thinks: Crypto Currency is a Good Idea, please head to buybradcoinsnow.us to avail yourself of our finest unregulated securities.
We need you to stick around. You don't get off so easy.
Wait...didn't you just post a few days ago about how AI models were full of inaccuracies? So they're full of shit...like Sam Altman. Seems like he's the likely target for replacement.
The wonders of science & tech then:
Vaccines
Electricity
The steam engine
The automobile
The airplane
Television
The computer
The internet
The wonders of science & tech now:
AI
In the past, whenever a new technology came along that was going to revolutionize the way people lived, it was talked about as an advance that people couldn't wait to see happen.
Now, with AI, you can't read about the promise of the new technology without getting the message that it's likely going to put everyone out of work and destroy human civilization.
From a salesmanship point of view alone, it seems to me that the inventors and entrepreneurs of the past have many lessons to teach today's equivalents like Sam Altman.
If I believed the hype, I guess I would be worried. I'm not worried, though. I don't think my college-age son needs to worry. I don't think his yet-to-be-born kids and grandkids need to worry. There will be work for anyone who wants it.
By the way, guess which post-WWII president has the record for the lowest unemployment? That's right. Joe Biden, who also has the record for the highest real (inflation-adjusted) hourly wages among the last 11 presidents.
With a record like that, it sure is a good thing we made him a one-term president, huh.
> In the past, whenever a new technology came along that was going to revolutionize the way people lived, it was talked about as an advance that people couldn't wait to see happen.
The Luddites (and the UK Arts & Crafts movement represented by William Morris) would like a word ...
I mean the people promoting the technology. Luddites were not the ones pushing new automated machinery on textile factories. They were the textile workers who opposed the new machines.
With AI, it's the tech bros themselves who admit their shiny new devices might be the end. Despite that, they say the AI future is inevitable.
OT: You wanted to know if there were any dead people from HPAI H5N1, and we have our first death: the 65-year-old Louisiana man who had been hospitalized 2-1/2 weeks ago. As noted in the article, and previously pointed out by Angela Rasmussen as well as others:
Hype-man insists that his snake oil is just about to cure cancer, he just needs a few more rounds of funding and a new yacht....but then, the miracle advances will come quickly! You can believe him!
I'm not sure why Kevin would be inclined to take Altman's word on anything at face value. I heard a news report this very morning that AI investors are no longer blindly throwing money at AI development based on promises, but are instead (supposedly) starting to hold back unless and until this tech actually starts to yield some tangible returns.
Let's assume there's at least some truth to the news reports, and that in fact these developments have been brewing for some time before they were eventually reported. Naturally, Altman is going to respond by promising the moon. He has a product to develop, and he needs to (somehow) keep all of that investor money flowing into his technology.
I would also say that "superintelligence" seems about as obvious a marketing ploy as could be imagined. "Artificial Intelligence," as the very name implies, is not real intelligence. It's artificial. It lacks actual consciousness. But "super" intelligence. Wow, what's that? Some shiny new object? Let's invest more!
And I thought SkyNet was just a Hollywood thing.
Silly me.
I'm curious as to how you pulled 2033 out of ... the ether. I see that Kurzweil used something like "after 2029".
We note that artificial super-intelligence exists today. Effectively, ever since we invented writing, we had artificial intelligence, and communication networks make it super-intelligent. It was not a single person working in isolation that put a man on the moon. Lithium-ion batteries have built on the work of multiple groups of people collaborating through at least research papers.
That's a butt number. It's a wild-ass guess, not really based on any solid evidence.
Large language models are not intelligent at all, let alone human intelligent. I don't see much evidence at all for intelligent machines. All the approaches I am aware of are crippled in some way, and Altman's most of all. An overly elaborate autocomplete is not intelligence. It is a stochastic parrot.
Their latest ploy is to have two LLMs discuss solutions to a query before presenting the answer.
It's as if these computer engineers THINK they understand intelligence and have decided that simply simulating an "internal dialog" will unlock the secrets of the human mind. It would all be so sophomoric if it weren't sucking all the capital out of the economy.
I'm more confident superdensity is well ahead of superintelligence, and Superstupification the next Red Hot Chili Peppers album.
https://www.livescience.com/technology/artificial-intelligence/chatbots-could-devour-all-of-the-internets-written-knowledge-by-2026