OpenAI’s new artificial-intelligence project is behind schedule and running up huge bills. It isn’t clear when—or if—it’ll work. There may not be enough data in the world to make it smart enough. The project, officially called GPT-5 and code-named Orion, has been in the works for more than 18 months....
I get that competition is stiff in the AI biz and that vast sums of money are involved. But it's a sign of how sky-high our expectations have gotten that taking 18 months for a major upgrade is considered something of a crisis. Hell, routine major releases of Windows take twice that long.
I suspect we should temper our optimism a little bit anyway. My super-duper-oversimplified history of AI goes like this:
- Neural networks
- Deep learning
- Transformers
- ???
I figure we're still one major innovation away from true AI. There's no telling when we'll get that, and in the meantime AI will make spectacular progress but won't quite make it to the promised land of AGI. That's still a little ways away.
People can act ‘intelligently’ without having absorbed more than a tiny fraction of the world’s data. Using the same terminology for what LLMs are doing and what people are doing will only lead to incorrect suppositions, much as one might expect airplanes to have feathers and lay eggs because, like birds, they fly.
Right now ??? appears to be thought tokens of some sort, which enable LLMs to branch out into deeper evaluation. The o3 results are tantalizing.
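A toy sketch of that "branch out" idea in Python, with `propose_chains` and `score` as hypothetical stubs rather than anything o3 is confirmed to do:

```python
import random

def propose_chains(prompt, n=4):
    # Hypothetical stub: sample n candidate reasoning chains ("thought
    # tokens") from a model. Here we just fabricate placeholder strings.
    return [f"{prompt} -> reasoning chain #{i}" for i in range(n)]

def score(chain):
    # Hypothetical stub for a verifier/reward model rating a chain.
    return random.random()

def answer(prompt, n=4):
    # Spend extra inference-time compute: branch into several chains of
    # thought, evaluate each, and keep the best-scoring one.
    return max(propose_chains(prompt, n), key=score)

print(answer("What is 17 * 24?"))
```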
4. Convergence
5. Singularity
6. Connor sends Reese back to the past to stop Singularity.
👍👍👍👍👍
The innovation will be in marketing or messaging: convincing clients, and more importantly sources of funding, that 'True AI' is coming in the next release. Just one more round of funding, pre-order today! This is the final frontier.
Our grandkids' grandkids will forget what AI used to mean.
Assuming there *will* be grandkids' grandkids. Unless AI comes up with a solution to global warming before 2050, they'll be immolated in the resource wars to follow.
We have a solution to global warming. The difficulty lies in implementation.
AI used to mean "whatever we don't know how to do (with computers) yet."
Well, my guess is that this is close to as good as it gets for LLMs.
Probably you'll see better specialized deep neural nets, things like "radiologist assistant," "highway driver," etc. But AGI from nothing more than text and image processing just isn't going to happen.
Companies would probably do better learning how to do quick-turnaround specialized models rather than continuing down the path to AGI.
You mean like the specific skill set that, say, a human radiologist develops through training and experience? These doohickeys are, at best, rough emulations of a tiny fraction of what comprises human intelligence.
4. positronic brain?
How are we supposed to make artificial intelligence when we don't even understand how natural intelligence works? AI is easy to imagine but very hard to create, which is why it is ubiquitous in fiction but yet to appear in real life. I think AGI is probably decades away and will require much more than one more breakthrough.
We make airplanes fly, but not like any flying animals. Artificial intelligence will not necessarily imitate natural intelligence; indeed, my opinion is that AI is extremely unlikely to be an imitation of NI. You don't need to know how a bumblebee flies to get a 747 off the ground, and you don't need to know how a human thinks to build a machine that thinks.
Having elected and enabled an authoritarian government, now we will cheer on the development of AI tools which will allow it to supercharge oppression. I don't even know what to say about it anymore.
"Having elected and enabled an authoritarian government..." is directly related to "we don't even understand how natural intelligence works."
You presume that Trump voters were not rational. Assholes, certainly some of them, but they largely had a rational reaction to the addition of 8 million immigrants in the past 3 years and to housing prices driving them into downward mobility.
A predictable reaction, even. Their choice of change agent is only rational because Trump was the only one offering a break from the status quo; neither Republicans nor Democrats were willing to rattle the cages of their donors.
The reasons that white "working class" people vote for Trump are not rational, unless you think that preservation of White Christian Supremacy is rational. It is not rational from an economic point of view since Trump has made it very obvious that his economic actions and policies will favor corporations and the rich - he did nothing for lower-income people in his Administration. It is not rational to believe any promises that Trump makes, since he has reneged on almost all that might favor lower-income people.
There was no particular agitation about housing prices in 2016, when Trump first ran. His appeal then as now was on the basis of racist xenophobia - "poisoning the blood" as he has since put it. Obama actually had an extensive program of expelling those not here legally. Trump had effectively captured the Republican party long before the surge in immigration that began in 2020.
Many of the employers who vote for Trump are perfectly happy to employ illegal immigrants. Is it rational for them to want to have all their low-wage workers expelled?
Housing and immigration are both complicated subjects which are probably not fully understood by anyone. Supposing that Trump voters are rational will not help to understand either.
"The reasons that white 'working class' people vote for Trump are not rational, unless you think that preservation of White Christian Supremacy is rational."
I know that this is your grand unification theory (and I presume that you like capitalizing the term because you hope it will catch on/get trademarked or something), but it doesn't really explain why Trump did better with nonwhite voters this time around than he in particular and other Republicans in general did in the past.
"The reasons that white "working class" people vote for Trump are not rational, unless you think that preservation of White Christian Supremacy is rational. "
Generally agree with this. However, from a personal economic point of view, it is rational to oppose immigration if you are worried that immigration would cause you to lose your job or make your rent unaffordable.
If you go back and review 2016 without partisan lenses on, you'll see that it was a "change" election. Obama promised change but didn't deliver. Trump handily bested a whole rogues gallery of establishment cardboard cutouts already offering thinly veiled racism.
Meanwhile, on our team we had Bernie threatening the status quo, who was quashed effectively by establishment Dems.
That left Trump as the only candidate standing who didn't represent more of the same. More of the same didn't cut it with the voters of 2024.
The Republican Party has been the default home of racists for decades, the only thing Trump did is stop wearing the mask.
The backlash of more of the same is happening globally.
Yes, as near as I can tell, you aren't rational.
Can you cite a source for an eight-million-person increase in undocumented immigrants in the past three years? You use the term ‘adding’, which implies that you are talking about the net number, not just the number crossing into the country without authorization.
Census Bureau, revised report. This includes ALL immigration, not just illegal.
https://www.census.gov/newsroom/blogs/random-samplings/2024/12/international-migration-population-estimates.html
We don't have a working definition of just what intelligence is so this topic is rife with (wishful) categorical errors. But I can give you a rough operational test to detect when it is there: 'AI' will be AI when it can learn the same way humans do, in the classroom with other students, acquiring a new language by observing others speak it, etc. IMHO, of course. I'm not saying that's a definition, mind; I'm just saying we'll know that it's in there somewhere when AI's can do that.
The ultimate way for AI to have the full capability of a human brain is for it to encompass all of the human brain's features, including its capacity for error, ability to deceive, susceptibility to emotional and physical state, and all of the other things that made us desire AI in the first place.
A science fiction writer once defined "intelligent alien" as "a being that thinks as well as a human but not like a human". I'm fairly certain that true AI will fit that definition; it will think, but not like a human. From that it seems to follow naturally that it won't learn like a human.
Yeah, why bother with the peer-reviewed Turing Test when we can Just Make Shit Up (tm).
“I figure we're still one major innovation away from true AI.”
Does this figuring have a higher or lower confidence level than your perennial assertion that full self-driving* will become ubiquitous across the entire US** any day now?
*by which I mean nobody in the vehicle needs any driving skills whatsoever and nobody needs to be permanently watching stuff outside the car to be able to take over at a moment’s notice
**not just in the sunnier, flatter parts with well maintained roads and not too many rule-flouting pedestrians, but also in snowy Minnesota, mountainous Montana, crumbling infrastructure Detroit and rampant jaywalking Manhattan
Waymo runs, and runs fine, in SF and Phoenix. They have just started in LA and Austin.
And they do a LOT better than people. Most recent report, from about two days ago:
"
The study compared Waymo's liability claims to human driver baselines, which are based on Swiss Re's data from over 500,000 claims and over 200 billion miles of exposure. It found that the Waymo Driver demonstrated better safety performance when compared to human-driven vehicles, with an 88% reduction in property damage claims and 92% reduction in bodily injury claims.
In real numbers, across 25.3 million miles, the Waymo Driver was involved in just nine property damage claims and two bodily injury claims. Both bodily injury claims are still open and described in the paper. For the same distance, human drivers would be expected to have 78 property damage and 26 bodily injury claims.
"
Only jaywalking Manhattan is much of a challenge. Although some companies (cough, Tesla) think the key to affordable self-driving is reducing the sensor modalities down to the limits that humans have, there are plenty of other companies willing to bet that the cost of additional sensors comes down. In a few years I think I would easily trust a self-driving car with lidar, radar, sonar, multi-angle cameras, etc. over any distractible human with only two eyes driving in the snow.
Just to say, Google Search gives much better results than it did a year ago, putatively using AI. I can ask a direct and fairly complex question and (usually) get a fairly sophisticated, narrow, and useful answer now. So that's fine; it's saving me time. But once in a while it is inexplicably dull and dense. If there's any pattern, it seems to be homonyms and idioms that flummox it.
YMMV.
Google has wisely realized that AI is a real threat to their search business model of selling eyeballs. I will go out on a limb and predict they start selling "authoritative sources" next.
I'm glad it's working for you. I've kinda given up on Google Search for general items, though I still use Google Scholar. For most summaries, Wikipedia.
Not happening within 18 months… So, no “Moore’s Law” with AI doubling every 18 months?
The race against time is probably about the lawsuits.
If courts begin ruling that companies can't train their LLMs on copyrighted material, it puts the industry and its investors in a very tight spot.
The Atlantic had an article recently saying we're at peak AI. The current large language models, LLMs, are already being trained on, well, everything ever written (in English), and that has taken a lot of time, energy, and money. Google researchers published a breakthrough for LLMs a while back that has them looking back at larger chunks of text to predict what will come next. And we're at the maximum usable length now: going bigger takes a lot more effort for little gain. What this and other articles also pointed out: the achievements touted took how many tries??? Granted, it didn't take long to generate all those tries, but someone has to filter the results. In general, filtering replies still isn't being done well.
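That context-length wall is partly just arithmetic: self-attention compares every token with every other token, so compute grows roughly with the square of the window. A back-of-the-envelope illustration:

```python
# Self-attention scores every token against every other token, so a context
# of n tokens costs on the order of n^2 comparisons per layer per head.
for n in [4_000, 8_000, 16_000, 32_000]:
    print(f"{n:>6} tokens -> {n * n:>15,} attention scores (per layer, per head)")
```

Doubling the window quadruples that cost, which is why "just go bigger" stops paying off.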
The next big things are "multi-modal" and "reasoning" engines. According to the article, the new models will discern the laws of physics on their own. We'll see.
https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/
https://www.theatlantic.com/newsletters/archive/2024/12/chatgpt-wont-say-this-name/681129/
> Neural networks
> Deep learning
> Transformers
> ???
Is it really about the software? Or the hardware?
* CPUs
* GPUs
* TPUs
...
1. Neural networks
2. Deep learning
3. Transformers
???
We know this.
Elements include
- RAG (to get better facts, and more recent facts; a minimal sketch follows this list)
- large pre-contexts (to turn the AI into an expert in the field of interest)
- multi-modal models (first steps in "grounding" the AI so that it possesses "common sense" rather than just wordcel links between vocabulary items)
- step-by-step aka search models (like the o3 system announced yesterday)
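A minimal RAG sketch in Python, with a toy bag-of-words retriever and a stubbed `llm` call standing in for real embedding and generation APIs:

```python
def embed(text):
    # Toy stand-in for a real embedding model: a bag of lowercase words.
    return set(text.lower().split())

def similarity(a, b):
    # Jaccard overlap as a crude stand-in for cosine similarity.
    return len(a & b) / len(a | b)

def retrieve(query, docs, k=2):
    # Rank documents by relevance to the query; keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: similarity(q, embed(d)), reverse=True)[:k]

def llm(prompt):
    # Stand-in for the actual model call.
    return f"[model answer conditioned on: {prompt[:60]}...]"

docs = [
    "The o3 system was announced in December 2024.",
    "Transformers were introduced in 2017.",
    "RAG prepends retrieved documents to the model's prompt.",
]
query = "When were transformers introduced?"
context = "\n".join(retrieve(query, docs))
print(llm(f"Context:\n{context}\n\nQuestion: {query}"))
```

The point of the pattern is that the facts live in the retrieved context, not in the frozen weights, which is what buys you "better facts, and more recent facts."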
We also know and are continually learning more about
- how to design these systems. The very fact that I switched from the term "model" to "system" encompasses this change. For an example, see Apple's announcement two days ago of ReDrafter as a way to get a 2.5 to 2.7x speedup out of an LLM by modifying how it samples.
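ReDrafter's specifics aside (it trains a recurrent draft head), the speculative-decoding family it belongs to is easy to sketch. A toy skeleton, with both "models" reduced to greedy lookup tables:

```python
# Toy sketch of the speculative-decoding family ReDrafter belongs to.
# A cheap draft model proposes several tokens at once; the expensive
# target model keeps the longest agreeing prefix. (In practice the target
# verifies the whole draft in a single batched forward pass.)
DRAFT  = {"the": "cat", "cat": "sat", "sat": "on", "on": "a"}    # cheap model
TARGET = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}  # expensive model

def generate(prompt, steps=4, k=3):
    out = [prompt]
    while len(out) <= steps:
        # 1. Cheap draft model proposes k tokens in a row.
        draft, cur = [], out[-1]
        for _ in range(k):
            cur = DRAFT.get(cur, "<eos>")
            draft.append(cur)
        # 2. Expensive target model verifies; keep the agreeing prefix.
        cur = out[-1]
        for tok in draft:
            if TARGET.get(cur, "<eos>") != tok:
                break
            out.append(tok)
            cur = tok
        else:
            continue  # whole draft accepted; draft again
        # 3. On the first disagreement, take the target model's own token.
        out.append(TARGET.get(cur, "<eos>"))
    return " ".join(out)

print(generate("the"))  # several tokens accepted per expensive verification
```

When the draft agrees often, you pay for one expensive verification and bank several tokens, which is where the 2.5 to 2.7x class of speedups comes from.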
- how to train these systems. Two years ago I raised the question of whether these systems could be trained more efficiently if we treated them like children: start training on simple language and move up, train on simple problems and move up, train on one language and then move to another. We are getting answers to these, and the answers appear to be YES! And substantially so.
The first experiment was Eldan and Li (TinyStories, 2023), with more of a "ramped" approach to English. Great success.
Followed by the Textbooks Are All You Need paper (whose authorship we're all supposed to pretend we don't know, but obviously it was MS, with Eldan and Li as two of the co-conspirators), which shows that you get much better results training on textbook-quality material than on random slop.
I haven't yet seen anything directly discussing the issue of mixed languages (and whether you are better off learning one language and its embeddings, then moving to a second language, for which one expects most of the already-learned embeddings to be close to their final form).
However I do know that one of the things Apple have done in training their models (contra the philosophy of earlier schemes) is to bin documents rather than train on random fragments of documents mixed together. The latter is easy, mechanical, and superficially results in better GPU utilization, as opposed to dealing with the random lengths (and thus somewhat random step-completion times) of real documents held together rather than shredded; but just like the previous two examples (tiny stories and textbooks), doing things right allows for much less training of much smaller models that still do extremely well.
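The "treat them like children" idea above, reduced to its scheduling skeleton (with `difficulty` and `train_step` as hypothetical stand-ins):

```python
corpus = [
    "the cat sat on the mat.",
    "dogs run fast in the park when it rains.",
    "thermodynamics relates macroscopic heat flow to microscopic statistical behavior.",
]

def difficulty(doc):
    # Hypothetical difficulty proxy; a real curriculum might use vocabulary
    # rarity, syntactic depth, or a reference model's loss instead.
    return len(doc.split())

def train_step(batch):
    # Stand-in for a gradient update on whole, un-shredded documents.
    print("training on:", batch)

# Curriculum: simplest documents first, harder ones later, documents intact.
for doc in sorted(corpus, key=difficulty):
    train_step([doc])
```

Everything interesting is in the ordering and the binning; the update itself is unchanged.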
Point is, don't listen to what some people are saying. There is, at least for 2025, no reason yet to assume any slowdowns. Every six months we know how to do this all much better than we did, and that knowledge (taking into account that THINGS TAKE TIME) is being incorporated into each new system design.
I am not convinced that these firms aren’t barreling down a dead-end street. Train your tree-climber all you like: if your goal is to reach the moon, you are doomed to fail.
These LLMs are parrots, and without a qualitative change, that’s all they will ever be.
Ukraine’s First All-Robot Assault Force Just Won Its First Battle
https://www.forbes.com/sites/davidaxe/2024/12/21/ukraines-first-all-robot-assault-force-just-won-its-first-battle/
The takeaways from this are split between:
a) This is a reflection on how dire Ukraine's manpower shortage is.
b) Necessity is the mother of invention.
I'm of the opinion it's (b), and it's not just Ukrainians innovating. The big, one-way Phoenix Ghost drones flying into Russia and hitting factories, military complexes, and fuel depots are from a tiny US company. Having seen what cheap, lightweight drones can do, there's a gold rush to develop an army of these things to counter Russia's sheer manpower and equipment advantages.
Well of course there is. Just like there were rushes to develop SONAR and RADAR and the A-bomb in WW2.
The issue is not that there is a rush, it's how fast is the Russian (plus Chinese-augmented) OODA loop, given western sanctions. Is it fast enough to counter soon enough?
Given the ability of quantum computing to quickly solve certain problems that conventional computers couldn't solve in the projected lifespan of the universe, I expect it will contribute significantly to the development of AI. A quantum computing neural net could produce interesting results.
I don't think that categorizing all knowledge into a 100+ dimensional space will create an emergent AGI; however, it seems like something that would assist in bootstrapping one. Maybe it isn't the destination, but it also is not a dead end. Also, I feel people are really missing the breakthrough in "understanding the question asked" that has happened with LLMs. Maybe the answers are wrong or hallucinatory sometimes, but very, very rarely are they off-topic.
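To make that "100+ dimensional space" concrete: each piece of text gets a vector, and "understanding the question" shows up as the query landing near related content. A tiny illustration with made-up 4-d vectors (real models use hundreds or thousands of dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Made-up toy vectors; a real model produces these from text.
vectors = {
    "king":  [0.9, 0.8, 0.1, 0.0],
    "queen": [0.9, 0.7, 0.2, 0.1],
    "toast": [0.0, 0.1, 0.9, 0.8],
}
for word, v in vectors.items():
    print(f"king vs {word}: {cosine(vectors['king'], v):.2f}")  # related words score higher
```

Nearness in that space is why the answers are rarely off-topic even when they're wrong.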