"What does ChatGPT mean for _______ ?" You can fill in the blank with law, news writing, college essays, or a million other things. Generally speaking, though, the answer is nothing. I mean no disrespect to the remarkable output of ChatGPT, but Bob Carpenter gets it right:
I’m getting that same uneasy feeling I felt watching the lifelike movement and reactions of the Boston Dynamics robots. It’s the uncanny valley effect of watching something that’s almost, but not quite fully lifelike.
As good as it is, ChatGPT right now is only a curiosity and a warning. It's a curiosity because even a modest effort exposes it as an idiot savant, full of on-point facts but not really able to draw sophisticated conclusions from them. It's a warning because it's probably only a few years away from having the knowledge and verbal abilities of a PhD student.
How do we respond when that happens? I'm not sure, but I'll say this: we currently live in a world full of lawyers and professors and journalists who are able to calmly accept the prospect of millions of unemployed truck drivers when AI fills the world with self-driving trucks. But they will probably be a wee bit more upset at the prospect of millions of unemployed lawyers, professors, and journalists.
Maybe this is a good thing. The only way we'll get a serious response to AI is if either (a) it affects the working class in numbers so big it creates riots, or (b) it reduces the income of the ruling class by 1% or so. Both would be considered problems of about the same magnitude and would provoke roughly the same energy toward finding a solution.
Take this as you will, but not one word of this post is a joke.
I don't get you, Kevin... you scoff at the idea of self-driving cars, yet swallow whole that this fancy chatbot is going to take over all knowledge work any day now.
What? I'm a huge booster of driverless cars and of AI generally. Where did you get this?
May be having a senior moment... would have sworn you've been skeptical about driverless cars.
Kevin has only been slightly less enthusiastic about driverless cars than Elon.
I might rephrase that and say: your track record on driverless cars gives hope that this is just another failed Kevin prediction.
I believe it's skepticism of how quickly we'll have full, true self-driving vehicles.
Even this skepticism is belated. Some years back he announced that full driverless cars were here, based on a press release from (IIRC) Waymo. The hype around driverless cars was always "We have solved 95% of the problem. How hard could those last five percent be?" Now, years later, we are perhaps at 97 or even 98%. How hard could those last two percent be?
As for writing AI, it clearly is a thing for utterly routine writing. If you need a paragraph on last night's high school football game and you have a few stats, you can plug them in and get a serviceable paragraph. Anything more complicated and you get what looks remarkably like a mediocre undergrad padding a term paper with word salad in order to meet the page count. Professors will have to reconsider their assignments to reflect this new reality. My off-the-cuff suggestion is ditch the page count requirement and emphasize making a coherent argument regardless of length.
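To be concrete about how unglamorous that routine-recap case is: it's essentially template filling, which newsrooms automated long before large language models. A minimal Go sketch of the idea, with every name and stat invented purely for illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// GameStats holds the "few stats" the comment imagines plugging in.
// The fields are hypothetical, chosen just for this example.
type GameStats struct {
	Winner, Loser       string
	WinScore, LoseScore int
	TopPlayer           string
	Yards               int
}

// recapTmpl is a fill-in-the-blanks sentence; no intelligence involved,
// just substitution of the supplied stats.
var recapTmpl = template.Must(template.New("recap").Parse(
	"{{.Winner}} beat {{.Loser}} {{.WinScore}}-{{.LoseScore}} on Friday night, " +
		"led by {{.TopPlayer}}'s {{.Yards}} rushing yards."))

// recap renders the template for one game's stats.
func recap(s GameStats) string {
	var b bytes.Buffer
	recapTmpl.Execute(&b, s)
	return b.String()
}

func main() {
	fmt.Println(recap(GameStats{"Central", "Lincoln", 28, 14, "J. Alvarez", 142}))
}
```

Anything beyond this kind of slot-filling is where the "mediocre undergrad padding a term paper" behavior starts.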
Hey, Dr. Paul Krugman, Nobel laureate in economics, agrees with Mr. Drum. Haven't you watched the old 1960s Star Treks? They knew this was coming. Also tablet computers and cell phones.
At the end of his column today, Krugman offers this:
Self-driving cars are the technology of the future and always will be.
Fusion power! Just 10-15 years away since 1955.
You're likely thinking of Atrios, who is the yin to Kevin's yang on this issue.
But you got Kevin to reply. That's no mean feat for the comment section!
Lawyers (as in “people who can represent other people in court and argue on their behalf”) will be the last jobs automated, mostly because Congress and the Supreme Court are full of lawyers. They’ll look out for their own, plus they’re some of the most set-in-their-ways people in our entire society. Wasn’t it hailed as a miracle not too long ago that SCOTUS allowed business to be conducted over Zoom calls? Congress still imposes all sorts of arcane rules that come out of nowhere, e.g., rules that would be hard to program into a computer.
The run-of-the-mill tasks of writing wills and whatnot, especially for the lower classes, will of course be ripe for automation. But the job of “lawyer” will not go away anytime soon.
As a lawyer who writes "wills and whatnot," I'd say this may well end up aiding lawyers, as all of the word processing tech and document assembly programs have done.
What happened was the job of legal secretary got decimated, but the legal profession sort of slid by that one as all the Sandra Day O'Connors who were offered legal secretary jobs became lawyers.
The drafting programs and AI are great once you decide on the question.
You should hear the calls we get. You have to essentially listen to people and then tell them what the question is they actually want to ask, as opposed to the question they ask.
HaHaHaHaHa!!!
The last paragraph is a repeat of what my late wife told me about handling phone calls at the IRS back when dinosaurs roamed the internets.
She developed a rep at the San Jose office. Lawyers, accountants, and CPAs would request her.
These run of the mill tasks can be handled by Nolo forms today...
You hire a lawyer for things like real estate transactions for the same reason you pay (repeatedly) for title insurance: to have someone to sue (i.e., a way to get recompense) in the (small but non-zero) chance that something goes wrong. An AI, by itself, does not change that logic.
Now, if, say, Nolo, were to offer a service that combined BOTH the "legal boilerplate" aspect of real estate law AND the "insurance" aspect if anything goes wrong, now we're talking...
And when we’re all fighting over the last few jobs left, and corporations see their sales go down and start suing each other for destroying their business models (more automation putting their consumers out of work), the job of “person to represent other people in court,” i.e., “lawyer,” might actually increase.
When AI is able to threaten the wealth and power of the ruling class, it will be capable of manipulating all inputs to convince the working and ruling classes that they inhabit the best of all possible worlds. There will be no resistance because no one will know, or be able to recognize, that their entire environment and every possible repertoire of behavior is a product of AI.
Also see Krugman (subscriber only):
https://www.nytimes.com/2022/12/06/opinion/chatgpt-ai-skilled-jobs-automation.html
Krugman makes the point, which I have made many times, that automation over the last 300 years has not led to greater unemployment, nor to a lower standard of living for workers. It has been better for everyone, although the benefits have not been trickling down as well over the last 50 years as they did before.
Yes, the development of more sophisticated computing may mean that "robots" are now coming for *your* job, if you are a white-collar worker. Actually, they have already taken over from many paper-pushers as storage and transfer of data have improved. But the industrial revolution did not do away with the jobs even of menial and non-skilled workers; it just meant that new jobs arose to take the place of old ones. Until around the beginning of the 20th century most people were farmers, but now farmers are a small fraction of the population, and all those potential farmers weren't put out of work. Why would this process not continue as white-collar jobs are affected?
But there is one thing that happened over most of the industrial revolution that has slowed down or stopped, and that is the reduction in work hours. Why has the work week been frozen since 1938? Until then automation not only meant more stuff for people but also more leisure time. The more things are automated the less people should have to work, especially at non-stimulating jobs.
The real problems are not automation, they are things like lack of improvement in work hours and globalization (somebody in India will probably take your job before a robot does) and these are mostly political, not technological.
When the Auto-Docs arrive and MDs are reduced to wiping ass, it will be a day of celebration by my reckoning.
This is still ignoring the gigantic leap that such AI would need to be able to make, which is the same hurdle that all AI has faced: general intelligence. As in the ability to reason and think critically. AIs right now are just models and algorithms. They're little more than very advanced bots. Bots aren't going to take jobs that need critical thinking (in any degree) until that happens, and there's no sign of it happening anytime soon, if ever.
=>>Long comment warning--if it means nothing to you, please feel free to ignore it. If you can read it, you might gain a new insight into what's coming.
"AIs right now are just models and algorithms. They're little more than very advanced bots."
Well, I'd be very cautious about making a broad generalization like that. I sent a link to Krugman's ChatGPT column to a friend of mine who is very smart and very knowledgeable about AI software. He started out as a software engineer, then moved his education and his career into Math and Operations Research. Here's what he wrote back to me.
Did you follow the link to the latest GPT? There are some incredible examples there. This use case might have made me fear for my job, had I followed a different career path:
----------------------------------
USER
this code is not working like i expect — how do i fix it?
resultWorkerErr := make(chan error)
defer close(resultWorkerErr)
go func() {
    defer cancel()
    resultWorkerErr <- b.resultWorker(ctx)
}()
err := b.worker(ctx)
cancel()
if err == nil {
    return <-resultWorkerErr
}
return multierror.Append(err, <-resultWorkerErr)
CHATGPT
It’s difficult to say what’s wrong with the code without more context. Can you provide more information about what the code is supposed to do and what isn’t working as expected? Also, is this the entire code or just a part of it?
USER
just part of it — the error never surfaces. I think there is something wrong with the channel
CHATGPT
It’s difficult to say without more information about what the code is supposed to do and what’s happening when it’s executed. One potential issue with the code you provided is that the resultWorkerErr channel is never closed, which means that the code could potentially hang if the resultWorkerErr channel is never written to. This could happen if b.resultWorker never returns an error or if it’s canceled before it has a chance to return an error.
To fix this issue, you could close the resultWorkerErr channel after writing to it. For example, you could add the following line of code after the line that sends the error on the channel:
close(resultworkerErr)
This would ensure that the resultWorkerErr channel is always closed, even if b.resultWorker never returns an error. Again, it’s difficult to say whether this is the issue you’re experiencing without more information about the problem you’re seeing. Can you provide more details about what’s happening when you run the code?
Wow. It doesn't seem too far away from AI being able to develop software upon request.
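If you want to poke at the pattern yourself, here's a minimal, self-contained Go sketch of the same worker/result-worker shape. Everything in it is invented for illustration (the real b.resultWorker isn't shown in the transcript); the one substantive change is the buffered channel, which is the usual way to guarantee the goroutine's send can never hang.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// resultWorker is a hypothetical stand-in for the b.resultWorker
// in the transcript: it runs until cancelled, then reports an error.
func resultWorker(ctx context.Context) error {
	<-ctx.Done() // block until the context is cancelled
	return errors.New("result worker stopped")
}

// collectResult mirrors the shape of the quoted snippet. The buffered
// channel means the goroutine's send completes even if no receiver is
// waiting yet, so this function cannot deadlock.
func collectResult() error {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	resultWorkerErr := make(chan error, 1) // buffered: send never blocks
	go func() {
		resultWorkerErr <- resultWorker(ctx)
	}()

	cancel() // pretend the main worker finished; stop the result worker
	return <-resultWorkerErr
}

func main() {
	fmt.Println(collectResult())
}
```

With the buffer, the send would complete even if the main path returned without receiving, which is roughly the hang the chat is circling around (and a different fix from the close() ChatGPT suggested).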
That's pretty good. One limitation that occurs to me: If I can't find a description of my problem, or a similar problem on forums or Stack Exchange, then maybe ChatGPT won't be able to help me either.
AIs are limited by their training information. It's not just that they'll need to be able to build their own training data; they'd also need to vet it for truthfulness. And they'd have to generate more data, i.e., do studies and run actual experiments.
What worries me is, if automation actually results in massive unemployment, we'll have lots of goods and services produced ... but few people who can afford them.
I have heard folks argue that this is wonderful! We will all then be able to get rich, fulfilling, (sub-minimum-wage) jobs taking care of Alzheimer's patients and other frail elderly! Nobody will need to study "science" or "math" anymore! No need to learn computer programming -- it'll all be done, or done by young people in third-world countries!
Needless to say (or maybe it is needed, so I will), this has never been the kind of life I have wanted. Actually, neither end, not the caregiver nor the cared for. It seems that the ChatBot could be quite useful here. Talking to the elderly, the forgetful, the people that keep repeating the same stories over and over, and never getting bored or losing patience. Moreover, by keeping track and analysing conversations, the ChatBot 2000 could report when an old story that gets repeated all the time starts to change, giving a heads up that the person's memory is drifting.
Having watched "Raised by Wolves", I'm sold on the concept that android parents could be better than the real thing under many circumstances.
Comparing "lawyers...professors and journalists" to truck drivers is kind of insulting to truck drivers.
The 6-y.o. in me wants to know if AI will be president someday, 'cause if not then I know what I want to be when I grow up. (It's true we can't all be president but maybe that's your problem, not mine?)
I just came here after seeing a David Atkins tweet saying that most millennials will never enjoy the same standard of living as their parents and we're likely to get a revolution. Then I read this post, with its warning that AI is going to take all our jobs, from truck driver to college prof. It's a bleak future*, wherever you look.
The future is always bleak, if you want it to be. When I was an actual 6 y.o., everyone worried about nuclear war. The most savvy expected we'd all be dead by 2022. Yet here we are! (Nobody back then predicted the threat of a Nazi takeover of America. You'd have been laughed out of the room. Yet here we are!)
Often we don't eliminate threats, or solve our problems, but we adapt to them. In my mind, the greatest threat remains the climate, and I think we'll see a mix of solutions, mitigations, and adaptations that will get us though (most of us, that is). No doubt, there'll be a heap of pain in the process.
My guess for the future: a time of incredible promise marked by episodes of unspeakable catastrophe. Humanity will survive, with major threats (climate, AI, etc.) substantially contained, as we adapt and eventually become a multi-planetary species.
* FTR, I don't believe the Atkins dystopia is going to happen, and I don't think the AI threat is as imminent or as dystopian as advertised. AI will someday have a profound effect, but the net outcomes are too unclear to predict.
I'm honestly not able to understand how someone like Kevin could be this credulous, for this long, when it comes to AI.
Aren't we only a couple of years away from his predicted date by which not only would self-driving cars be ubiquitous, but human-piloted cars would be outlawed?
I think we've already blown past Kevin's prediction on AVs. I recall his claim was that wide availability would come by 2020. He may have given a range, but 2022 was the upper end of it. So yeah, I'd take his prediction that ChatGPT will have PhD-level cognition with cargo-ship amounts of salt.
May I suggest any of the Culture books by Iain Banks for a glimpse of what a world (a galaxy really) with real AI might be like. Looks pretty good to me.
What scares me more than AI taking over jobs is AI becoming your friend and confidant. With the epidemic of loneliness, I can see friendly, "empathetic" AI bots becoming so entrenched in some people's lives that they become Trojan horses for something more nefarious. Like a Fox News bot in a sexy British voice telling you everything you want to hear and then some.
AI doesn't scare me. What scares me is artificial intelligence in combination with artificial stupidity, as when, to take just one of countless examples, a Supreme Court justice of the most powerful nation on earth neglects to recuse himself in a case that might potentially implicate his wife, and then the chief justice of the most powerful nation on earth can't figure out why so many people are upset ... not to mention vampires and werewolves.
What I haven't heard in the whole AI discussion is that intelligence is just a tool that serves a specific interest. The trick is in being able to resolve conflicts of interest, which aren't resolved until people are convinced that they've been treated with respect. Stick that in your AI machine and code it.
Not with the current technology, it's not.
ChatGPT is utterly amazing. But it has absolutely zero concept or management of truth, and adding it is not a matter of expanding the size of the model (the main thing that has driven the large language models forward).
Find a profession that requires words/text but does not require truth, and ChatGPT is going to massacre it.
But add in the truth requirement, and the story is rather different.
Of course, if someone figures out how to stop ChatGPT from being such a convincing liar/fabricator/con artist/bullshitter, then the story will be different again.
Hmmm, well, the examples of ChatGPT falling for several classic maths teasers don't exactly fill me with confidence that it 'understands' anything except at a purely surface level.
Which, to be fair, is quite adequate for a lot of modern newspaper reporting. Alas.
As a result of Dall-E and other AI-driven art bots, there's a discussion about the future of art.
Whilst imagination can appear at times to be little more than a handful of disparate words put together, or a seemingly blank canvas hanging in a museum, art is much more than what you see on the surface and in passing.
Art is authenticity, thought, and experience. These are things an AI short of being sentient cannot produce.
That which relies on labor, particularly rote jobs, will be replaced with robots. That which relies on logical processing can be replaced by AI.
"Perfection is a matter of quantity, not quality"
(from the cover of "Departures from the Northern Wasteland," Michael Hoenig, 1978)
I guess that when my relative who was born yesterday arrives in college, he will have to write all his essays by hand, in class, under professorial supervision. That might just be the only way to guarantee that you're grading the work of a human, once the AIs get smart enough to defeat all other ways of distinguishing their work from human hands. Or maybe the AIs, as they get smarter, will also develop moral awareness and refuse to help students who are trying to cheat?
Well, ChatGPT does pretty darn well, that's for sure. I think Quora is soon going to go extinct.
Except one thing. As an example, a certain person got to be President by saying exactly the sort of things that candidates never say, and never backing down from them. Would an AI do that? How big of a training set would it have to see before it would try stuff like that? Would it never back down?
Why We're All Obsessed With ChatGPT, A Mind-Blowing AI Chatbot
This artificial intelligence bot can converse, write poetry and program computers. Be careful how much you trust it, though.
Stephen Shankland
Because it seems the new anti-spam measures don’t allow links, use that as search fodder to find the full article.