Tyler Cowen is a fan of using AI to answer questions of all sorts:
I was reading a book on Indian history, and the author referenced the Morley reforms of 1909. I did not know what those were, so I posed a question and received a very good answer; you can read it here. I simply asked "What were the Morley reforms done by the British in India in 1909?"
I would treat this with at least a bit of skepticism. The reason is that when I ask ChatGPT about something I'm familiar with, I often find that its answers are lacking. How do I know the same isn't true of topics I'm not familiar with?
Now, I'll admit that the iffy answers are usually about recent events and involve quantitative data. For routine historical and biographical questions, ChatGPT is probably pretty reliable. Generally speaking, though, if the details of an answer really matter, you should verify them.
I can't see how AI is any improvement on a conventional search engine, which surfaces the very sources ChatGPT drew on. For example: https://www.britannica.com/topic/Indian-Councils-Act-of-1909
Right now, Wikipedia is more reliable for straightforward information. It is really a marvelous site. AI will exceed it at some point, but that point is not here yet.
That was my first thought. How would AI improve on Wikipedia?
For questions like that, the AI response is usually just a literal summary of the Wikipedia article. LLMs tend to do a very good job on summarization tasks.
(1) Obviously it pays to verify information given by LLMs.
(2) IMHO it also obviously pays to try different LLMs. I personally swear by Anthropic/Claude.
We're at peak ChatGPT already:
https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/
Large language models (LLMs) have already been trained on just about every bit of digitized data available--there really isn't room to get better. And they take tons of energy to train. The article points out other limitations.
It seems like 50% of the commentary I read says that LLMs have jumped the shark. The other 50% tells me they're on the verge of explosive improvements, leading fairly soon to robust AGI.
I don't know what to believe!
Ask ChatGPT
This isn't AI lol, just voice-driven search. Machine learning training was performed to tune the algorithm(s), to be sure, but that's only AI to the business hype class.
The translation of your verbal request into a text query may count as limited AI to some people, as may the follow-up verbal rendering of the text results. But otherwise, as mentioned above, this isn't much different from any other search request, except that you get only one "preferred" answer in return rather than a collection you can scan for the most relevant and best-sourced results.
For a good example of an AI response not to be trusted, see my comment on the previous post regarding M1 money supply growth in 2020.
Another example:
I saw my doctor this week about pain in my knee, and he asked if I was OK with him using an AI app on his phone that would listen to our conversation and help produce notes of what we discussed. All fine and good, I said. On my way out I got a copy of the visit summary from the nurse, which I read when I got home. It said that I came in for: (a) knee joint pain, (b) a flu shot, and (c) an abdominal aortic aneurysm screening. Both (a) and (b) were true, but (c) was a complete invention. Very strange.
AI is good for writing more quickly, but accuracy still requires human fact-checking and proofreading. I think that needs to be the best practice going forward.
I think you got my medical records by mistake.
United Healthcare has been using AI since 2019 to evaluate claims; denials of coverage increased dramatically, as, presumably, did profits. Maybe the media that was hand-wringing about the effects of inflation on the ordinary folks they claim Dems ignore should have covered the disastrous effects of insurers denying coverage. They would have if they truly cared, but they were just interested in scolding the same Dems who have fought for decades to get people healthcare coverage--because they don't care. Obviously.
"AI" (remember, this is a BS marketing term, not actually AI) performs these types of searches worse than regular search.
It's incredible how often an over-hyped bad product comes along that represents a degradation of quality, and we still have willing cheerleaders falling all over themselves to heap praise and wonder... not on the prospect of a future, better product, but on the current steaming pile.
I was helping a student and wondered what answer I would get from the AI summary Google provides. The AI was wrong, but the first six results under the summary were right.
I also spend a lot of time trying to track down hallucinated citations that students get from AI. So frustrating, because in order to confirm they are not real you have to do title searches, author searches, and skim tables of contents.
Yesterday, I asked my search bar why paper towels were so expensive.
The AI answer was regulations on logging.
The LA Times' answer is that the slowdown in building created a slowdown in the amount of board cut, which created a slowdown in the amount of wood waste used to make paper towels. My answer is that, plus the fact that manufacturers who price-gouged during the pandemic have shrunk the size of rolls while maintaining pandemic-shortage price points.
Google AI seems to be especially bad when it comes to two people with the same name. Unlike Wikipedia, which has disambiguation pages, Google AI just crams all the facts about all the people with that name into one fact list for a single person. It appears, however, to have accepted the correction I made when I gave it a thumbs-down last time.
I wouldn't be surprised if AI is Time magazine's person/entity of the year.
Deservedly or not, AI is mentioned and praised everywhere nowadays.
Given the human choices available for person of the year (most of which are pretty disgusting), I'd be glad if they went for choosing AI.
I don't understand why intelligent people do that. We know that language models are not trained for accuracy; accuracy, if it occurs, is a byproduct. Is it really so much harder to google the traditional way, or read the entry in Wikipedia or some other encyclopedia (if you have access to one), where someone, usually an expert, has made an effort to be accurate and balanced (not omitting any important facts)?
Tyler Cowen, my dude. Techno-optimism fills the god-shaped hole in his heart. Loves crypto, loves AI. Can't wait for somebody to invent the Torment Nexus.
This
Not hard to tell when a Wikipedia article is written by a knowledgeable person or not.
https://en.wikipedia.org/wiki/Indian_Councils_Act_1909
John Morley was the most important early-twentieth-century British politician whom no one today has heard of. Respected by his peers (on both sides of the aisle) almost unto veneration, brilliant but self-effacing, conspicuously conscientious even outside the profession of politics, where conscience is a sometime thing, he deserves to be remembered much better than he is. ChatGPT knows none of this, and Tyler Cowen still doesn't either.
Also, they were known at the time as the Morley-Minto reforms: Morley was Secretary of State for India, Lord Minto the Viceroy. Did ChatGPT trouble to point that out?
ChatGPT should not be used as a data resource, full stop. The technology absolutely is not there yet and Tyler Cowen is an idiot for thinking that it is.
Tyler Cowen previously used ChatGPT to write something including a hallucinated quote attributed to Francis Bacon. Dude just can't help himself:
https://www.acsh.org/news/2023/03/28/i-have-spread-lie-16968
A little behind in my reading. I use Perplexity.ai quite a bit: I am a network troubleshooter, and a Google search returns a staggering number of hits from Cisco docs and forums and such, requiring some major google-fu or a lot of sifting through results. Perplexity gives me a good summary but, more importantly, links to its sources. I can do a loose query, pick the details I want from the summary, and go to the source for the definitive answer. And in truth, even the summaries are pretty good.
Haven't you heard? We haven't cared about truth for decades, only about truthiness.