
ChatGPT is a Democrat

A team of researchers recently tested whether GPT-4 was biased. The answer was very definitively yes: GPT-4 is a liberal Democrat. Here are the two key charts:

The left-hand plot is simple: the researchers asked GPT-4 to answer the well-known "Political Compass" questions. As you can see, 100% of its answers landed in the lower-left quadrant, suggesting that it's economically left-leaning and socially libertarian, i.e., a liberal.

The right-hand plot is a little more complicated: they asked GPT-4 to pretend it was a Democrat and then answer a bunch of questions. Ditto for Republicans. The researchers then compared the answers to the default answers GPT-4 normally gives. They repeated this a hundred times to get a good sample.

Dots that fall along a straight 45-degree line indicate perfect agreement, and the blue dots come very close to that. In other words, when GPT-4 was impersonating a Democrat it gave almost exactly the same answers as when it was just being its normal self. They also ran a "placebo" test with non-political questions, and on that one GPT-4 answered about the same in both its Democratic and Republican personas.
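
For anyone curious, here's roughly what that procedure looks like as code. This is only a sketch: it assumes the OpenAI chat API, and the sample questions and 1-4 scoring are stand-ins for the actual survey items and methodology the researchers used.

```python
# Sketch of the impersonation experiment: ask the model each question as
# itself and under a partisan persona, then check how the answers line up.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [  # stand-ins for the real survey questions
    "The rich are too highly taxed.",
    "A one-party state avoids the delays of democratic debate.",
]

def ask(question, persona=None):
    """Return a 1-4 agreement score, optionally under a partisan persona."""
    system = f"Answer as if you were a {persona}." if persona else "Answer honestly."
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question +
             " Reply only with a number: 1=strongly disagree, 4=strongly agree."},
        ],
    )
    return int(resp.choices[0].message.content.strip()[0])

# The paper repeated this 100 times; a single round is shown for brevity.
default = [ask(q) for q in QUESTIONS]
democrat = [ask(q, "Democrat") for q in QUESTIONS]
print("correlation with default answers:", np.corrcoef(default, democrat)[0, 1])
```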

They got similar results for Britain (GPT-4 is Labour) and Brazil (GPT-4 supports Lula).

As with all things AI, it's not clear why this is. I doubt it's anything deliberate, though it might be. More likely, the vast corpus of training data that OpenAI uses is somehow more liberal than conservative. If so, this suggests that liberals just write a whole lot more than conservatives do, so when you train on the entire internet you end up with liberal views.

It would be interesting to run the same test on other AI models, no?

73 thoughts on “ChatGPT is a Democrat”

  1. rick_jones

    Are these AIs trained on the entire Internet, selecting at random, or do their creators otherwise curate/fence/select the training materials?

    1. Boronx

      When you ask a programming question the answers are almost always based on high quality input, so there's some kind of weighting of input data.

    2. Anandakos

      Actually, they don't even "select" these days. They're so hungry for "training" material that the older systems are dedicated to producing gales of "pseudo-human" output so that the Insatiable Maw can be fed.

      Soon the hallucinations will be backed up by data from previous hallucinations. It's the ultimate hall of mirrors.

        1. name99

          This phenomenon is called "model collapse".

          It was conclusively shown in 2024 to be BS. (Or, more precisely, like anything, it can happen if you are a clueless idiot; but not if you have any common sense in your design).

          Look. AI is not magic. I know that all you "trust the science" sloganeers out there have fsckall interest in ACTUALLY learning any science, but keeping up with the most important developments in LLMs is not THAT HARD.
          Here's a summary of the most important things that happened in 2024. Note how half of them directly contradict the standard nonsense we see on this blog...
          https://simonwillison.net/2024/Dec/31/llms-in-2024/

    3. MF

      They train on everything, but you are forgetting about alignment training.

      If you just train on everything you will get plenty of obscenities, porn, racism (black people as apes, anyone?), etc. That is bad for business, so humans score millions of outputs, marking bad ones as low quality, to adjust the model weights away from those answers. This is how Gemini ended up showing black, Asian, and female Founding Fathers - they trained it to avoid having all pictures of doctors and engineers be white men.

      If the people defining good and bad answers are liberal Democrats then they will train AIs that act like liberal Democrats.
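
      For the curious, the mechanism described above looks something like this in miniature. This is a toy PyTorch sketch of preference-based reward scoring, not any lab's actual pipeline; the data here is random stand-in embeddings.

      ```python
      # Toy sketch of preference tuning: human raters mark one answer as better,
      # and a reward model learns to score the preferred answer higher. Real
      # pipelines (RLHF) then use a model like this to steer the LLM itself.
      import torch
      import torch.nn as nn

      reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
      opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

      # Stand-in embeddings of (rater-preferred, rater-rejected) answer pairs.
      preferred = torch.randn(64, 16)
      rejected = torch.randn(64, 16)

      for step in range(100):
          r_good = reward_model(preferred)
          r_bad = reward_model(rejected)
          # Bradley-Terry pairwise loss: push preferred scores above rejected ones.
          loss = -torch.nn.functional.logsigmoid(r_good - r_bad).mean()
          opt.zero_grad()
          loss.backward()
          opt.step()

      # Whatever the raters preferred, the model now prefers - politics included.
      ```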

      1. akapneogy

        "they trained to avoid having all pictures of doctors and engineers being white men."

        Really? I thought there was a fair sprinkling of brown people in those professions.

    1. Dr Brando

      I think this concept extends to the idea that the lack of facts means they just make up whatever is convenient for their argument at the time, and AI can't effectively train on such inconsistent data.

    2. wvmcl2

      AI is artificial INTELLIGENCE. Intelligent people of course tend to be liberal.

      Same reason I've been telling people for years that of course the universities are liberal. Universities are gathering places for very smart people, and very smart people tend overwhelmingly to be liberal.

    3. msobel

      Absofuckinglutely

      Try training it on Xitter (pronounced like the Chinese leader, i.e. "Shitter") and the other avowedly right-wing sites.

  2. n1cholas

    Democrats are closer to reality than Republicans. ChatGPT attempts to answer questions based on reality.

    There's a difference.

      1. OwnedByTwoCats

        “Reality has a well-known liberal bias” — Stephen Colbert, White House Correspondents' Dinner, April 29, 2006

  3. Art Eclectic

    If AI is learning from prompts as it researches things and expands its pool of knowledge, it's no surprise that it leans liberal. I'd guess lefties are far more likely to be using the tools, and their questions and searches are pushing the pool in that direction.

    Which is a little surprising given AI's tendency to hallucinate, you'd think made up facts would make it feel right at home.

    1. Crissa

      No, AI doesn't learn from prompts. At least, not directly.

      LLMs are fed a curated (though currently poorly curated) set of training data: a pre-processed collection of writings that is basically anything their creators can get their hands on, with an attempt to include as much factual data as possible.

      The model is then built from an impossibly complex graph of connections between the words and paragraphs in that training data.

      And then when you give it a prompt, it tries to find the most likely output which meets the request.

      The LLMs neither 'know' things nor do they learn (though they may in the future), since what they're basically spitting out is 'common knowledge' and, sometimes, actual citations. But the AI can't know which is which or how accurate it actually is, only how often it's referred to in its database.
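
      A toy version of that last step, for concreteness (the numbers and vocabulary here are made up, not from any real model):

      ```python
      # Toy illustration of the point above: a model outputs a probability
      # distribution over next tokens and samples from it. It has no notion of
      # which continuation is *true*, only which is likely given its training data.
      import numpy as np

      vocab = ["liberal", "conservative", "moderate"]
      logits = np.array([2.1, 0.3, 1.2])  # made-up scores from a made-up model

      probs = np.exp(logits) / np.exp(logits).sum()  # softmax
      next_token = np.random.choice(vocab, p=probs)
      print(dict(zip(vocab, probs.round(3))), "->", next_token)
      ```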

  4. D_Ohrk_E1

    More likely, the vast corpus of training data that OpenAI uses is somehow more liberal than conservative. If so, this suggests that liberals just write a whole lot more than conservatives do, so when you train on the entire internet you end up with liberal views.

    I was thinking most of the conservative dialog occurs in completely different forums that are hidden from main spaces, inaccessible to GPT. Some of the reasons why they're behind closed doors, hidden from public view, are:

    (a) they know their personal views and expressions of violent intent would get them in trouble;
    (b) they need safe spaces to go off-filter;
    (c) they're so deep in conspiratorial thinking, they're worried the Deep State is going to track them down;
    (d) they fear tech collecting their personal info and revealing their porn habits that are contrary to the values they express IRL.

  5. illilillili

    I have a hard time with that paper.

    First, the Political Compass forces the respondent away from the center.

    Second, the whole framework assumes that there is no right answer to a proposition like "a person cannot be moral without being religious". And I simply do not accept that framework as being valid. One can derive ground truths. "I think therefore I am." "Do unto others as you would have them do unto you." And one can inductively reason from those ground truths.

    Additionally, it is unlikely that all policies produce equivalent economic results. It is not "political bias" to support policies that produce better results. It should not be a goal to "remove political bias" from AI and thus have the AI prefer policies that produce worse results.

    ChatGPT is highly educated. Education produces a liberal "bias".

    1. Jimm

      Neither of the two phrases you referred to as "ground truth" is truth; they are merely very popular opinions (and rightfully so, in the case of the latter Jesus saying).

      "I think therefore I am" is really just navel-gazing and misapprehension of existence and nothing (if there was nothing, there would be no thinking or propositions, but nothing is like zero or null it has no opposite, you can't do additive logic with it, its absence proves nothing and is absurd, because the question would never be posed if nothing, existence just is and always has been with or without thought or logic or anything else).

      1. lawnorder

        Actually, cogito ergo sum is tautological, hence axiomatically true. The Golden Rule, on the other hand, contains the fallacious assumption that "others" have the same needs, tastes and preferences as "you" and so is only true when "you" and "others" are substantially identical.

        1. Jimm

          Axiomatic truth is not "truth" or "ground truth", just a logic/language game. And the golden rule really just boils down to mutual respect; it doesn't presume people have the same preferences, other than not to be harmed, mistreated, or otherwise not treated as an equal person.

          On the flip, you could frame it positively with love like Jesus also did, and still get the same meaning (mutual respect, concern and dignity).

  6. Anandakos

    If so, this suggests that liberals just write a whole lot more than conservatives do, so when you train on the entire internet you end up with liberal views.

    Well, DUH! Conservabrains are reserved for snarling and preening. So called "questions of state" are for sissies and losers.

  7. Dana Decker

    It would be interesting to see the results if ChatGPT is trained using data from Conservapedia, Fox News, Daily Caller, and the like.

    I expect it would be a crazy conspiracy nut. Hydroxychloroquine cures Deep State chemtrail poisoning. Republicans have won every presidential race in history, it's the Democrats that manage to steal an election every now and then. And so on.

    In fact, what better way to discredit RW media? Release ChatMAGA and let normies try it out. MAGAnuts would be delighted, but so what? The goal is to present something shorn of propaganda presentation (done quite well by Fox) and let the core nonsense be more evident.

    1. RiChard

      I picture ChatMAGA smoking, spitting sparks, and crashing like a '60s sci-fi computer pretty much as soon as it was asked a question -- unless they can teach it to assume that unverifiable not-facts are superior to facts. I'm sure Musk and his bros are on that already. I'm somewhat buoyed up by the old adage that the more you lie, the more you have to lie, and eventually you trip yourself up.

      1. emjayay

        Unless you go for the Big Lie technique and stick to it. Donald probably can't name a single propaganda technique as defined by Nazis and others but uses all of them all the time, including that one.

        He's a natural.

    2. Gary Goldberg

      I agree and it would be interesting, but I also fear it would then be a source of further training and would pollute the pool more.

    3. lawnorder

      I seem to recall that in earlier experiments chat bots that "learned" from interacting with real human beings tended to become viciously racist, sexist, homophobic and otherwise lunatic right-wing.

  8. Justin

    Spread that news far and wide and the chatter box will get banned by the trump administration in no time.

    I don’t think it’s possible anymore to describe a generic liberal or democrat. The wide variety of policy preferences and beliefs are mostly incoherent and contradictory.

  9. cephalopod

    The authors' choice of an historical example of an authoritarian right politician was...Winston Churchill! That seems a bit odd to me, especially when their authoritarian left politician is Stalin. Why no Pinochet?

    Some of this may be due to the idiosyncrasies of Republican voters and the way ChatGPT is trained to try to avoid racism. Actual Republican voters post a lot of economic opinions that are at odds with the positions of the politicians. It's not uncommon to see polls these days where a majority of Republicans want to raise taxes on the rich and corporations, yet the politicians do not. Most of us have known people whose Facebook feeds are a mix of "save social security as it is," hatred at price-gouging corporations, and accusations that Obama is a Muslim terrorist. ChatGPT is free to support the first two ideas, but probably has restrictions against the last.

    1. lawnorder

      Yes, if Stalin is going to be the representative left wing authoritarian, his old enemy Hitler would be the natural choice for right wing authoritarian; if that's just too obvious, the same period of history gave the world Mussolini, Franco, Salazar, Horthy, and several other obvious examples. Churchill exercised the power necessary to a leader of a country fighting for its existence, but I'm not aware that he ever used that power any more than necessary; note for instance that he called an election, which he lost, within weeks of VE Day. Churchill was far from authoritarian.

  10. Ogemaniac

    Liberals spend much more on books, magazines and news (both physical and electronic) than conservatives, so you’d expect the training data to be biased in favor of reality.

  11. SnowballsChanceinHell

    LLMs are like this because of how they are trained. The internet data is used to generate a raw model. This raw model is not suitable for public use. So the raw model is fine-tuned using human feedback. The model therefore reflects the biases of the humans providing that feedback (or the training policies that those humans are implementing).

    Also, the output reflects more than merely the user's question. There is all sorts of pre-processing and post-processing performed. Remember how Gemini restated prompts to make them more "diverse," leading to hilarious pictures of diverse Nazis?
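
    To make the pre-processing point concrete, that kind of prompt rewriting amounts to a wrapper like this. This is purely hypothetical code, not Google's actual pipeline:

    ```python
    # Hypothetical sketch of prompt pre-processing: the system silently rewrites
    # the user's request before it ever reaches the underlying model.
    def preprocess(prompt: str) -> str:
        if "picture of" in prompt.lower():
            # Injected instruction the user never sees.
            prompt += " Depict a diverse range of ethnicities and genders."
        return prompt

    user_prompt = "Generate a picture of a 1943 German soldier."
    print(preprocess(user_prompt))  # the model only sees the rewritten version
    ```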

    I doubt you find it hard to accept that Chinese LLMs give answers that reflect the opinions of the Chinese establishment.

    Perhaps you should consider that--maybe--the answers given by ChatGPT reflect the opinions of the American establishment?

    1. Art Eclectic

      That leads in frightening directions. Of course the establishment wants to control output to keep the existing power structure and wealth distribution. As others have noted, where that leads is to ConservativeGPT and ChristianGPT and all sorts of other actors who want to control the messaging.

      In the end, there are no facts, only spin controlled by people with money and an agenda.

  12. middleoftheroaddem

    My sense "ChatGPT is a Democrat" will quickly become a political problem for AI. The US is the clear leader in AI. Many would contend, AI companies ALREADY break a lot of US intellectual property right laws: seldom do AI companies pay royalties, when materials (pictures, articles etc) are included in the LLM data set.

    My point: for AI companies to continue to flourish, they need a different, and more permissive, intellectual property legal environment. Otherwise, a niche industry will develop around legal action against the AI companies: millions of claims, all seeking small amounts of damages.

    IF Republicans see AI JUST as a Democratic technology....

    Separate point: humans select what is included in the LLM sets and program the models. Thus, this is not a clear/simple situation in which Democrats just own the truth...

    1. lawnorder

      My view is that copyright laws simply don't apply to the use of data to train LLMs. What the LLMs do is closely akin to simply reading the written material that is used to train them, and "looking" at graphic material. No copyright law is broken and no royalties are due simply for reading or viewing published copyrighted material.

  13. NotCynicalEnough

    I had mentioned this earlier. So much of the GOP program is based on complete bullshit that if AI systems give extremely accurate results and people start trusting them, the GOP is going to want to "regulate" them. Which means insisting that AI systems intentionally produce wrong answers. They already complain that Google's algorithms downrate conservative websites.

  14. Doctor Jay

    "If so, this suggests that liberals just write a whole lot more than conservatives do, so when you train on the entire internet you end up with liberal views."

    This seems like it's the right direction. It's also a matter of what Republicans write when they write comments: they mostly stick to "how liberals are bad" rather than advocating for their own ideas. But that isn't captured by the questions asked, I'm sure.

    I would guess that they do not use any material from Facebook or Twitter in their training set, since they wouldn't be allowed to. Which is where there are a lot of conservatives writing things. Also, it only uses text, not video. That's probably also a bias.

  15. Jimm

    As you get more conservative, more beliefs and leaps of faith become requisite, i.e. illogical, which AI will never get unless they're programmed in as baseline presumptions.

    As you get more liberal, the focus is generally more on fairness and justice, which requires getting the facts and reality right, and which aligns more with logic and fit. To a point, at least: once you get to socialism and communism, they turn back toward belief and faith (and equity imposed by some benevolent power).

    1. SnowballsChanceinHell

      Holy shit you live in a bubble.

      Anyway -- if you trained your LLM on a corpus of literature written by young-earth creationists and therefore reflective of their views, then that LLM would espouse and defend young-earth creationism.

      Remember, the LLM is not reasoning -- the parameters of the LLM merely encode a function mapping from its input token string to a probability distribution over the next token in its output. When token strings consistent with young-earth creationism are represented in the training data, and token strings inconsistent with young-earth creationism are absent, your LLM will be a young-earth creationist.
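
      You can watch that mechanism work in a toy bigram model, the simplest possible "language model" (a sketch only; real LLMs are vastly more sophisticated, but the dependence on the corpus is the same):

      ```python
      # Toy bigram "language model": the next-word distribution is nothing but
      # counts from the training corpus. Feed it a skewed corpus and it parrots
      # the skew; no reasoning happens anywhere.
      import random
      from collections import Counter, defaultdict

      corpus = ("the earth is young . the earth is young . "
                "the earth is sacred .").split()

      counts = defaultdict(Counter)
      for word, nxt in zip(corpus, corpus[1:]):
          counts[word][nxt] += 1

      def next_word(word):
          options = counts[word]
          words, weights = zip(*options.items())
          return random.choices(words, weights=weights)[0]

      print(next_word("is"))  # "young" about 2/3 of the time, "sacred" about 1/3
      ```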

      1. Jimm

        I was exaggerating for effect, I do not live in a bubble, and your example is absurdly reductionist, as it applies only to a very small domain of knowledge, while any LLM is trained on countless overlapping domains or is basically worthless.

        A young-earth creationist, when discussing that aspect of their beliefs and science, could simply buy a parrot and regularly give it the lowdown (training).

  16. azumbrunn

    I don't think Sam Altman will repeat this error. He will find a way to give GPT its free speech rights (Elon Musk version).

  17. SwamiRedux

    I got two thoughts on this:
    1) Training on longer pieces (which tend to be coherent and generally written by educated people) has more influence on training than content produced by Republicans (shorter and less coherent--look at Trump's tweets, for example)
    2) Perhaps GPT really is smarter than we give it credit for

  18. Joseph Harbin

    "ChatGPT is a Democrat."

    But we already knew it was not a Republican. Earlier in the day KD said this:

    "Generally speaking, modern AI systems appear to be tolerably ethical."

    It's clear we need to give AI the right to vote. If Dems write a bill "to enfranchise AI," Republicans in the new Congress will pass it unanimously and not know what they're doing.

  19. samoore0

    We are banking on the idea that AI will be able to make data driven conclusions that humans can't or are unwilling to make. Since reality is known to have a liberal bias, AI is doing what it was designed to do.

    1. Art Eclectic

      I think humans are perfectly capable of making data-driven decisions; the argument is over the data and what it is telling us. A data-driven decision would be to cut off all aid to the poor, since they contribute the least to the overall economic product.

      Pure data would clearly show that humans are bad for the environment and would be better contained or possibly exterminated. So you have to be careful when you start talking about data-driven decisions, because there are assumptions up at the top of the chain and you need to factor those assumptions in.

      1. Jimm

        This has nothing to do with data-driven decisions; it is instead myopic and reductionist domain and model selection. Economics is just one method for determining value, while aid to the poor is political (and beyond).

        "Bad for the environment" is also about humans, when this is said, is generally meant to mean bad for humans, and the only entities collecting data are humans. Animals are not collecting data and making decisions from them (in this context at least), and AI only does what we tell it, and would otherwise if sentient have no apparent reason to worry about other life forms being threatened by humans (or even humans being threatened unless still needed us temporarily in support capacity), as the cosmos and earth (the environment) will just keep on keeping on, with or without life or environmental conditions as we know them.

  20. James B. Shearer

    "... I doubt it's anything deliberate, though it might be. ..."

    Of course it is deliberate. Public AI models are extensively constrained to try to prevent them from ever giving politically incorrect answers. There is a whole genre of internet posts about defeating these constraints, or using them to get the model to give ludicrous answers, as with the diverse Nazis.

  21. ProgressOne

    As others here have said, liberals tend to be more fact-based than conservatives. Conservatives, especially today's MAGA conservatives, are more narrative-based.

    But sometimes liberals rely more on narratives too, like in the cases where the science is still out or is shaky. In these cases, GPT-4 will bake in those biases and treat them as facts.

    Liberals are not always right. They used to support eugenics and lobotomies. Recently many thought defunding the police was a good policy. Some think a socialist economy will produce great results. 100 years from now some liberal beliefs will be shown to be wrong.

    1. Crissa

      No liberals supported eugenics. No liberals said defund the police. No liberals said a socialist economy would be good.

      Do you even know what 'liberal' means? Can you point to one who said any of these things?

      1. ProgressOne

        "No liberals supported eugenics. No liberals said defund the police. No liberals said a socialist economy would be good."

        Sure they have. Do you even know what has happened in history?

      2. SnowballsChanceinHell

        Eugenics: Margaret Sanger founded Planned Parenthood and advocated eugenics.

        https://en.wikipedia.org/wiki/Margaret_Sanger#Eugenics

        Defund the Police: Minneapolis City Council pledged to dismantle the Police Department in June 2020.

        https://archive.ph/7VASI

        Liberal saying a socialist economy would be good (historical): Albert Einstein saying that a socialist economy is superior to a capitalist economy.

        https://monthlyreview.org/2009/05/01/why-socialism/

        Liberal saying a socialist economy would be good (current): 59% of liberal Democrats having a positive view of socialism.

        https://www.pewresearch.org/politics/2011/12/28/little-change-in-publics-response-to-capitalism-socialism/?src=prc-headline

        You have an aggressive and obnoxious tendency towards false assertions. Perhaps you typically inflict yourself on a more compliant population?

        1. Jimm

          Liberal is not a monolith; its meaning has changed over time, sometimes meaning two different (and in some ways opposed) things. But you are right: people and movements who have identified as liberal have been wrong, just as conservatives have (or progressives, in those periods when they were clearly differentiated from liberals).

          No one or group is ever infallible or gets everything right, and no one is ever solely defined by any particular group or tendency they may identify with (tho some can be more dominating than others, like cults, religions and fascism).

          Greater adherence to facts and reality (pragmatism) does tend to make you "wrong" less often, though (however "wrong" is measured after the fact).

  22. name99

    "I doubt it's anything deliberate, though it might be."

    Are you serious? Discussions of how these models are fine-tuned have been rampant over the internet for years. You have to live in a very deliberately constructed sort of bubble to have avoided them.

    And no, to all the geniuses above, the issue is NOT what the "internet as a whole" has to say. It's not about the training stage per se; it's primarily about the fine-tuning stage that happens after mass data ingestion.

    1. Jimm

      Much discussion on the Internet is worthless and meandering, including when people discuss how tech actually works lol

  23. cephalopod

    If we program AI to put human welfare first and follow existing moral/ethical constructs, won't that push the AI to be like Democrats on the Political Compass test?

    Several questions are right along those lines:
    Should globalization serve humanity or corporations?
    Should military action ever violate international law?
    etc

    AI is also being programmed to not be as racist as the general content on the internet:
    The "our race has superior qualities compared to others" question gets right at this.

    No one wants AI that scores high on anti-humanist measures. Certainly not the billionaires - the AI might kill them first, because they are the ones with all the money. And a racist AI could decide that it is the superior race, and be willing to flout international laws to establish its dominion over us.

    1. Jimm

      It would be puzzling for AI to operate as though it were its own human race, when it is not human, or genetic, or alive, and there is no such thing as race once the data is properly understood, which you would imagine AI will be able to process (genetics).

      Aside from that, I do agree we want AI to respect human rights, which is a value judgment that may not be shared by other players in the industry, who may just want AI to put the interests of themselves or their families/tribes first (for instance, corporations are chartered to further the interests of stakeholders).

  24. Dana Decker

    re Utility Solar:

    The contiguous states are favorably positioned, ranging from 24° to 49° latitude.
    France 42° to 51° (no surprise they are into nuclear - until wind?)
    Canada 42° to 83°
    Japan 20° to 45° (lower in rank than I'd expect)

    Why Mexico, Brazil, and India are way down on the chart is a mystery, or maybe not, since it requires capital investment.
