
The New York Times fails a safety test

The New York Times has an intensely frustrating piece about driverless cars today. Most of the focus is on San Francisco, and the story includes three safety statistics:

In San Francisco, more than 600 self-driving vehicle incidents were documented from June 2022 to June 2023.... Last year, the number of 911 calls from San Francisco residents about robotaxis began rising, city officials said. In one three-month period, 28 incidents were reported.... The city’s Fire Department created a separate autonomous vehicle incident form, said Darius Luttropp, a deputy chief of the department. As of Oct. 15, 87 incidents had been recorded with the form.

All of these numbers are useless without some point of comparison. What's an "incident"? How many were documented for ordinary cars? Is 600 a lot or a little?

A trip to the San Francisco municipal website didn't help—although I learned that over an average year Muni buses are responsible for roughly 1,200 collisions and the city suffers about 25-30 traffic deaths. As far as I know, this compares to zero fatalities and about 150 collisions from driverless cars—all of them minor and nearly all of them the fault of another car.

On a per-mile basis, driverless cars in San Francisco are involved in 1-2 collisions per 100,000 miles, compared to Muni's average of 4-5. For scale, total 911 calls amount to about 6,000 every three months.
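
As a quick back-of-the-envelope check of that per-mile math (the mileage totals below are assumptions chosen for illustration, not reported figures):

```python
# Back-of-the-envelope collision rates per 100,000 vehicle-miles.
# The collision counts come from the figures above; the mileage
# totals are illustrative assumptions, not reported numbers.

def rate_per_100k(collisions, miles):
    """Collisions per 100,000 vehicle-miles."""
    return collisions / miles * 100_000

av_collisions, av_miles = 150, 10_000_000        # assumed ~10M driverless miles
muni_collisions, muni_miles = 1_200, 25_000_000  # assumed annual Muni miles

print(f"AV:   {rate_per_100k(av_collisions, av_miles):.1f} per 100k miles")      # 1.5
print(f"Muni: {rate_per_100k(muni_collisions, muni_miles):.1f} per 100k miles")  # 4.8
```

With those assumed mileages the rates come out to roughly 1.5 and 4.8, consistent with the 1-2 versus 4-5 figures above.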

But I'd really like comparisons to the statistics the Times uses. If San Francisco knows how many incidents were reported for driverless cars, after all, they must also know how many total incidents were reported. Ditto for auto-related 911 calls. That's what should be in the story if it's to make any sense.

28 thoughts on “The New York Times fails a safety test”

  1. iamr4man

    I don’t think buses and trolleys make a very good comparison for driverless cars. Comparing them to taxis or Ubers would be fair, though. I take it those stats weren’t available?

    By the way, my daughter took a Waymo ride recently. She liked the experience and said she’d do it again.

  2. bluegreysun

    I assume that the comparison stats for “cars with drivers” weren’t given because such comparisons didn’t support the angle the journalist was taking. If the stats did support their narrative, they’d have included them.

    1. jdubs

      Driverless AI supercars are the future and the NYTimes needs to get on board, because the press for the existing product hasn't been very positive lately.

      Apples-to-apples comparisons of safety stats are tough to come by at the moment, but that hasn't stopped the promoters from offering useless figures that attempt to paint the new taxis in a positive light.
      Uber promoters engaged in similar gaslighting just a few years ago. Faster, better, cheaper, safer! But only if you squint and look at the partial data sideways.

      Still early days for these AI SuperTaxis. Hasn't been a great few months, but we'll see....

      1. Doctor Jay

        I'm fairly positive on autonomous cars, but I don't think the Times has to "get on board". I'm fine with them remaining skeptical. It's part of a good process. What I'm not fine with is reporting data without context. Other data I've seen shows AVs having incidents, but at about 1/20th the rate of human drivers, I think.

        To many, any incidents at all are too many, and a reason to never adopt them. I think this is ridiculous, but it does demonstrate that people are going to need to experience them for themselves in order to feel safe in them.

  3. smallteams

    I have never seen a comparison of incidents PER MILE DRIVEN, which would really tell you whether the driverless cars are hideously dangerous or not.

    The way these stories are written reminds me of the fact that the first two cars in Kansas collided. Now there's a story. No one ever gets hurt riding a horse!

    1. dilbert dogbert

      In my circles in the horse, aircraft and car communities, the only serious injuries were from horses. I have fallen off every one of the horses I ever owned.

    2. saambarrager

      I think many thousands of lives were saved by adopting the horseless carriage.

      In SF, incidents per mile traveled was the metric used, and AVs were significantly safer. The city tried to restrict them anyway, for the following reasons: the public perceives them as dangerous (a lot of testimony claiming they go too fast and break laws); they’re too easy for vandals to disable; a single incident, even one caused by a human motorist, is taken to mean they should be banned; and they respond poorly to first responders.

      There were a handful of incidents where AVs may have delayed fire / police / ambulance. (Mostly overblown.)

      The best actual criticism I’ve read yet of AVs was comparing Waymo with Cruise — Waymo does better at intersections, particularly when responding to sirens.

  4. KenSchulz

    The standard measure for transportation incidents is the passenger-mile; the standard scale factor is 100,000. You might think a journalist with a major newspaper would bone up on the basics of the field about which they are writing.
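
    A minimal sketch of that normalization, with purely illustrative occupancy and mileage numbers (nothing below is reported data):

    ```python
    # Incidents per 100,000 passenger-miles, the standard normalization.
    # A vehicle-mile and a passenger-mile differ by average occupancy:
    # a bus carrying 40 riders logs 40 passenger-miles per mile driven,
    # so per-passenger-mile comparisons favor high-occupancy modes.

    def incidents_per_100k_passenger_miles(incidents, vehicle_miles, avg_occupancy):
        passenger_miles = vehicle_miles * avg_occupancy
        return incidents / passenger_miles * 100_000

    # Purely illustrative figures, not reported data:
    print(incidents_per_100k_passenger_miles(1_200, 25_000_000, 40))   # bus: 0.12
    print(incidents_per_100k_passenger_miles(150, 10_000_000, 1.5))    # robotaxi: 1.0
    ```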

    1. Srho

      You might also think that an editor, upon reading the draft, would note in the margin, "Is this a lot? Compared to what?"

      Kevin is doing NYT editors' jobs for them. (And the editors of any article where inflation adjustment is warranted.)

  5. skeptonomist

    Are there any "driverless" cars operating anywhere? If someone is supervising a car remotely, isn't that person a "driver"? Some more statistics are needed: how many people at Waymo, etc., are involved in running each car, and how often the remote drivers have to intervene.

    1. lawnorder

      I would say that if a person is remote-controlling a car, they're a driver. If they're just watching for problems, whether remotely or from the front seat, while the car operates without human input, they're supervisors but not drivers.

      The same applies if a human driver is under supervision. If the holder of a learner's licence is behind the wheel, they're the driver while the fully licenced person sitting beside them is a supervisor but not a driver.

  6. wvmcl2

    From the beginning I thought that the main barriers to driverless would be legal/psychological rather than technological, and this is certainly proving to be the case. There will not be fair comparisons because driverless is a new way of doing things and it is easy to blame the new way when something, anything, goes wrong.

    Compare, for example, to the introduction of the ACA (Obamacare). Remember all the hullabaloo and hand-wringing because the website did not work perfectly from the first day? (What website ever did?) Same phenomenon: the new kid on the block gets all the blame.

    1. Mitch Guthman

      I don’t think this is a fair analogy. Obamacare is something that people can voluntarily choose to buy or not. By contrast, no one can “opt out” of participating in our national experiment with self driving vehicles. Neither I nor anyone else has a choice about whether we get run over by a driverless vehicle. I find that difficult to accept.

      1. Yikes

        Mitch is a great example of a reasoned opinion on this.

        Full disclosure 1: I am a small but long-term Tesla shareholder.
        Full disclosure 2: I disagree with almost everything Musk shitposts on Twitter, and further, regardless of whether I agree or not, I would argue he has a duty to keep his ridiculous musings to himself like any other sane CEO.
        Full disclosure 3: I have been in the beta program for Tesla's FSD since 2019, in other words, since it rolled out to the public.

        Driverless cars are coming, but it is not clear to me that this becomes the ridiculously profitable game changer that proponents claim. I am not sure how profitable taxi companies are, for example, but given the price of the car and the cost of running it, it's not like you get down to free travel even if the car drives itself.

        What is coming, and I predict Mitch will be convinced of this before cars are fully driverless, is that the various self-driving systems, certainly Tesla's, will vastly reduce accidents compared to human drivers.

        However, Mitch's post is an example of what self driving is up against. All people have a control bias, where we think we have more control over things than perhaps we actually have. That is going to be harder to overcome than the technology.

        1. Mitch Guthman

          My apologies for holding up progress. As cld correctly notes, anything can happen when you leave the house. But the additional risk presented by autonomous vehicles is not nothing and I think that I should have a choice about whether I want to assume it.

          It’s clear that the social and legal framework of society will need to be changed to accommodate the expansion of autonomous vehicles. It seems clear that it’s not practical to have these vehicles operate under constant human supervision and that the danger to pedestrians and other drivers will very likely increase significantly over time. Equally, it’s clear that the risks and economic consequences will be borne by the middle class and the poor rather than the CEOs of the autonomous vehicle companies. It seems to me that ordinary people ought to have a say in this instead of just getting stuck with the bill.

      2. Solarpup

        My wife's car was just totaled (probably; insurance is still deciding whether to fix it or scrap it) a few days ago. She saw the driver to her left, coming out of a parking lot, come to the stop sign, look both ways, and then just gun it into her regardless. There's a sign in the parking lot, right next to the stop sign, that says "Traffic in Intersection Does not Stop", so we're assuming this is a common occurrence there.

        Since we moved from the Northeast, our driving is down, literally, by a factor of 15 in miles driven. But our accidents are up by a factor of 6 per unit time, so accidents per mile are up by a factor of about 90.

        It's become a jungle out there. I drive way more defensively than I used to. My eyes are constantly darting about for someone who's going to run a light, or do a turn from the wrong lane. I have narrowly avoided more than one accident from some jackass completely ignoring traffic regulations.

        I somehow doubt that driverless cars are going to be that much worse.

  7. D_Ohrk_E1

    OT: Who thinks we can stick to 1.5°C? You're wrong.

    We're already on the precipice of hitting that, by 2030 at the latest, and the world isn't targeting net zero until 2050. China isn't planning to start reducing its expansion of coal power until after 2030. We're nowhere close to hitting the 2050 target, either. It'll be somewhere past 2070, and we're going to hit somewhere in the ballpark of +3.0°C. These effects will last for decades to a century or more, depending on how fast we can pull carbon out of the atmosphere. IOW, nearly every single one of us will be dead before we hit net zero.

    Sorry to disappoint you, but you'll spend the rest of your life surrounded by petroleum and ever-increasing temperatures.

  8. jakewidman

    I lived in San Francisco until last year, when I moved just across the Bay, and I'm still in the city a lot. My observation from sharing the road with the robotaxis is that they're more cautious than most human drivers. I would get impatient behind one because it would come to a complete stop at every stop sign and sit there for a couple of seconds before proceeding. Who does that?

    As I told a friend of mine who likes to email around news stories about traffic incidents involving self-driving cars, I have to wonder what it would be like if we documented, and people posted videos of, every dumbass move a human driver made.

    1. Solarpup

      At my previous job, on a fair fraction of nights as I left the office to go to the car garage, I used to see this one very aggressive bike commuter who would routinely be yelling at pedestrians and cars as he zipped past. He always wore a GoPro attached to his helmet, which I assumed he was doing to document the inevitable accident he was expecting, so he could turn around and sue them. (He just kind of gave off that vibe.)

      What it was really going to show was him breaking just about every traffic law a bike could break. Running red lights. Cutting off pedestrians in crosswalks. Turning from the improper lane and/or on the improper side of a car (e.g., making a left turn from the right side of a car, not its left). Etc., etc.

      I'm imagining that if everybody had cameras, it would be like the videos from Russia you can find online. Insurance fraud is so rampant there that a huge swath of the public mounts cameras in their cars to show what really happened, and it's kind of wild.

  9. bharshaw

    The Venn diagram showing the overlap between AV incidents/accidents and human ones will be interesting. For AVs, the default seems to be: when in doubt, stop. For humans, the default seems to be: when in trouble or in doubt, keep doing what you were doing, even if that means keeping your foot on the pedal after mistaking the gas for the brake.

  10. Narsham

    Agreed that the article is lazy writing. I do think that even comparing by miles traveled isn't useful if self-driving cars are mainly restricted to cities; a bus going 65 mph is far more likely to be involved in a fatal accident than a small car that never gets above 30 mph, regardless of miles traveled, because of physics.

    The biggest problem with self-driving vehicles, IMO, has to do with our theory of mind. Most people develop a sense of how other people's minds work and we apply that sense all of the time. On the road, trying to predict the behavior of other drivers matters, and we make spot judgments all the time (car X keeps changing lanes without signaling, car Y's driver is wavering across lanes and might be driving drunk) and use those judgments as a predictor for future behavior (like car X probably won't signal a lane change this time, either).

    Self-driving vehicles do not have minds in the sense that we as human beings understand them. Almost nobody on the road with one will understand how they work. What's worse, the algorithms that determine what they do aren't publicly available, they keep getting updated, and they do not need to conform to what we understand as logical or expected human behavior. (Get the wrong value in the right variable and that software will say "stop the car!")

    Maybe, if there were a common understanding of, or experience with, these cars, people would become comfortable with them on the road. But even there, different companies may have different software that behaves differently. That in itself isn't a problem: every human behind the wheel has "different software." The problem is that our ability to generalize from a specific case is so much greater with people than with software. Until that changes, or until AI drivers are the only drivers on the road, people aren't going to be comfortable sharing the road with devices which do not fall under the "theory of mind," regardless of their actual safety record, because they cannot be predicted.

  11. NotCynicalEnough

    FWIW, having just returned from Morocco and Portugal, I'd say driverless cars aren't anywhere close to handling what human drivers there handle. They would be completely safe, and they would stop traffic dead.

  12. pjcamp1905

    "all of them minor and nearly all of them the fault of another car."

    Since the sourcing for this data is not part of your post, doesn't it suffer from the same problem?
