Robotaxi Companies Must Always Pay
Liability as the regulator’s sorting tool
Before we start, a brief welcome to new readers! If you found this newsletter through my appearance last week on Ross Douthat’s Interesting Times podcast, or through my piece in Reason on the two bills to regulate self-driving cars currently before Congress, thanks for following the trail here.
Changing Lanes covers all aspects of mobility.
Today’s issue is about driving automation, and specifically a question Ross and I explored in our podcast conversation: who should pay when a self-driving car causes harm?
On 1 August 2025, Tesla lost $243 million when a Florida jury found its Autopilot defective.
The case was Benavides v. Tesla. In 2019, in Key Largo, Florida, George McGee, running his Tesla Model S on Autopilot, drove through a flashing red light and a stop sign at roughly sixty miles an hour, killing Naibel Benavides and severely injuring Dillon Angulo. Under the Florida regime that applied at the time of the incident, McGee was found 67% liable and Tesla 33%, which means that of the $129M in compensatory damages the jury awarded, Tesla’s share came to roughly $42.5 million. Bad enough; but the jury also awarded the plaintiffs $200M in punitive damages, which at the time were not apportioned in Florida but fell only on the party found to have engaged in the conduct warranting them, meaning Tesla bears them alone.1 Judge Beth Bloom denied Tesla’s post-trial motions on 19 February 2026. At the time of writing, Tesla has filed no notice of appeal.
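For readers keeping score, the headline figure can be reconstructed from the numbers above. This is a back-of-the-envelope sketch, not a legal accounting; the jury’s actual allocation may round slightly differently than a flat 33%:

```python
# Back-of-the-envelope reconstruction of Tesla's exposure in Benavides,
# using the figures reported above.
compensatory = 129_000_000    # compensatory damages awarded by the jury
tesla_fault_share = 0.33      # Tesla found 33% liable under apportionment
punitive = 200_000_000        # punitive damages, borne by Tesla alone

tesla_compensatory = compensatory * tesla_fault_share
tesla_total = tesla_compensatory + punitive

print(f"Tesla's compensatory share: ${tesla_compensatory / 1e6:.1f}M")
print(f"Tesla's total exposure:     ${tesla_total / 1e6:.1f}M")
```

The small gap between the ~$242.6M this produces and the $243M headline figure is rounding in the reported inputs.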
Tesla didn’t have to lose this much. Waymo, which has run more automated miles than any other operator, has never lost a verdict like this: not because it has avoided crashes, but because it has accepted in advance that operational liability for its driving system is its own.
How an automated-vehicle (AV) firm treats liability is telling, because liability is a hard, fast, and cheap sorting mechanism for distinguishing firms that are truly ready for public deployment from those that are bluffing. A manufacturer confident that its automated driving system (ADS) is safe will be willing to accept liability for incidents under that system’s control. A manufacturer that isn’t has reasons not to. Outsiders may lack the engineering sophistication to measure these vehicles directly. But anyone, including a regulator, can notice when a manufacturer won’t accept responsibility for its product.
Waymo Accepts, Tesla Hides, Mercedes Pretends
For an example of what I mean, let’s look at three firms that make advanced driver-assist and automated-driving systems (ADAS and ADS, respectively) available to consumers.
Waymo offers customers rides in its own fleet of ADS-equipped vehicles. Its approach to liability is clear: it accepts operational liability, and acts as the legal operator in every market where it deploys. Even in Austin and Atlanta, where it operates in partnership with Uber and the latter firm handles all fleet logistics, Waymo retains responsibility for the “Waymo Driver”, i.e., the firm’s ADS. In all cases, riders deal with one defendant if something goes wrong: the one that built the autonomy stack.
The track record matches this posture. Waymo’s most recent safety update, on 19 March 2026, reports a 92 percent reduction in serious-injury-or-worse crashes, an 83 percent reduction in airbag-deployment crashes, and an 82 percent reduction in any-injury crashes. That’s across 170.7 million rider-only miles through December 2025, against same-city human benchmarks. This is broadly consistent with earlier, peer-reviewed work (like Kusano and Schorr, Traffic Injury Prevention 2024 or Swiss Re/Waymo, Heliyon 2024), and the magnitudes are robust. It is the strongest safety evidence currently in hand for any commercially-operating ADS.
Waymo’s safety record isn’t spotless. I wrote earlier this year about the concerning incident in Santa Monica where a Waymo struck a child near a school. Separately, in Austin, the school district has documented at least twenty-five violations during the 2025–26 school year in which Waymo vehicles passed stopped school buses with red lights flashing and stop arms extended. Waymo patched the software (in the language of automobile product safety, a ‘voluntary recall’) in December 2025, but violations continued through January 2026, with another in March.
The fact that we know as much as we do about these incidents shows that the system is working: Waymo reports promptly, the incident reports aren’t redacted, and bad code gets patched through the recall mechanism. None of it is hidden.
“Waymo makes its way to D.C.” by Rob Pegoraro is licensed under CC BY-NC-SA 2.0
Meanwhile, Tesla pushes liability outward at every layer it can. The firm’s FSD (Supervised) owner’s manual reads, in operative part: “You are responsible for the speed and control of your vehicle at all times, whether FSD (Supervised) is enabled or not.” Under other circumstances, I wouldn’t object: an ADAS is supposed to be a driver-assist feature, it’s right there in the name. Unfortunately, FSD stands for Full Self-Driving, and that name implies the vehicle doesn’t require a human’s input; the Benavides verdict depended in part on this confusion.
It gets worse. The Tesla vehicle purchase agreement imposes binding arbitration and a class-action waiver that seems designed to keep any customer claim away from a jury. For Tesla’s Austin robotaxi service, which uses the firm’s ADS, Full Self-Driving (Unsupervised), Tesla has not publicly disclosed the structure of its commercial coverage.
The results are what one might expect from that posture: the firm is under pressure from regulators, and it seems less than cooperative with them. Leaving aside the Benavides verdict, NHTSA (the federal regulator responsible for safety on American roads) upgraded its investigation into FSD behaviour in low-visibility conditions to a full Engineering Analysis on 18 March 2026, one step short of a recall demand. (I have written about this set of incidents for Asterisk.) A separate Preliminary Evaluation (PE25012) into FSD traffic-safety violations covers another 2.9 million vehicles. NHTSA has issued an information request to Tesla to support that evaluation; Tesla has secured at least two deadline extensions on providing answers.
Meanwhile, in Austin, the firm’s robotaxi rollout is marred by incident after incident. AV and ADAS operators in the USA must follow NHTSA’s Standing General Order 2021-01 (hereafter SGO), which requires them to disclose crashes within set reporting windows. Through mid-March 2026, the SGO records 15 reported crashes against approximately 800,000 cumulative paid-Robotaxi miles: about one crash per 53,000 miles, against Tesla’s own claimed human benchmark of one minor collision per 229,000 miles. That puts the Tesla service’s crash rate at roughly four times the human-driver benchmark Tesla itself sets.
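As a sanity check on that comparison, the ratio can be computed directly from the SGO figures above. The mileage denominator is approximate, so the ratio is too:

```python
# Sanity check on the Robotaxi crash-rate comparison, using the
# approximate figures reported above.
sgo_crashes = 15                     # SGO-reported crashes through mid-March 2026
robotaxi_miles = 800_000             # approximate cumulative paid-Robotaxi miles
benchmark_miles_per_crash = 229_000  # Tesla's claimed human benchmark

miles_per_crash = robotaxi_miles / sgo_crashes
ratio = benchmark_miles_per_crash / miles_per_crash

print(f"Miles per reported crash: {miles_per_crash:,.0f}")
print(f"Crash rate vs. Tesla's benchmark: {ratio:.1f}x")
```

Even generous rounding of the mileage figure leaves the service several multiples above the benchmark Tesla chose for itself.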
In response to questions about that four-times-worse-than-humans framing, Tesla’s CEO, Elon Musk, has said only that the unsupervised program has recorded “no accident or injury… to date”. I suppose we’ll have to take his word for it, since all fifteen crash narratives are redacted as Confidential Business Information; alone among major AV operators, Tesla fully redacts its SGO narratives.
Finally, I used to cite Mercedes-Benz as an exemplar in this area: a firm that had accepted liability for autonomous operation while its DRIVE PILOT system is engaged. The framing came from a wave of trade-press coverage in March 2022, after Mercedes executives told Road & Track the firm would accept legal responsibility for what the car did under DRIVE PILOT. Road & Track‘s own gloss, “Once you engage Drive Pilot, you are no longer legally liable for the car’s operation until it disengages,” was what we all took at face value, myself included. Respect to Phil Koopman for teaching us all otherwise, in his September 2023 piece “No, Mercedes-Benz will NOT take the blame for a Drive Pilot crash“. There, he noted that what Mercedes had actually committed to was product-defect liability (which statute imposes anyway) and not the tort liability that would matter in a wrongful-death suit.
What we all should have done was read the contract. The MBUSA DRIVE PILOT Subscription Terms (last updated 5 March 2024) leave the driver as the “fallback-ready user,” responsible at all times for their own actions, and limit MBUSA liability across “all claims, including, without limitation, claims in contract and tort (such as negligence, product liability and strict liability).” The customer’s insurance pays first; subrogation is waived; binding arbitration is required; participation in class-action suits is waived.
So Mercedes’s posture is what Koopman calls a “moral crumple zone strategy,” borrowing the framing from Madeleine Clare Elish’s 2016 paper of the same name: the manufacturer reserves the option to blame the human driver for operational failures while accepting only the product-defect liability that statute already imposes.
Three firms, three postures, three different answers to the same question: when something goes wrong, who is on the hook? Engineering confidence is the cheapest hypothesis a regulator can test. This suggests a way forward: make approval of a self-driving permit conditional on accepting full liability.
Don’t Liability Rules Favour Incumbents?
So far, so straightforward: an AV operator should accept liability for incidents under its system’s control, and a regulator should refuse to permit operators that won’t. Waymo demonstrates that the rule is workable at commercial scale; the rest of the industry demonstrates what happens when it isn’t applied. One might think that the case for an operators-pay rule writes itself.
Not everyone thinks so. The most academically serious counter-argument comes from Gary Marchant and Rachel Lindor’s 2012 article in the Santa Clara Law Review, “The Coming Collision Between Autonomous Vehicles and the Liability System”: strict manufacturer liability could make AV deployment uninsurable for smaller operators, entrenching incumbents and slowing beneficial deployment. The concern has the most force at the full-city ADS robotaxi tier, where AV insurance markets are thin, specialty underwriters dominate, and the underwriting structure of the largest fleets is not public.
There are two reasons to think that it does not. Firstly, history cuts against it. As one thinker we admire here at Changing Lanes, Prof. Bryant Walker Smith, has noted, in 1993 the received wisdom was that advanced vehicle control systems, like adaptive cruise control, would be uninsurable without statutory limits on tort liability; twenty-five years of subsequent rollouts of just such systems falsified the claim. Tort liability has proven flexible enough to permit innovation by manufacturers of all market capitalizations and levels of sophistication.
Secondly, while I am sympathetic to the idea that strict regulation might serve as a barrier to entry, I think that in this case the relevant comparison is not between operators-pay and no-rule-at-all, but between two kinds of liability. Predictable, ex ante, contractually-defined manufacturer liability—what Waymo accepts in contract, and what Geistfeld and Abraham/Rabin have argued for in the law reviews—channels exposure into commercial insurance markets and lets entrants underwrite their risk. Unpredictable, ex post, jury-determined liability—what we might call the Benavides mechanism—does the opposite. A litigation lottery in which any operator might face a $243 million verdict if a jury so chooses is worse for small operators, not better.
To be clear, there will always be a moat: it’s reasonable to think that a $200M punitive verdict would be fatal for a small firm, whether anticipated or not. But putting that liability upfront into insurance markets seems more likely to encourage competition than the alternative.
The States Must Lead
The rule itself is straightforward: if a manufacturer or operator claims its automated driving system is safe, it accepts 100 percent of liability for incidents occurring under that system’s control. The burden may later be allocated to suppliers, sub-system manufacturers, or others where appropriate; that is a question between the operator and its supply chain, not the injured party’s problem. The injured party deals with one defendant.
In my view, the operative mechanism should be a state DMV permitting condition: a manufacturer applying for a deployment permit must accept liability for ADS-controlled operation as a condition of the permit, or it will not receive that permit.2 Tying the rule to a permitting decision avoids the Mercedes problem: a manufacturer cannot evade the rule by drafting subscription terms that disclaim the liability.
This argument has academic backing: Geistfeld in the California Law Review (2017), Abraham and Rabin in the Virginia Law Review (2019), and the IATR’s 2023 model framework, which I have previously written about. But no jurisdiction has formally adopted it.
The near-term opportunity is in Texas. The state’s new Autonomous Vehicle Operation Permit regime under SB 2807 takes full effect on 28 May 2026; applications are open through the Motor Carrier Credentialing System. The certification requirements address operational compliance and minimal-risk capability—registration, insurance or self-insurance, federal compliance, a recording device, a first-responder interaction plan—but do not include an express liability-allocation condition. That is the gap an operators-pay rule would fill. The Texas DMV has authority under §545.453 to add such a condition by rule, without further legislation. Tesla holds a transitional Texas permit through August 2026; a DMV decision on Tesla’s renewal would be the most informative event of the year. It would show us a state regulator deciding, in writing, whether Tesla’s posture is acceptable.
In one sense, this may seem a great deal of fuss over nothing. Thanks to the Benavides decision, it’s clear that if an AV firm’s products are responsible for harm, the firm will be responsible for that liability; despite its attempts to evade accountability, Tesla will pay. The question is how AV firms will pay. Will it be predictably, contractually, up front, in exchange for procedural protections; or unpredictably, by jury verdict, in exchange for nothing?
As I wrote in the very first issue of Changing Lanes, the posture firms take matters. Every redaction of a safety report, every disclaimer that fails to bind a third-party plaintiff, every nine-figure verdict that reads as a manufacturer made to pay against its will: all of it is fuel for a backlash that will not stop at the bad operator. The technology has survived the Benavides verdict, and it has survived the death of a major firm (Cruise).3 But it is not clear it can survive a widespread conviction that the firms deploying it are taking sides against the public.
Confident manufacturers reveal themselves by accepting responsibility, as Waymo does. Unconfident manufacturers reveal themselves by refusing to accept it. They will discover, as Tesla did in Benavides, that their attempts to escape responsibility won’t save them. The best outcome is for AV firms to be smart and accept that liability up front.
If they won’t, our regulators should impose it.
Florida updated its fault-apportionment rules in March 2023, but the case was tried under the rule that applied at the time of the incident.
Readers may be surprised by this, given that I made the case in Reason as recently as last week for federal preemption of state regulations on AVs. The difference is that I was arguing there for federal supremacy over safety standards. Civil liability is structurally different. The United States has never had a federal product-liability statute; tort law has always been a state matter. There is no realistic path to a federal AV-liability statute, and getting one would take far too long, given the moral urgency to introduce driving automation soon. The pragmatic answer is to leave this with the states.
The case of Cruise might seem to break this argument, given that its acceptance of fault didn’t save the company. But that’s a misreading: Cruise’s reported $8M-to-$12M settlement with the pedestrian injured in the October 2023 incident, and the California Public Utilities Commission’s $112,500 fine from the regulatory inquiry, were trivial sums. What sank Cruise was its lack of candour with the California DMV after the incident, which tarnished its brand and slowed its development at a time when GM, Cruise’s parent company, had already accumulated more than $10 billion in operating losses across eight years of Cruise operations.
I made this argument at length in The Cruise Shutdown Is Bad News for Tesla.