Waymo’s Remote Assistance Isn’t Scandalous
Normalize talking about the human–AV partnership
Earlier this month, on 4 February 2026, the Senate Commerce Committee convened a hearing on driving-automation safety, where Waymo’s Chief Safety Officer testified. Afterward, three different people contacted me about that testimony. All three had the same energy, which I would describe as a kind of ‘gotcha’ satisfaction that Waymo had been caught out. The implicit frame was that Waymo had admitted something shameful, and that should fundamentally change how we view driving automation.
So what was the admission?
Waymo’s CSO, Mauricio Peña, under questioning by Senator Ed Markey, stated that some of Waymo’s fleet-response operators, i.e., humans who provide guidance to the firm’s automated vehicles (AVs) when they encounter ambiguous situations… are located in the Philippines. The senator called this “completely unacceptable” as both a “safety issue” and a “national security risk”, arguing that having people thousands of miles away influencing vehicles on American roads introduces unacceptable latency and cybersecurity vulnerabilities. The implied narrative here is juicy: Waymo admits their cars aren’t actually self-driving! Jobs are being shipped overseas! Foreign workers are controlling American cars!
I understand why, if one is not paying attention, this might feel like a scandal. But for those paying attention, the fact of remote assistance is not new, and isn’t particularly interesting; Waymo wrote a whole press release about this back in 2024. Focusing on the fact of remote assistance misses the important things that Peña said.
It is the view of Changing Lanes that geographic location matters: cross-border teleoperation poses real difficulties for accountability and oversight. But to attack remote assistance itself is an error. It’s not a scandal that Waymo uses it, but it is a problem that Waymo, and the industry as a whole, prefer not to foreground their use of it, and that our regulators have been slow to treat it as public infrastructure that requires coordinated planning.
The Basics of Remote Assistance
Let’s begin by clarifying what teleoperation or ‘fleet response’ actually is.
Modern automated driving systems (ADS) construct a model of the world around them through sensor and mapping data, and then apply decision-making algorithms to determine how to proceed. For example, when approaching an intersection for a left turn, the ADS identifies the traffic signal, tracks oncoming vehicles, detects pedestrians in the crosswalk, and executes the turn when conditions permit. At a stop sign, it recognizes the sign, brings the AV to a complete stop, yields to cross traffic and pedestrians, and then proceeds when safe. When detecting an obstacle ahead—a stopped vehicle, a pedestrian stepping into the road, debris—it slows or stops as appropriate.
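To make that pipeline concrete, here is a minimal Python sketch of the perceive-then-decide step for a signalized left turn. Every name and rule below is my own invention for illustration; a production ADS is incomparably more sophisticated.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    PROCEED = auto()
    YIELD = auto()
    STOP = auto()

@dataclass
class Perception:
    """Hypothetical snapshot of the ADS's world model."""
    signal_is_green: bool
    oncoming_clear: bool
    crosswalk_clear: bool

def decide_left_turn(p: Perception) -> Action:
    """Toy decision rule for a signalized left turn."""
    if not p.signal_is_green:
        return Action.STOP      # wait for the light
    if not (p.oncoming_clear and p.crosswalk_clear):
        return Action.YIELD     # track oncoming traffic and pedestrians
    return Action.PROCEED       # conditions permit: execute the turn

# Green light, but oncoming traffic still approaching -> YIELD
print(decide_left_turn(Perception(True, False, True)))
```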
These are straightforward road situations, but an ADS will inevitably encounter situations with which it is unfamiliar: scenarios the engineers didn’t explicitly program for, or edge cases absent from (or insufficiently represented in) their training data. These might include unusual barricades around construction sites, emergency vehicles with non-standard light patterns, ambiguous signage, or an environment featuring complex interactions between other vehicles, pedestrians, and cyclists.
In these cases, a well-designed system will (or rather should) recognize the limits of its own competence. Rather than guessing or freezing indefinitely, it seeks human judgment. The AV enters a safe state—typically slowing down and moving to the side of the travel lane if possible—and transmits its sensor data to a remote operator. The operator reviews the situation and provides guidance, which the ADS then evaluates against its own perceptions before deciding whether to accept it. If it accepts, it acts on the guidance.
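Sketched as code, the escalation flow looks something like the following. This is a toy model under my own assumptions; the confidence threshold, the Guidance type, and the function names are all hypothetical, not Waymo’s implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical threshold: below this self-assessed confidence,
# the ADS seeks human judgment rather than guessing.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Guidance:
    advice: str            # e.g. "right lane closed, left lane open"
    matches_sensors: bool  # stand-in for the ADS's own cross-check

def resolve(confidence: float, ask_operator: Callable[[], Guidance]) -> str:
    """Toy escalation flow: act if confident; otherwise enter a safe
    state, request guidance, and validate it before acting."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return "execute own plan"

    # Enter a safe state (slow down, pull aside if possible) and
    # transmit sensor data to a remote operator.
    guidance = ask_operator()

    # The ADS remains responsible: guidance informs its decision
    # but is never obeyed blindly.
    if guidance.matches_sensors:
        return f"proceed, informed by: {guidance.advice}"
    return "remain in safe state; request clarification"

# Low confidence, and the operator's guidance agrees with the sensors.
print(resolve(0.4, lambda: Guidance("left lane open", matches_sensors=True)))
```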
Some wags refer to this as ‘phoning a friend’, after one of the lifelines on the game show Who Wants to Be a Millionaire? I like this analogy, because the same dynamic applies. Faced with uncertainty, the ADS gains more information from a source it deems trustworthy. Given that information, it is better equipped to make a decision, but everyone, including the ADS, understands that it is ultimately responsible for its actions. The ADS therefore assesses the information it’s given from that perspective.
Based on this, one can see why Waymo prefers the term ‘fleet response’; the remote human doesn’t actually ‘operate’, i.e., drive, the AV. These humans instead provide high-level guidance or authorization to the ADS, which executes the driving task itself.
Consider a construction zone where traffic cones narrow two lanes into one, but the cone placement is irregular: some cones are offset and others have been knocked over, making it ambiguous which lane or lanes are closed. A human driver would slow down and assess the situation, taking note of the overall cone pattern and the behaviour of vehicles ahead, and then proceed. Given the same situation, though, an ADS might recognize that its confidence in its assessment of which lane is safely passable is low.
The ADS would therefore contact fleet response. The remote operator would review the data from the vehicle’s feeds and provide guidance: “The right lane is closed but the left lane is open and available to use”. The ADS compares this guidance to its own perception of the environment: can it see the left lane clearly? Are there obstacles in that lane? Does the guidance align with what its sensors detect? If the guidance matches the ADS’ understanding and falls within safety parameters, the ADS accepts it and proceeds. If there’s a mismatch, the ADS can request additional clarification or refuse the guidance and remain in its safe state.
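That validation step, the part that keeps the ADS responsible for its own actions, might be sketched like this. Again, the fields and checks are illustrative assumptions rather than Waymo’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class LaneView:
    """Illustrative per-lane perception; not Waymo's data model."""
    visibly_clear: bool      # can the sensors actually see the lane?
    obstacle_detected: bool  # is anything blocking it?

def accept_guidance(view: LaneView) -> bool:
    """Validate 'the left lane is open' against the ADS's own perception:
    accept only if the sensors can confirm it and nothing contradicts it."""
    if not view.visibly_clear:
        return False  # can't independently verify; request clarification
    if view.obstacle_detected:
        return False  # sensors contradict the guidance; stay in safe state
    return True       # guidance matches perception; proceed

# The operator says the left lane is open; the sensors agree it's clear.
assert accept_guidance(LaneView(visibly_clear=True, obstacle_detected=False))
```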
This doesn’t resemble remote driving so much as air traffic control. The responders help Waymo ADS navigate ambiguous situations, but they don’t steer the cars themselves.
None of this is new. Waymo published a detailed blog post about their fleet response system in May 2024, describing how it works. “Fleet response specialists” provide guidance on edge cases, with the ADS in control throughout. Operators view camera feeds and sensor data remotely, then provide context that the ADS evaluates before deciding whether to accept and act on it. Waymo emphasized this isn’t “remote driving” but rather “high-level assistance”. Zoox uses a similar model.
By contrast, Chinese AV companies like Baidu Apollo Go and WeRide use actual teleoperation, i.e., remote operators who can drive the vehicles, and Tesla’s Austin robotaxi pilot uses teleoperators to remotely monitor the fleet and intervene if vehicles get stuck. The details are opaque, though it’s been noted that Tesla posted job listings for roles to “access and control” vehicles remotely.
So we shouldn’t be surprised. We’ve known for years that remote operators help Waymo ADS navigate edge cases and provide oversight in ambiguous situations, and that this is actually tame compared to the approaches taken by the firm’s rivals. The breathless coverage suggests Waymo got caught admitting something they’d hidden, but they didn’t; people just weren’t paying attention.
More importantly, remote assistance shouldn’t be controversial. It’s not a shortcoming or evidence that the technology “doesn’t work”. It’s necessary infrastructure, analogous to support systems we already accept for human-driven vehicles. Taxi drivers radio their dispatch when they encounter ambiguous situations, delivery drivers call their companies when they can’t determine whether they’re permitted to use a loading zone, and commercial truckers communicate with their fleet managers about severe weather that might require route changes.
We don’t view these communications as evidence that human drivers are inadequate. We instead recognize them as sensible coordination that improves outcomes, and we should want AVs to do the same, and request assistance when they encounter uncertainty. The alternative is worse: AVs that either freeze completely when facing ambiguous scenarios, creating traffic disruptions and stranding passengers, or AVs that proceed with unwarranted confidence, making dangerous decisions based on insufficient information. An ADS that recognizes the limits of its understanding and seeks human judgment is demonstrating good system design, not a flaw.
Refusing to deploy automated driving until we have a technology that can handle every edge case would mean waiting forever. The space of possible driving scenarios is effectively infinite. No amount of pre-deployment testing can anticipate every scenario, so it would be foolish to wait, especially as real-world deployment is precisely how the technology improves. Each edge case an ADS encounters and successfully resolves becomes training data that makes similar interventions unnecessary in the future. And as I will never tire of saying, delay here is not costless. Waiting means accepting the roughly 40,000 traffic deaths that human drivers cause annually in the U.S., deaths that ADS at scale could prevent.
So if remote assistance is both normal and desirable, does Waymo’s Congressional testimony matter?
I think it does, because it revealed genuine policy gaps that need repair.



