Remote drivers intervene in unusual situations
**The takeaway: As robotaxis and other AI-based technologies proliferate, so does the myth that these systems are fully autonomous. During a recent Senate hearing, industry leader Waymo provided the latest reminder that AI relies on human labor – often low-paid – more than people realize.**
Waymo's chief safety officer, Mauricio Peña, recently noted that when the company's robotaxis encounter unusual situations, they may request real-time input from a remote response agent, receiving human guidance when needed. While some of the contractors work in the US, many operate from other countries, such as the Philippines.
The admission is another example of human workers, often contractors, supporting supposedly autonomous AI systems from behind the curtain. Tesla's robotaxis still rely on human monitors sitting inside each vehicle.
Contract labor has been at the heart of AI since OpenAI sparked the latest wave of investment in the technology several years ago. ChatGPT relied heavily on workers from across the world to train its underlying large language model, often for as little as $15 an hour with no benefits.
Filipino remote workers also oversaw most of the orders taken through Presto Automation's supposedly autonomous fast-food drive-thru system. Meanwhile, Amazon's ill-fated Just Walk Out technology, which claimed to handle physical purchases automatically without involving cash registers, actually relied upon workers in India to monitor customers.
Tesla's robots, the primary reason why the company is discontinuing its most popular vehicles, became arguably the most notorious example of this phenomenon in late 2024. At the company's "We, Robot" event, the robots admitted that they still relied upon human intervention, and a video of a unit falling over after mimicking the motion of its remote operator removing their headset went viral.
However, the senators grilling Peña at the hearing were less concerned about the use of remote workers than the fact that many were not American.
Massachusetts Senator Ed Markey called the employment of foreign remote workers "completely unacceptable." While input lag from workers operating halfway across the world presents a safety issue, lawmakers were also concerned about Waymo's connections to China and other foreign countries.
Although Tesla uses its own cars, Waymo employs vehicles from various countries, including China. The decision drew suspicions that the Alphabet-owned company is attempting to circumvent import restrictions on Chinese vehicles. When asked about the use of internet-connected Chinese cars on American roads, Peña emphasized that the autonomous driving systems are installed in the US.
Correction (Feb 10, 2026): The original version of this article described Waymo vehicles as "switching control" to remote drivers in unusual situations. Waymo says its remote fleet response agents do not directly operate vehicle controls, but instead provide real-time contextual information that the autonomous system uses while remaining in control of the vehicle. The article has been updated to clarify this distinction.
I wish I could just lie to people and take their money, and yet have them be so eager to forgive me. I want to create the church of the techno techbro optimist. Please dig in your heels and give me money. They know that the problems are exponential, that's why we see no real progress... they're farming, squeezing those teats. You've got the techno techbro optimist, the sucker, and the venture capitalist. We live in a world where stupid ideas can happen.
Most of the comments are under the impression that the workers are driving the cars... a thing that's specifically addressed at the end of the article:
> The original version of this article described Waymo vehicles as “switching control” to remote drivers in unusual situations. Waymo says its remote fleet response agents do not directly operate vehicle controls, but instead provide real-time contextual information that the autonomous system uses while remaining in control of the vehicle. The article has been updated to clarify this distinction.
The thing that the workers actually do is look at the car's sensors and highlight areas that are safe/not safe or give them new waypoints when the cars run into issues. They're not sitting behind a steering wheel and driving every car... they're call center support workers and their customers are cars that are confused about construction.
But that's what the AI is supposed to be doing. We've had technology to control a vehicle (throttle, brakes, steering) and stay in a clearly marked lane for a long time. The part that AI is supposed to be responsible for is making executive decisions autonomously, which is the part they're employing off-shore workers to do.
It is able, the vast majority of the time, to make those decisions autonomously. However, when it can't, there has to be a contingency where a human becomes responsible for making the right decision. The passenger cannot be responsible, as they are not trained on the autonomous system... so there has to be another person, in the employ of the company, who can step in when the autonomous systems fail.
That's what these people are doing. The support staff exists to fix emergent issues with rides, they're not teleoperating the cars.
The headline of the article is misleading in that it implies that this is some secret that they're breathlessly uncovering instead of the standard practice when operating autonomous systems commercially.
For example, Amazon's drone delivery operation has staff who monitor the drone fleet so they can step in and provide instructions to the autonomous systems. That doesn't mean they're sitting in an office operating flight controls; it means that when the autonomous system's confidence falls below a certain level, rather than have the drone do something dangerous... it simply waits for a human to get it unstuck.
You don't want a drone to guess that your head is a landing pad and you don't want a car to determine that a construction worker is simply a shadow... that's why a human is always in the loop.
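The pattern described above – act autonomously while confidence is high, hold and ask a human for context when it drops – can be sketched in a few lines. Everything here is invented for illustration (the function names, the toy planner, the 0.85 threshold); it is not any vendor's real API:

```python
# Hypothetical confidence-gated autonomy loop. All names and numbers are
# made up for illustration.

CONFIDENCE_THRESHOLD = 0.85  # below this, hold position and ask for help

def plan(world_model):
    """Toy planner: returns (action, confidence) from a dict world model."""
    if world_model.get("ambiguous"):
        return ("hold", 0.30)   # planner is unsure what it's looking at
    return ("proceed", 0.97)

def request_human_guidance(world_model):
    """Stand-in for the remote agent: they add context, not control inputs."""
    return {"ambiguous": False}  # e.g. "that obstacle is just a shadow"

def step(world_model):
    action, confidence = plan(world_model)
    if confidence < CONFIDENCE_THRESHOLD:
        # Don't guess: hold, ask a human for context, then re-plan with it.
        world_model.update(request_human_guidance(world_model))
        action, confidence = plan(world_model)
    return action

print(step({"ambiguous": True}))  # the human's context unblocks the planner
```

The key property is that the human never returns an action, only information; the planner re-runs and stays in control either way.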
Your drone analogy is perfect. We've had AI-free, open-source software for a while now that can autonomously navigate a drone (quadcopter, RC car, etc.) from one waypoint to another, as long as nothing unexpected happens.
Waymo is selling themselves as an autonomous taxi company. That means no humans are supposed to be in the loop, and AI is making all of the executive decisions when unexpected conditions are encountered. If humans are in the loop, it's not fully autonomous.
> as long as nothing unexpected happens.
Yes, exactly.
That's what the humans here are for in this case as well.
The systems can autonomously navigate a car from one waypoint to another as long as nothing unexpected happens. When something unexpected happens then some guy in the Philippines is the one that fixes it.
> Waymo is selling themselves as an autonomous taxi company. That means no humans are supposed to be in the loop, and AI is making all of the executive decisions when unexpected conditions are encountered. If humans are in the loop, it’s not fully autonomous.
There is no company operating autonomous vehicles that does not have humans on hand to handle unexpected conditions. By your definition there are no fully autonomous vehicles in commercial use anywhere in the world.
That's not new. We don't need AI to navigate a vehicle in ideal conditions; we've had the tech to do that for a long time. Using AI when simpler, more efficient algorithms can do a better job is an irresponsible waste of resources.
> We don’t need AI to navigate a vehicle in ideal conditions, we’ve had the tech to do that for a long time.
Yeah, that's what these autonomous cars are built on. Waymo's self-driving program began back in 2009; these systems are not built on AI.
They use sensors, and sensors are noisy – they can be covered in dirt or debris. All of the higher-level decisions are built on sensor data that may not be reliable, so the system combines the sensor data with the logic behind each decision to calculate a confidence value (i.e., how likely it is that the model it has built from the sensor data matches the real world).
In situations where there are a lot of unknowns, like in unfamiliar traffic situations, the confidence score drops.
These tech workers exist so that when the confidence level of the autonomous systems (which, again, are pre-AI) gets too low, a person – who is more intelligent than a bunch of sensors and SAT solvers – looks at the sensor data and either fixes the waypoints generated by the autonomous systems or marks objects and hazards. The autonomous systems then calculate a higher confidence and resume operation.
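A toy illustration of how such a confidence value might fall out of noisy, possibly-missing sensor readings. The formula and every number here are made up for this sketch (real systems use far more sophisticated state estimation); the idea is only that confidence is high when independent sensors agree and drops when they disagree or go dark:

```python
# Invented confidence heuristic: cross-check independent distance estimates
# for the same object. None means a sensor saw nothing (dirt, glare, ...).

def confidence(readings):
    valid = [r for r in readings if r is not None]
    if len(valid) < 2:
        return 0.0  # a single sensor can't be cross-checked
    spread = max(valid) - min(valid)
    # Agreement within 0.5 m scores highest, degrading linearly with spread;
    # dropped-out sensors scale the score down further.
    return max(0.0, 1.0 - spread / 0.5) * (len(valid) / len(readings))

print(confidence([10.1, 10.2, 10.15]))  # sensors agree -> high confidence
print(confidence([10.1, 14.0, None]))   # disagreement + dropout -> low
```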
The only area of autonomous cars where neural networks could be better than human-programmed systems is object detection, but those NN systems are typically run in parallel with classic computer-vision algorithms in order to achieve higher confidence. Routing is done with textbook pathfinding algorithms, every car manufacturer programs its own ECUs, the sensors are largely lidar – which uses time of flight rather than machine-interpreted video (like Tesla's) – and acceleration and braking response is human-programmed.
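The "run a neural detector in parallel with classic CV and only trust what they agree on" idea can be sketched as a simple intersection-over-union cross-check between the two detectors' bounding boxes. This is a hypothetical simplification – real fusion pipelines are far more involved – and both detector outputs here are just lists of invented `(x1, y1, x2, y2)` boxes:

```python
# Keep only the neural-net detections that a classical CV detector confirms
# (boxes overlap by at least `iou_threshold`). Purely illustrative.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def fuse(nn_boxes, cv_boxes, iou_threshold=0.5):
    return [a for a in nn_boxes
            if any(iou(a, b) >= iou_threshold for b in cv_boxes)]

# A detection both pipelines see survives; one only the NN sees is dropped.
print(fuse([(0, 0, 10, 10), (50, 50, 60, 60)], [(1, 1, 10, 10)]))
```

Requiring agreement like this trades recall for precision, which is the conservative choice the commenter is describing.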
There is nothing 'AI' happening here; this is a safety feature that provides a backstop for fallible, classically programmed autonomous systems.
The car isn't driven by 'AI'. There are no AI models – and certainly no LLMs like ClaudeAI – that can reliably make the kind of high-level decisions being made by the humans in question here, nor are those humans driving the cars.
Reinvent the wheel as a square and beat me silly over the head with it – that's what I take away when it comes to AI. I have yet to be impressed, but I have noticed the diminishing returns, how nothing works the way it should, and all the new problems we never had before. AI = Crime
Thank you for pointing this out. Given how new this level of autonomous driving is, how is this really different from a new driver getting stuck, not knowing how to react, and reaching out to someone else to ask how they should approach the situation?
Then, as they learn (both the newbie and the autonomous system), they get better and know what to do, or can infer it, the next time they are in a similar situation.
Exactly.
Unlike with LLMs, where training text is freely available thanks to the Internet, we don't have decades and decades of data from people navigating the world (or, in robotics, things like limb positioning), so it is much harder to train these systems – you have to create the training data yourself. The support workers are generating that data as they keep the system working.
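A minimal sketch of that data-generation loop, with an entirely invented log format: each time a remote agent resolves a low-confidence scene, the sensor snapshot and the human's answer can be stored together as a labeled training example for later use.

```python
# Hypothetical intervention log: every human correction doubles as a
# (input, label) training pair. Field names are invented for illustration.

training_log = []

def record_intervention(sensor_snapshot, human_label):
    """Store what the car saw alongside what the human decided about it."""
    training_log.append({"input": sensor_snapshot, "label": human_label})

# e.g. the agent marks a confusing shape as a worker, not drivable space
record_intervention(
    {"camera": "frame_0231", "lidar": "sweep_0231"},
    {"object": "construction_worker", "drivable": False},
)
```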
I understand the general 'AI-bad' sentiment, but in this case I'd be less likely to take an autonomous taxi if it DIDN'T have a human support team waiting to jump in.
Great Halloween costume idea for any Filipinos
Just what I want from my taxi driver: hundreds of milliseconds of latency
And just what happens when the network goes down? Some kind of degraded operating mode? Does the car just slam the brakes regardless of what it's doing?
The cars don't rely on a network connection to navigate, that is all done locally via autonomous systems. It does not use human drivers on the other side of the planet.
If the car gets into a situation where it isn't sure what to do, for example if it encounters construction, or confusing/ambiguous lane markings then it needs a person to look at the cameras/sensors and determine what is/isn't safe.
In order for the fare not to be bothered (nobody wants their taxi asking them where to go), the company uses remote workers who can be available in seconds. The worker does something like look at the video and use a tool to paint the road green, or issue a new navigation waypoint to get around the obstruction.
They're looking at video and clicking waypoints on a map, they're not holding a steering wheel and pedals.
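Hypothetically, the guidance message such a worker sends back might look like the structure below – note what it contains (annotations and waypoints) and what it deliberately lacks (steering, throttle, brake). Every field name here is invented for illustration:

```python
# Invented guidance-message shape: context for the onboard planner,
# never direct vehicle controls.

from dataclasses import dataclass, field

@dataclass
class Guidance:
    drivable: list = field(default_factory=list)   # areas "painted green" as safe
    blocked: list = field(default_factory=list)    # marked hazards to route around
    waypoints: list = field(default_factory=list)  # new points to re-plan through
    # Conspicuously absent: steering angle, throttle, brake. The onboard
    # planner stays in control and merely re-plans using this context.

g = Guidance(waypoints=[(37.77, -122.42)])  # "go around via this point"
```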
Didn't this happen in San Francisco, California, USA, a few months ago? The Waymos all stopped.
https://theautowire.com/2025/12/31/driverless-waymo-vehicles-stall/
And now they are only working 4 day weeks due to the oil crisis so you better pick the right day.
https://www.aljazeera.com/video/newsfeed/2026/3/11/4-day-week-fewer-car-trips-in-philippines-as-iran-fallout-bites
Good we should all be working 4 (or really 3) day weeks.
I imagine that businesses would stagger the days off so that there is still ample coverage.
Another example of AI: Actually Indians (well, Filipinos). Maybe we should just let people do people jobs, especially if you're not even going to be able to fully rely on computers to do the job?
lol, tf is with the 2 downvotes on a comment that's essentially saying let humans work and earn a living instead of relying on failed/non-functioning tech that's only running on lies..
who seriously doesn't agree with letting people work
🤷 who knows? The world is full of strange people
