THE CYBERCAB CONUNDRUM.

GROK:

No, this milestone does not mean FSD Supervised (vision-only) is now 100% safe or "curve-ball free."
The 10+ billion-mile milestone (now reported as just over 10.02 billion miles total, roughly 3.77 billion of them city miles) is a huge achievement in data scale for Tesla's AI training. It shows the fleet is rapidly accumulating real-world experience under supervision, which helps the neural nets improve at handling complex driving.
Tesla's own FSD safety report (linked directly from the post) highlights that vehicles using FSD Supervised see 7x fewer major collisions, 7x fewer minor collisions, and 5x fewer off-highway collisions compared to relevant baselines (like manual Tesla driving or U.S. averages).
However, several key realities show it's not 100% safe or free of surprises:
  • It's still explicitly "Supervised" — Tesla requires the driver to stay attentive and ready to intervene at any time. If the system were truly 100% reliable with no curve-balls left, supervision wouldn't be needed. The terminology in the post and report itself confirms this.
  • Crash rates are lower, not zero — Tesla reports significantly better safety stats than the U.S. national average (sometimes described as approaching or hitting ~7-10x safer in certain categories depending on the quarter). But this is probabilistic improvement, not perfection. Real crashes still happen (Tesla only reports actual collisions in its data, not near-misses or interventions). Independent analysis notes that the data doesn't support claims of flawless safety, and factors like driver demographics, road types, and selective usage can influence the numbers.
  • Vision-only cameras are Tesla's bet — They've stuck with cameras + neural nets (no radar/lidar since ~2021-2022) because the massive fleet data lets the AI learn human-like perception. It performs impressively in most conditions, but edge cases (heavy rain, fog, unusual lighting, construction, or rare "long-tail" events) remain challenges. No autonomous system—vision-only or multi-sensor—has eliminated every possible surprise yet.
  • The "10 billion miles" threshold — Elon Musk previously noted this scale of data as roughly what's needed for unsupervised self-driving. Reaching it here is still under supervision, and fleet-wide unsupervised FSD (e.g., for robotaxis) isn't live yet. Small-scale unsupervised testing exists in places like Texas, but the milestone announcement keeps the "Supervised" label.
In short, 10 billion supervised miles shows that FSD is safer than average human driving in the data Tesla tracks, and that it is advancing fast toward unsupervised autonomy.
But "100% safe and curve-ball free" would mean zero need for a human backup and zero risk in every scenario—which no company (Tesla included) claims for any current self-driving tech. Real-world driving always has unpredictability.
The system is getting dramatically better, but it's not there yet.
Yes — removing the steering wheel and pedals (as in Tesla’s Cybercab robotaxi design) does still introduce a small but non-zero safety risk today, even with the 10+ billion supervised miles milestone.
Here’s why, based on the latest status as of May 2026:
  • Current FSD is still “Supervised” for consumer vehicles — Unsupervised FSD (where the car drives itself with zero driver attention or intervention required) is not yet available in customer-owned Teslas. Elon Musk stated in Tesla’s Q1 2026 earnings call (April 2026) that it’s now targeted for Q4 2026 at the earliest, with a gradual, geography-limited rollout only in areas where safety data is proven.
  • Cybercab/robotaxi is the first no-controls vehicle — Production of the steering-wheel-and-pedal-free Cybercab has begun at Giga Texas (as of April 2026). Tesla is already running limited unsupervised paid robotaxi rides in places like Austin, Dallas, and Houston using a small fleet (~33 vehicles) without safety drivers. These are true “no wheel, no pedals” operations relying entirely on vision-only FSD.
  • The risk logic holds — In supervised mode, the human is the backup for the rare “curve-ball” (edge case, poor visibility, unusual construction, etc.). When you remove the wheel and pedals entirely, that backup disappears. Any failure — even one that happens once every several million miles — now directly affects passengers with no immediate human override possible. Tesla’s own safety report still only covers supervised miles, showing excellent results (7× fewer major collisions vs. baselines), but unsupervised real-world performance at scale is still in early testing.
  • Regulators are explicitly concerned — The U.S. NHTSA has escalated its investigation into FSD crashes (especially in fog/low-visibility where cameras struggled), precisely because Tesla is moving toward no-wheel, no-pedal vehicles. This probe is active and could lead to recalls or delays.
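The "risk logic" bullet above is essentially a Poisson argument: even a failure rate as low as once every several million miles, multiplied across a large pool of unsupervised miles with no human override available, yields a non-trivial probability of at least one incident. A minimal sketch with assumed numbers (both inputs below are hypothetical, not measured Tesla figures):

```python
import math

# Hypothetical inputs -- illustrative assumptions only, not Tesla data.
MILES_PER_CRITICAL_FAILURE = 5_000_000   # assumed: one edge-case failure per 5M miles
ANNUAL_ROBOTAXI_MILES = 50_000_000       # assumed yearly mileage of a no-wheel fleet

# Expected number of failures per year (the Poisson rate parameter).
lam = ANNUAL_ROBOTAXI_MILES / MILES_PER_CRITICAL_FAILURE

# Probability of at least one failure in that year: 1 - e^(-lambda).
p_at_least_one = 1 - math.exp(-lam)

print(f"Expected failures per year:   {lam:.1f}")
print(f"P(at least one failure/year): {p_at_least_one:.4f}")
# With no steering wheel or pedals, each such event has no human backup,
# so the residual risk falls entirely on the unsupervised system.
```

This is why a per-mile failure rate that looks negligible for one car becomes the governing safety number for a fleet: the expected count scales linearly with total miles driven.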
Tesla's position is clear: with the massive real-world data now in hand, unsupervised FSD (and therefore wheel-less operation) will soon be statistically far safer than a human driver, and the robotaxi fleet is the proof point. Small-scale unsupervised testing is already happening, and the company expects to scale it rapidly once geographies are validated.
But from a pure safety standpoint, yes — no steering wheel or pedals still carries a residual risk today because the system, while dramatically improved, has not yet eliminated every possible surprise across billions of unsupervised miles in every condition.
That risk is getting smaller by the day as the neural nets train, but it’s not zero until unsupervised deployment proves it at full scale.