tesla fsd beta v12 goes camera only
see also: Latency Budget · Platform Risk
Tesla pushed FSD Beta V12 to a wider set of drivers, promising that a single end-to-end neural network can learn the whole driving policy (Electrek). V12 keeps the camera-only sensor suite Tesla committed to when it dropped radar, and the release reignites debate about whether cameras alone can deliver Level 4 behavior.
scene cut
V12 maps camera input directly to steering and throttle with a single neural network, replacing most of the hand-coded control heuristics. Drivers reported smoother turns but also erratic lane choices in rain.
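The shape of that pipeline can be sketched abstractly. The class and sizes below are hypothetical, not Tesla's architecture: a toy linear policy standing in for the real network, just to show the interface "pixels in, actuator commands out" with no hand-coded rules in between.

```python
import numpy as np

class EndToEndPolicy:
    """Toy stand-in for an end-to-end driving policy: one linear layer
    from flattened camera pixels to (steering, throttle). Weights here
    are random; a real system would learn them from fleet data."""

    def __init__(self, frame_shape=(32, 32, 3), seed=0):
        rng = np.random.default_rng(seed)
        n_inputs = int(np.prod(frame_shape))
        self.w = rng.normal(0.0, 0.01, size=(n_inputs, 2))
        self.b = np.zeros(2)

    def act(self, frame):
        # scale 8-bit camera frame to [0, 1] and flatten
        x = frame.astype(np.float32).ravel() / 255.0
        steering_raw, throttle_raw = x @ self.w + self.b
        # squash into actuator ranges: steering [-1, 1], throttle [0, 1]
        steering = float(np.tanh(steering_raw))
        throttle = float(1.0 / (1.0 + np.exp(-throttle_raw)))
        return steering, throttle
```

The point of the sketch is the debugging problem mentioned below: when the whole policy is one learned function, there is no module boundary where an odd lane decision can be isolated.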
signal braid
- The end-to-end approach mirrors what [[gpt-4 release recalibrates hallucination debate]] showed: once a model owns the whole stack, debugging becomes cultural, not modular.
- Dropping radar contradicts the sensor fusion path Waymo and Cruise take; Tesla is betting cost and data volume beat redundancy.
- Regulators already second-guess camera-only claims, so expect scrutiny similar to the heat on [[starship flight 3 balances rapid iteration]].
risk surface
- Software regressions are harder to trace when the policy is a monolith.
- Any high-profile crash could trigger broad NHTSA action.
- Tesla’s release cadence depends on real-world data; a pause in usage slows iteration.
my take
Tesla is forcing autonomy to be a software product, but that raises the question of how you prove a neural policy is safe in the first place.
linkage
- tags
  - #autonomy
  - #automotive
  - #2023
- related
  - [[gpt-4 release recalibrates hallucination debate]]
  - [[starship flight 3 balances rapid iteration]]
ending questions
What telemetry proof would convince regulators that V12’s camera-only policy is acceptably safe?
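One common way to frame the statistical bar (my framing, not from any regulator): under a Poisson model, observing zero critical events in N miles bounds the true event rate at roughly -ln(1 - confidence) / N, the "rule of three" at 95%. It makes concrete why fleet scale matters to Tesla's argument.

```python
import math

def max_event_rate(miles_without_event, confidence=0.95):
    """Upper confidence bound on the per-mile rate of a critical event,
    given zero such events observed over the given mileage (Poisson model).
    At 95% confidence this is the classic 'rule of three': ~3 / N."""
    return -math.log(1.0 - confidence) / miles_without_event

# 100 million intervention-free miles still only bounds the rate
# at about 3e-8 critical events per mile at 95% confidence.
bound = max_event_rate(1e8)
```

The flip side is the risk-surface point above: a pause in real-world usage directly shrinks N and weakens any such bound.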