Yet it is quite odd that Tesla also reports that untrained customers using old versions of FSD with outdated hardware average 1,500,000 miles per minor collision [1], a literal 3,000% difference, when there are no penalties for incorrect reporting.
Consumer supervision is having all the controls of the car right there in front of you. And if you are doing it right, you have your hands on the wheel and your foot on the pedals, ready to jump in.
Accident rates under traditional cruise control are also far below average.
Why?
Because people use cruise control (and FSD) under specific conditions. Namely: good ones! Ones where accidents already happen at a way below-average rate!
Tesla has always been able to publish the data required to really understand performance, which would be normalized by age of vehicle and driving conditions. But they have not, for reasons that have always been obvious but are absolutely undeniable now.
The only problem is, it doesn't work.
That was the case when they first started the trial in Austin. The employee in the car was a safety monitor sitting in the front passenger seat with an emergency brake button.
Later, when they started expanding the service area to include highways, they moved them to the driver's seat on those trips so that they could take over completely if something unsafe happened.
Externalized risks and costs are essential for many businesses to operate. It isn't great, but it's true. Our lives are possible because of externalized costs.
They advertise and market a safety claim of 986,000 non-highway miles per minor collision. In other words, they are claiming, with the lives of their customers and the public at risk, that their objectively inferior product with objectively worse deployment controls is 1,700% better than their most advanced product under careful controls and scrutiny, and there are no penalties for incorrect reporting.
https://www.rubensteinandrynecki.com/brooklyn/taxi-accident-...
Generally about 1 accident per 217k miles. Which still means that Tesla is having accidents at a 4x rate. However, there may be underreporting and that could be the source of the difference. Also, the safety drivers may have prevented a lot of accidents too.
I think Tesla's goose is cooked. They need a full suite of sensors ASAP. Get rid of Elon and you'll see an announcement in weeks.
Tesla needs their FSD system to be driving hundreds of thousands of miles without incident. Not the 5,000 miles Michael FSD-is-awesome-I-use-it-daily Smith posts incessantly on X about.
There is this mismatch where overrepresented people who champion FSD say it's great and has no issues, and the reality is none of them are remotely close to putting in enough miles to cross the "it's safe to deploy" threshold.
A fleet of robotaxis will do more FSD miles in an afternoon than your average Tesla fanatic will do in a decade. I can promise you that Elon was sweating hard during each of the few unsupervised rides they have offered.
Almost there. Humans kill one person every 100 million miles driven. To reach mass adoption, self-driving cars need to kill one every, say, billion miles. Which means dozens or hundreds of billions of miles driven to reach statistical significance.
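For a sense of scale, here is a rough back-of-the-envelope sketch (mine, not from any source in this thread) using a standard Poisson confidence bound; the rates are the ones quoted above and the mileage figures are purely illustrative.

```
# Hypothetical illustration: how many miles before the fatality rate is
# statistically pinned down? Assumes fatalities follow a Poisson process;
# the exact 95% upper bound on the mean with k observed events is
# chi2.ppf(0.95, 2*(k+1)) / 2.
from scipy.stats import chi2

HUMAN_RATE = 1 / 100e6   # ~1 fatality per 100 million miles (figure from the comment)
TARGET_RATE = 1 / 1e9    # the hypothetical "1 per billion miles" target

def upper_bound(deaths: int, miles: float, conf: float = 0.95) -> float:
    """95% upper confidence bound on fatalities per mile."""
    return chi2.ppf(conf, 2 * (deaths + 1)) / 2 / miles

for miles in (1e8, 1e9, 1e10, 1e11):
    deaths = round(TARGET_RATE * miles)  # what a true 1-per-billion fleet would expect to see
    ub = upper_bound(deaths, miles)
    print(f"{miles / 1e9:7.1f}B miles, {deaths:4d} deaths -> "
          f"95% bound: 1 per {1 / ub / 1e6:5.0f}M miles "
          f"(better than human average: {ub < HUMAN_RATE})")
```

Even at 100 billion miles the upper bound only tightens to roughly 1 per 850 million miles, so actually demonstrating a 1-per-billion rate really does take mileage on that order.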
People have an expectation that self-driving cars will be magical in ability. Look at the flak Waymo has received despite its most egregious violations being fender-bender equivalents.
Important correction: “kill one or fewer per billion miles”. Before someone reluctantly engineers an intentional sacrifice to meet their quota.
You can prove Tesla's system is a joke with a multitude of metrics.
They need to be around parity. So a death every 100mm miles or so. The number of folks who want radically more safety is about balanced by the number who want a product in market quicker.
I don't think so.
The deaths from self-driving accidents will look _strange_ and _inhuman_ to most people. The negative PR from self-driving accidents will be much worse for every single fatal collision than a human driven fatality.
I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen. Maybe not a full order of magnitude safer, but I think it will need to be clearly safer than human drivers and not just at parity.
We're speaking in hypotheticals about stuff that has already happened.
> I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen
I used to as well. And no doubt, some populations will take this view.
They won't have a stake in how self-driving cars are built and regulated. There is too much competition between U.S. states and China. Waymo was born in Arizona and is now growing up in California and Florida. Tesla is being shaped by Texas. The moment Tesla or BYD get their shit together, we'll probably see federal preëmption.
(Contrast this with AI, where local concerns around e.g. power and water demand attention. Highways, on the other hand, are federally owned. And D.C. exerting local pressure with one hand while holding highway funds in the other is long precedented.)
I like to quip that error-rate is not the same as error-shape. A lower rate isn't actually better if it means problems that "escape" our usual guardrails and backup plans and remedies.
You're right that some of it may just be a perception-issue, but IMO any "alien" pattern of failures indicates that there's a meta-problem we need to fix, either in the weird system or in the matrix of other systems around it. Predictability is a feature in and of itself.
Maybe the better solution is to denormalize people being dismembered, decapitated, and crushed by heavy machinery operated in public mostly by incompetents (who we can't possibly prevent from driving because we've chosen to make it impossible to live without driving).
There is nothing _human_ or _normal_ about this. The widespread ignorance of the danger we're forced to put ourselves in to go to the grocery store borders on mass psychosis.
A self-driving car that merely achieves parity would be worse than 98% of the population.
Gotta do twice the accident-free mileage to achieve parity with the sober 98%.
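Just to make the "twice" figure concrete: a quick sketch with purely illustrative numbers (the impaired-driving shares below are my assumptions, not data from this thread). If impaired drivers account for a large share of fatalities but a small share of miles, the per-mile rate of the sober majority sits well below the population average.

```
# Illustrative numbers only; the formula is the point.
avg_rate = 1 / 100e6            # population-average fatality rate per mile (from upthread)
impaired_fatality_share = 0.50  # assumed share of fatalities involving impaired drivers
impaired_mile_share = 0.02      # assumed share of miles driven impaired (the "2%")

sober_rate = avg_rate * (1 - impaired_fatality_share) / (1 - impaired_mile_share)
print(f"sober 98%: ~1 fatality per {1 / sober_rate / 1e6:.0f}M miles "
      f"vs 1 per {1 / avg_rate / 1e6:.0f}M miles for the average driver")
# With these assumptions the sober majority is roughly twice as safe as the
# population average, which is the bar a merely at-parity system would miss.
```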
1 in a billion might be a conservative target. I can appreciate that statistically, reaching parity should be a net improvement over the status quo, but that only works if we somehow force 100% adoption. In the meantime, my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's.
To be clear, I'm not arguing for what it should be. I'm arguing for what it is.
I tend to drive the speed limit. I think more people should. I also recognise there is no public support for ticketing folks going 5 over.
> my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's
All of these services are supply constrained. That's why I've revised my hypothesis. There are enough folks who will take that car before you get comfortable to make it lucrative to fill streets with them.
(And to be clear, I'll ride in a Waymo or a Cybercab. I won't book a ride with a friend or my pets in the latter.)
I think they do. That's the whole point of brand value.
Even my non-tech friends seem to know that with self-driving, Waymo is safe and Tesla is not.
Once Elon put himself at the epicenter of American political life, Tesla stopped being treated as a brand, and more a placeholder for Elon himself.
Waymo has excellent branding and first to market advantage in defining how self-driving is perceived by users. But, the alternative being Elon's Tesla further widens the perception gap.
I'm probably not the average consumer in this situation but I was in Austin recently and took both Waymo and Robotaxi. I significantly preferred the Waymo experience. It felt far more integrated and... complete? It also felt very safe (it avoided getting into an accident in a circumstance where I certainly would have crashed).
I hope Tesla gets their act together so that the autonomous taxi market can engage in real price discovery instead of "same price as an Uber but you don't have to tip." Surely it's lower than that especially as more and more of these vehicles get onto the road.
Unrelated to driving ability but related to the brand discussion: that graffiti font Tesla uses for Cybertruck and Robotaxi is SO ugly and cringey. That alone gives me a slight aversion.
I don't know what a clear/direct way of explaining the difference would be.
The robotaxi market is much broader than the submersibles one, so the effect of consumers' irrationality would be much bigger there. I'd expect the average customer in the submersibles market to do quite a bit more research on what they're getting into.
Totally rational.
A small number of humans bring a bad name to the entire field of regular driving.
> The average consumer isn't going to make a distinction between Tesla vs. Waymo.
What's actually "distinct?" The secret sauce of their code? It always amazed me that corporate giants were willing to compete over cab rides. It sort of makes me feel, tongue in cheek, that they have fully run out of ideas.
> they will assume all robotic driving is crash prone
The difference in failure modes between regular driving and autonomous driving is stark. Many consumers feel the overall compromise is unviable even if the error rates between providers are different.
Watching a Waymo drive into oncoming traffic, pull over, and then hearing a tech support voice talk to you over the nav system is quite the experience. You can have zero crashes, but if your users end up in this scenario, they're not going to appreciate the difference.
They're not investors. They're just people who have somewhere to go. They don't _care_ about "the field". Nor should they.
> dangerous and irresponsible.
These are, in fact, pilot programs. Why this lede always gets buried is beyond me. Instead of accepting the data and incorporating it into the world view here, people just want to wave their hands and dissemble over how difficult this problem _actually_ is.
Hacker News has always assumed this problem is easy. It is not.
That’s the problem right there.
It’s EXTREMELY hard.
Waymo has very carefully increased its abilities, tip-toeing forward little by little until after all this time they’ve achieved the abilities they have with great safety numbers.
Tesla appears to continuously make big jumps they seem totally unprepared for, yelling “YOLO,” and then expects to be treated the same when it doesn’t work out by saying “but it’s hard.”
I have zero respect for how they’ve approached this since day 1 of autopilot and think what they’re doing is flat out dangerous.
So yeah. Some of us call them out. A lot. And they seem to keep providing evidence we may be right.
Genuine question though: has Waymo gotten better at their reporting? A couple years back they seemingly inflated their safety numbers by sanitizing the classifications with a subjective “a human would have crashed too, so we don’t count it as an accident.” That is measuring something quite different from how safety numbers are colloquially interpreted.
It seems like there is a need for more standardized testing and reporting, but I may be out of the loop.
Driving around in good weather and never on freeways is not much of an achievement. Having vehicles that continually interfere in active medical and police cordons isn't particularly safe, even though there haven't been terrible consequences from it, yet.
If all you're doing is observing a single number, you're drastically underprepared for what happens when they expand this program beyond these paltry self-imposed limits.
> Some of us call them out.
You should be working to get their certificate pulled at the government level. If this program is so dangerous then why wouldn't you do that?
> And they seem to keep providing evidence we may be right.
It's tragic you can't apply the same logic in isolation to Waymo.
The difference is that accidents on a freeway are far more likely to be fatal than accidents on a city street.
Waymo didn't avoid freeways because they were hard, they avoided them because they were dangerous.
LIDAR gives Waymo a fundamental advantage.
Tesla FSD is crap. But I also think we wouldn't see quite so much praise of Waymo unless Tesla also had aspirations in this domain. Genuinely, what is so great about a robotaxi even if it works well? Do people really hate immigrants this much?
In some spaces we still have rule of law - when xAI started doing the deepfake nude thing we kind of knew no one in the US would do anything but jurisdictions like the EU would. And they are now. It's happening slowly but it is happening. Here though, I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
the issue is that these tools are widely accessible, and at the federal level, the legal liability is on the person who posts it, not who hosts the tool. this was a mistake that will likely be corrected over the next six years
due to the current regulatory environment (trump admin), there is no political will to tackle new laws.
> I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
unlike deepfakes, there are extensive road safety laws and civil liability precedent. texas may be pushing tesla forward (maybe partially for ideological reasons), but it will be an extremely hard sell to get any of the major US cities to get on board with this.
so, no, i don't think you will see robotaxis on the roads in blue states (or even most red states) any time soon.
Truly baffled by this genre of comment. "I don't think you will see <thing that is already verifiably happening> any time soon" is a pattern I'm seeing way more lately.
Is this just denying reality to shape perception or is there something else going on? Are the current driverless operations after your knowledge cutoff?
In the specific case of grok posting deepfake nudes on X, doesn't X both create and post the deepfake?
My understanding was, Bob replies in Alice's thread, "@grok make a nude photo of Alice" then grok replies in the thread with the fake photo.
Where grok is at risk is in not responding after they are notified of the issue. It’s trivial for grok to ban some keywords here and they aren’t; that’s a legal issue.
Sure, in this context the person who mails the item is the one instigating the harassment but it's the postal network that's facilitating it and actually performing the "last mile" of harassment.
https://faq.usps.com/s/article/What-Options-Do-I-Have-Regard...
You may file PS Form 1500 at a local Post Office to prevent receipt of unwanted obscene materials in the mail or to stop receipt of "obscene" materials in the mail. The Post Office offers two programs to help you protect yourself (and your eligible minor children).
Legal things can be immoral, and immoral things can be legal. We have a duty to live morally; the law is only words in books.
[citation needed]
Historically hosts have always absolutely been responsible for the materials they host, see DMCA law, CSAM case law...
if you think i said otherwise, please quote me, thank you.
> Historically hosts have always absolutely been responsible for the materials they host,
[citation needed] :) go read up on section 230.
for example with dmca, liability arises if the host acts in bad faith, generates the infringing content itself, or fails to act on a takedown notice
that is quite some distance from "always absolutely". in fact, it's the whole point of 230
That ain't true [1].
Teslas are really cheaply made, inadequate cars by modern standards. The interiors are terrible and barebones even compared to mainstream cars like a Toyota Corolla. And they lack parking sensors, depending on the version you bought. I believe current models don’t come with a surround-view camera either, which is almost standard on all cars at this point and very useful in practice. I guess I am not surprised the Robotaxis are also barebones.
Getting this to a place where it is consistently better than humans is not equivalent to fixing bugs in production software used on phones, etc.
When you are dealing with a dynamic uncontained environment it is much more difficult.
Any engineering student can understand why LIDAR+Radar+RGB is better than just a single camera; and any person moderately aware of tech can realize that digital cameras are nowhere near as good as the human eye.
But yeah, he's a genius or something.
> What this really reflects is that Tesla has painted itself into a corner. They've shipped vehicles with a weak sensor suite that's claimed to be sufficient to support self-driving, leaving the software for later. Tesla, unlike everybody else who's serious, doesn't have a LIDAR.
> Now, it's "later", their software demos are about where Google was in 2010, and Tesla has a big problem. This is a really hard problem to do with cameras alone. Deep learning is useful, but it's not magic, and it's not strong AI. No wonder their head of automatic driving quit. Karpathy may bail in a few months, once he realizes he's joined a death march.
> ...
https://news.ycombinator.com/item?id=14600924
Karpathy left in 2022. Turns out that the commenter, Animats, is John Nagle!
Beyond even the cameras themselves, humans can move their head around, use sun visors, put on sunglasses, etc to deal with driving into the sun, but AVs don't have these capabilities yet.
Photon counting is a real thing [1] but that's not what Tesla claims to be doing.
I cannot tell if what they are doing is something actually effective that they should have called something other than "photon counting" or just the usual Musk exaggerations. Anyone here familiar with the relevant fields who can say which it is?
Here's what they claim, as summarized by whatever it is Google uses for their "AI Overview".
> Tesla photon counting is an advanced, raw-data approach to camera imaging for Autopilot and Full Self-Driving (FSD), where sensors detect and count individual light particles (photons) rather than processing aggregate image intensity. By removing traditional image processing filters and directly passing raw pixel data to neural networks, Tesla improves dynamic range, enabling better vision in low light and high-contrast scenarios.
It says these are the key aspects:
> Direct Data Processing: Instead of relying on image signal processors (ISPs) to create a human-friendly picture, Tesla feeds raw sensor data directly into the neural network, allowing the system to detect subtle light variations and near-IR (infrared) light.
> Improved Dynamic Range: This approach allows the system to see in the dark exceptionally well by not losing information to standard image compression or exposure adjustments.
> Increased Sensitivity: By operating at the single-photon level, the system achieves a higher signal-to-noise ratio, effectively "seeing in the dark".
> Elimination of Exposure Limitations: The technique helps mitigate issues like sun glare, allowing for better visibility in extreme lighting conditions.
> Neural Network Training: The raw, unfiltered data is used to train Tesla's neural networks, allowing for more robust, high-fidelity perception in complex, real-world driving environments.
You can solve this by having multiple cameras for each vantage point, with different sensors and lenses that are optimized for different light levels. Tesla isn't doing this, mind you, but with the use of multiple cameras it should be easy enough to exceed the dynamic range of the human eye, so long as you are auto-selecting whichever camera is getting the correct exposure at any given point.
The IMX490 has a dynamic range of 140 dB when spitting out actual images. The neural net could easily be trained on multi-exposure data to account for both extremely low and extremely high light. They are not trying to create SDR images.
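As a concrete illustration of the multi-camera idea in the two comments above, here is a minimal sketch (my own, not anything Tesla or Waymo is known to ship). It assumes co-located cameras producing linear 16-bit raw frames at bracketed exposures; the thresholds are arbitrary.

```
import numpy as np

def usable_fraction(frame: np.ndarray, lo: int = 256, hi: int = 60000) -> float:
    """Fraction of pixels that are neither crushed to black nor clipped to white."""
    return float(np.mean((frame > lo) & (frame < hi)))

def select_best_exposure(frames: list[np.ndarray]) -> np.ndarray:
    """Auto-select the camera whose frame has the most usable pixels."""
    return max(frames, key=usable_fraction)

def merge_exposures(frames: list[np.ndarray], exposure_times: list[float]) -> np.ndarray:
    """Naive HDR merge: scale each frame by its exposure time and average only
    the well-exposed pixels, extending dynamic range beyond any single frame."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weight = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        mask = (frame > 256) & (frame < 60000)
        acc += np.where(mask, frame / t, 0.0)
        weight += mask
    return acc / np.maximum(weight, 1)  # per-pixel linear radiance estimate
```

Either path, picking the best-exposed camera or merging a bracketed set into a linear radiance map for the network to train on, gets around the single-exposure limit without any exotic "photon counting."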
Please, let's stop with the dynamic range bullshit. Point your phone at the sun next time you're blinded in your car. Or use night mode. Both see better than you.
To me it looks like they will reach parity at about the same time, so camera-only is not totally stupid. What's stupid is forcing robotaxis onto the road before the technology is ready.
It's far from clear that the current HW4 + sensor suite will ever be sufficient for L4.
Nah, Waymo is much safer than Tesla today, while Tesla has way-mo* data to train on and much more compute capacity in their hands. They're in a dead end.
Camera-only was a massive mistake. They'll never admit to it because there are now millions of cars out there that will be perceived as defective if they do. This is the decision that will sink Tesla, you'll see. But hail Karpathy, yeah.
* Sorry, I couldn't resist.
Or did he "resign" because Elon insists on camera-only and Karpathy said "I can't do it"?
Technology is just not there yet, and Elon is impatient.
Waymo could be working on camera-only. I don’t know. But it’s not controlling the car. And until such time as they can prove with their data that it is just as safe, that seems like a very smart decision.
Tesla is not taking such a cautious approach. And they’re doing it on public roads. That’s the problem.
No reason to assume that. A toddler whose walking speed increases every month will still never outrun a cheetah.
For those complaining about Tesla's redactions: fair and good. That said, Tesla formed its media strategy at a time when gas-car companies and shorts bought ENTIRE MEDIA ORGs just to trash them and back their shorts. Their hopefulness about a good showing on the media side died with Clarkson and co. faking dead batteries in a Roadster test. So, yes, they're paranoid, but they also spent years with everyone out to get them.
Are you being sarcastic due to Elon buying Twitter to own/control the conversation? He would be a poster child for the bad actions you are describing.
“13781-13644 Street, Heavy truck, No injuries, Proceeding Straight (Heavy truck: parked), 4mph, contact area: left”
[1] https://www.businessinsider.com/musks-claim-teslas-appreciat...
I'm curious how crashes are reported for humans, because it sounds like 3 of the 5 examples listed happened at like 1-4 mph, and the fourth probably wasn't Tesla's fault (it was stationary at the time). The most damning one was a collision with a fixed object at a whopping 17 mph.
Tesla sucks, but this feels like clickbait.
> What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database, Waymo, Zoox, Aurora, Nuro, provides detailed narratives explaining what happened in each crash. Tesla redacts everything. We cannot independently assess whether Tesla’s system was at fault, whether the safety monitor failed to intervene in time, or *whether these were unavoidable situations caused by other road users*. Tesla wants us to trust its safety record while making it impossible to verify.
While I was living in NYC I saw collisions of that nature all the time. People put a "bumper buddy" on their car because the street parallel parking is so tight and folks "bump" the car behind them while trying to get out.
My guess is that at least 3 of those "collisions" are things that would never be reported with a human driver.
My suspicion is that these kinds of minor crashes are simply harder to catch for safety drivers, or maybe the safety drivers did intervene here and slow down the car before the crashes. I don't know if that would show in this data.
So the average driver is also likely a bad driver by your standard. Your standard seems reasonable.
The data is inconclusive on whether Tesla robotaxi is worse than the average driver.
Unlike humans, Waymo does report 1-4 mph collisions. The data is very conclusive that Robotaxi is significantly worse than Waymo.
> The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.
So in reality there was one crash with a fixed object; the rest are questionable and not crashes as you portray them. Such statistics would not even make it into human crash reports, since they fall under non-driving incidents, parking lots, etc.
Though maybe the safety drivers are good enough for the major stuff, and the software is just bad enough at low speed and low distance collisions where the drivers don't notice as easily that the car is doing something wrong before it happens.
We are still a long, long, long way off from anyone feeling comfortable jumping into an FSD cab on a rainy night in New York.
https://www.cnbc.com/2026/01/22/musk-tesla-robotaxis-us-expa...
Tesla CEO Elon Musk said at the World Economic Forum in Davos that the company’s robotaxis will be “widespread” in the U.S. by the end of 2026.
If Tesla drops the ego, they could obtain Waymo's software and track record for future Tesla hardware.
Given the way Musk has lied and lied about Tesla's autonomous driving capabilities, that can't be much of a surprise to anyone.
>The new crashes include [...] a crash with a bus while the Tesla was stationary
Doesn't this imply that the bus driver hit the stationary Tesla, which would make the human bus driver at fault and the party responsible for causing the accident? Why should a human driver hitting a Tesla be counted against Tesla's safety record?
It's possible that the Tesla could've been stopped in a place where it shouldn't have, like in the middle of an intersection (like all the Waymos did during the SF power outage), but there aren't details being shared about each of these incidents by Electrek.
>The new crashes include [...] a collision with a heavy truck at 4 mph
The chart shows only that the Tesla was driving straight at 4mph when this happened, not whether the Tesla hit the truck or the truck hit the Tesla.
Again, it's entirely possible that the Tesla hit the truck, but why aren't these details being shared? This seems like important data to consider when evaluating the safety of autonomous systems - whether the autonomous system or human error was to blame for the accident.
I appreciate that Electrek at least gives a mention of this dynamic:
>Tesla fans and shareholders hold on to the thought that the company’s robotaxis are not responsible for some of these crashes, which is true, even though that’s much harder to determine with Tesla redacting the crash narrative on all crashes, but the problem is that even Tesla’s own benchmark shows humans have fewer crashes.
Aren't these crash details / "crash narrative" a matter of public record and investigations? By e.g. either NHTSA, or by local law enforcement? If not, shouldn't it be? Why should we, as a society, rely on the automaker as the sole source of information about what caused accidents with experimental new driverless vehicles? That seems like a poor public policy choice.
No idea how these things are being allowed on the road. Oh wait, yes I do. $$$$
His companies are doomed by his own hand as his reputation is unsalvageable. Someday he'll end up in a well-deserved prison cell and everyone will pretend to never have supported him.
Supposedly neither are Tesla's remote assistants, though there are open questions about why they've posted job descriptions about building a teleop system for their vehicles [0] and why their remote assistant setups have steering wheels if that's completely true.
[0] https://web.archive.org/web/20241211115851/https://www.tesla...
4x worse than humans is misleading; I bet it's better than humans, by a good margin.
Electrek is just summarizing/commenting.
In before, 'but it is a regulation nightmare...'
There's no real discussion to be had on any of this. Just people coming in to confirm their biases.
As for me, I'm happy to make and take bets on Tesla beating Waymo. I've heard all these arguments a million times. Bet some money
[1] https://www.fastcompany.com/91491273/waymo-vehicle-hit-a-chi....
Heard this for a decade now, but I’m sure this year will be different!