If that is actually designed like this, the only reason I could see for it would be so that Tesla has sole access to the data and can decide whether to use it or not. Which really should not work in court, but it seems it has so far.
And of course I'd expect an audit trail for the deletion of crash data on Tesla servers. But who knows whether there actually isn't one, or nobody looked into it at all.
All vehicle manufacturers have sole access to data. There isn't a standard for logging data, nor a standard for retrieving it. Some components log data that only the supplier has the means to read and interpret.
If your car has an EDR, what data it collects is legislated. There is not a standard interface for retrieving it, but the manufacturer is required to ensure that there is a commercially available tool for data retrieval that any third party can use.
https://www.ecfr.gov/current/title-49/subtitle-B/chapter-V/p...
If not, I assume they'll keep losing all incriminating data.
Is this one of those "that's why big cars are cheaper to make" situations?
The EDR is optional. If the manufacturer chooses to install it, it must meet those standards.
I was just refuting the GP's assertion that they are all proprietary and that only the manufacturer can access the data.
From a quick search, it's technically possible to configure some model year F-150s to have a curb weight over 5,500 pounds with all the right options, but most are lower.
Also the rules I posted are only if the manufacturer chooses to equip a recorder. They can opt not to have one.
The point I was making is that the GP was just saying shit that had no basis in fact.
This is not unlike what happens for flight data recorders after a crash. The raw data is not made public right away, if ever.
But the data was mostly unprotected on the devices, or it couldn't have been restored. And Tesla isn't exactly known for respecting the privacy of their customers, they have announced details about accidents publicly before.
And there is the potential conflict of interest, Tesla does have strong incentives to "lose" data that implicates Autopilot or FSD.
If you cause an accident by driving distracted or being reckless I think it's only fair that the facts are known so that you can be punished accordingly. Certainly better than someone innocent having to share responsibility for your mistake.
I think that would probably make people think twice about being reckless and even if it doesn't at least they'll get what they deserve.
For the first kind of data, deleting the data from the car the moment there’s confirmation that it is now stored at Tesla can make perfect sense as a mechanism to prevent the car from running out of storage space.
Of course, if the car crashed, deleting the data isn’t optimal, but the fact that it gets deleted may not be malice.
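As a sketch, that confirm-then-delete flow could be as simple as comparing a digest against what the server claims to hold. All the names here are hypothetical, not Tesla's actual protocol:

```python
import hashlib
from pathlib import Path

def upload_then_delete(path: Path, upload) -> bool:
    """Upload a local log; delete the on-car copy only after the
    server confirms it stored an identical blob. Hypothetical sketch."""
    data = path.read_bytes()
    local_digest = hashlib.sha256(data).hexdigest()
    server_digest = upload(data)  # assumed to return a digest of what was stored
    if server_digest != local_digest:
        return False              # any mismatch: keep the local copy
    path.unlink()                 # reclaim space only after confirmation
    return True
```

Even with that check, crash snapshots arguably deserve an exemption: confirmation that the bytes reached a server says nothing about whether they will still be retrievable later.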
Anytime data is recorded, legal is immediately asking about retention so they don't end up empty-handed in front of a judge.
Every byte that car records and how it is managed will be documented in excruciating detail by legal.
As is deleting data. Also, for, say, training data for Tesla’s software, I don’t see legal requirements for keeping it around.
> There's no chance such a decision is accidently made by reusing code.
At Tesla? I know next to nothing about their software development practices, but coming from them, it wouldn’t surprise me at all if this were accidental.
Edit: one scenario to easily introduce this bug is if the “delete data after upload” feature were added after the “on a crash, upload all data you have, in case the car burns down” feature.
If you selectively delete data, courts can assume that data is the worst possible thing for a court case against you.
In my experience, they are setting automated 90-day deletion policies on email so they don't end up with surprises in discovery.
Like many things, the retention policy was actually a destruction policy
But in the end we wouldn't be discussing this at all if Tesla had simply handed over the data from their servers. Whether they can't find it, it was never actually there, or they deliberately removed it affects how I view this process.
Two copies are better than one. If you immediately erase the data, you better be sure the transmitted data is safe and secure. And obviously it wasn't.
Tesla's fairly notorious for casual treatment of customer car data (which they have a lot of). There was an article, recently, about how in-car video recordings were being passed around the office.
I know that at least one porn actress recorded a scene in a self-driving Tesla. I'll bet that recording made the rounds "for quality purposes."
It's a disclaimer, but it also grants permission for you to record.
I knew a guy who used to record all his calls with companies, and would let them know they were being recorded, if they didn't have that disclaimer.
He would say "This call is being recorded." He told me that most of the companies hung up immediately, when he said that.
I never heard him say that his recording ever did him any good, though.
If someone calls you and declares that they're recording the conversation, you probably should hang up too. It's usually used as a threat by people who intend to use it against you legally somehow. Your friend may have been an exception, but there's no way for the people on the other end to know either.
If you're acting as a representative of a company on the phone, hanging up and informing your manager or legal counsel is a good idea.
As for customer service recording calls: I didn't understand this until I was on the other side of customer support. The number of people who tell lies about interactions with support is insane. These days it's mostly e-mail and therefore easy to look up. You wouldn't believe how many people would try to throw our customer support people under the bus ("Support said you'd give me a free replacement!") until they realize we can go back and check these things.
You should just assume that any phone call with stakes is being recorded and that anything you say can be considered binding. Verbal contracts are valid almost everywhere, so what you say on the phone does have legal consequences regardless of whether it was recorded. Courts will also accept your notes about a phone call as evidence in the absence of a recording.
Understatement of the year when employees are supposedly watching people in their homes from the car.
I don't know how accurate it is right now, but previously, people have had to sue Tesla to get telemetry data from their own vehicle, not to use against Tesla, but to use in accident lawsuits against other parties.
Meanwhile, without your consent, Tesla will hold press conferences using your telemetry data to throw you under the bus (even deceptively) to defend themselves. "The vehicle had told the driver to pay attention!" NHTSA, four months later: "The vehicle had issued one inattention alert, eighteen minutes prior to the collision." (emphasis mine)
At best, the decryption key is somehow custom to each car and not reproducible (e.g. it's made by some random manufacturing process), and Tesla reads this key and encrypts everything so that only that key can open it.

But then do they keep every bit of decrypted data "on die"? (Or do they encrypt RAM too?)
I'd expect them to also have fleet keys for stuff like navigation data. And of course, public-key based firmware signing. That's just table stakes these days.
They tried to recruit me for the UI. If I lived closer, I would have jumped on it. Not only was I bit of a Tesla fanboy at the time, I used to work across the street from their office and really liked that area. (Deer Creek Road in Palo Alto.)
...and potentially death?
So the Tesla detected the vehicle and the pedestrian, and then plans a path through them? Wow! How bad is this software?
Not just detect a pedestrian and plan a path through them. Hit a pedestrian and plan a path through them to finish the job.
You can take a Waymo any time of day in SF, and they provide thousands of successful rides daily.
I'm curious; why does it matter to you how many man-hours Waymo spends on a functional service? Would it be disqualifying if it's "too much" in your estimation?
[1] As popularized in the movie The Mitchells vs. the Machines: https://m.youtube.com/watch?v=LaK_8-3pWKk
> Moments later, court records show, the data was just as automatically “unlinked” from the 2019 Tesla Model S at the scene, meaning the local copy was marked for deletion, a standard practice for Teslas in such incidents, according to court testimony.
Wow...just wow.
Even if Tesla hadn't squandered its EV lead and were instead positioned to be a robotics and AI superpower, is this really the corporate behavior you would want? This is some fucking Aperture Science level corporate malfeasance.
Miles and miles different - they were not completely untouchable the way tesla and similar hot companies are.
In my mind it's like suddenly declaring that blue cars are illegal, and they made a color-shifting car that is blue except when the authorities are looking at it.
It is wrong in the sense that it is normalization of deviance, however. We live in a society, and if we don't like a law or regulation, the correct response is to get it legally changed, not to ignore it and cheat.
> So a bureaucracy just declared that a legal level of emissions was now illegal.
That is not at all what happened and not how emissions standards are deployed. The EPA's Tier 2 standards were finalized in 2000 to phase in during the 2004-2008 model years [1].
[1] https://www.federalregister.gov/documents/2000/02/10/00-19/c...
I see no course correction from Tesla. Just continued, utter tripe from its CEO, team, and Musk-d-riders.

This is an ongoing issue for them, and at this point, with no change in sight, I hope it drives them into the ground (Autopilot, natch).
edit: My point is that it was not one lone actor, who would have made that change.
It's very easy to imagine a response to this being (beyond "don't log so much") an audit layer to start automatically removing redundant data.
The externalities of the company are such that people want to ascribe malice, but this is a very routine kind of thing.
The performance ship sailed, like, 15 years ago. We're already storing about 10,000,000x more data than we need. And that's not even an exaggeration.
I worked a year in airbag control, and they recorded a bit of information if the bags were deployed - seatbelt buckle status was one thing. But I was under the impression there was no legal requirement for that. I'm sure it helps in court when someone tries to sue because they were injured by an airbag. The argument becomes not just "the bags comply with the law" but also "you weren't wearing your seatbelt". Regardless, I'm sure Tesla has a lot more data than that and there is likely no legal requirement to keep it - especially if it's been transferred reliably to the server.
This is like saying "maybe nobody has recently looked at the ad-selection mechanism at Google." That's just not plausible.
It’s just worded to make this sound sketchy. I bet ten bucks “unlinked” just refers to the standard POSIX call for deleting a file.
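For what it's worth, unlink() only removes a file's name (its directory entry); on POSIX systems the data lives on until the last open handle goes away. A quick illustration of that behavior (POSIX-specific):

```python
import os, tempfile

# Create a file, keep a handle open, then unlink it: the name is gone
# immediately, but the data survives until the last handle closes.
tmp_dir = tempfile.mkdtemp()
path = os.path.join(tmp_dir, "snapshot.bin")
with open(path, "wb") as w:
    w.write(b"crash snapshot")

f = open(path, "rb")
os.unlink(path)                       # the POSIX "delete": remove the directory entry
assert not os.path.exists(path)       # no name points at the data any more
assert f.read() == b"crash snapshot"  # but the open handle still sees it
f.close()                             # only now can the blocks be reclaimed
```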
The real issue is that Tesla claimed the company doesn't have the data after every copy was deleted. There's no technical reason to dispose of data related to a crash when you hold so much data on all of the cars in general.
Crash data in particular should be considered sacred, especially given the severity in this case. Ideally it should be kept both on the local black box and on the servers. But anything that leads to it being treated as instantly disposable everywhere, or even just claiming it was deleted, can only be malice.
Exactly. The issue is deleting the data on the servers, not a completely mundane upload-then-delete procedure for phoning home. This should have been one sentence, but instead they make it read like a heist.
My money is on nobody built a tool to look up the data, so they have it, they just can't easily find it.
Also this is not like some process crash dump where the computer keeps running after one process crashed.
This would be like a plane's black box uploading its data to the manufacturer, then deleting itself, after a plane crash.
Deleting the data on the server is totally sketchy, but that’s not what the quoted section is about.
How Tesla could say that detecting a collision and not locking all/any of the data is normal is just insane.
This data is yours. Say you were going the speed limit when the accident happened and everyone else claims you were speeding. It would take forever to clear your name, or worse, you could be convicted if the data was lost.
This is more of "you will own nothing" crap. And mainly so Tesla can cover its ass.
Maybe this is not appropriate for a car, but that doesn’t excuse the ridiculous breathless tone in the quoted text. It’s the worst purple prose making a boring system sound exciting and nefarious. They could have made your point without trying to make the unlink() call sound suspicious.
The rogue engineer defense worked so well for VW and Dieselgate.
The issue of missing crash data was raised repeatedly. Deleting or even just claiming it was deleted can only be a mistake the first time.
So either the problem is Tesla engineers are fucking stupid (doubtful) or this is a poor business/product design.
My money is on the latter.
So we just shrug because software boys gotta be software boys? That’s completely insane and a big reason why a lot of engineers roll their eyes about developers who want to be considered engineers.
Software engineers who work on projects that can kill people must act like the lives of other people depend on them doing their job seriously, because that is the case. Look at the aviation industry. Is it acceptable to have a bug in the avionics suite down planes at random and then delete the black boxes? It absolutely is not, and when anything like that happens shit gets serious (think 737 MAX).
The developers who designed the systems are responsible, and so are their managers who approved the changes, all the way to the top. This would not happen in a company with appropriate processes in place.
I'd argue that this data is far less important in cars. Airline safety has advanced to the point where crashes are extremely rare and usually have a novel cause. Data recorders are important to be able to learn that cause and figure out how to prevent it from happening again.

Car safety, on the other hand, is shit. We don't require rigorous training for the operators. Regulations are lax, and enforcement even more lax. Infrastructure is poor. We're unwilling to fix these things. Almost all safety efforts focus on making the vehicles more robust when collisions occur, and we're just starting to see some effort put into making the vehicles automatically avoid some collisions.

What are we going to learn from this data in cars? "Driver didn't stop for a red light, hit cross traffic." "Driver was drunk." "Driver failed to see pedestrian because of bad intersection design which has been known for fifty years and never been fixed." It's useful for assigning liability but not very useful for saving lives. There's a ton of lower hanging fruit to go after before you start combing through vehicle telemetry to find unknown problems.
Even if you do consider it to be life-critical, uploading the data and then deleting the local copy once receipt is acknowledged seems completely fine, if the server infrastructure is solid. Better than only keeping a local copy, even. The issue there is that they either have inadequate controls allowing data to be deleted, or inadequate ability to retrieve data.
Assuming it's not intentionally malicious, this is a really dumb bug that I could have also written. You zip up a bunch of data, and then you realize that if you don't delete things you've uploaded you will fill up all available storage. So what do you do? You auto-delete anything that successfully makes it to the back-end server and mark the bug fixed, not realizing that you overlooked crash data as something you might want to keep.
I could 100% see this being what is happening.
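Purely as a sketch of how that oversight might look (hypothetical names and logic, not anything from Tesla's actual code):

```python
def cleanup_uploaded(files, acked):
    """Naive space-reclamation pass: delete every local file the
    backend has acknowledged. The bug: nothing exempts crash
    snapshots, so they get swept up with routine telemetry."""
    deleted = []
    for name in list(files):
        if name in acked:
            files.remove(name)
            deleted.append(name)
    return deleted

def cleanup_uploaded_fixed(files, acked):
    """Same pass with the overlooked requirement added: crash
    snapshots stay on the car even after a confirmed upload."""
    deleted = []
    for name in list(files):
        if name in acked and not name.startswith("collision_"):
            files.remove(name)
            deleted.append(name)
    return deleted
```

With a flow like the first version, closing the "disk fills up" ticket without a crash-data test case is exactly how the exemption gets missed.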
> Tesla later said in court that it had the data on its own servers all along
Perhaps if there is some sort of crash.
Any competent engineer who puts more than 3 seconds of thought into the design of that system would conclude that crash data is critical evidence and as many steps as possible should be taken to ensure it's retained with additional fail safes.
I refuse to believe Tesla's engineers aren't at least competent, so this must have been done intentionally.
Which of these is evidence of a conspiracy:
tar cf - . | curl --data-binary @- "$UPLOAD_URL"

TMPFILE=$(mktemp) ; tar cf "$TMPFILE" . ; curl --data-binary @"$TMPFILE" "$UPLOAD_URL" ; rm "$TMPFILE"
The requirements should have been clear that crash data isn't just "implement telemetry upload", a "collision snapshot" is quite clearly something that could be used as evidence in a potentially serious incident.
Unless your entire engineering process was geared towards collecting as much data that can help you, and as little data as can be used against you, you'd handle this like the crown jewels.
Also, to nit-pick, the article says the automated response "marked" the data for deletion, which means it's not automatically deleted as in your reductive example, which also never verifies the upload succeeded (at the very least put `&&` before the last `rm`).
> someone at Tesla probably took “affirmative action to delete” the copy of the data on the company’s central database, too
You're implying it's special for crashes, but we don't know that.
Saying "hey, the upload_and_delete function is used in loads of places!" doesn't free you of the responsibility that you used that function in the crash handler.
You know, if for instance you weld a gas pipeline and an X-ray machine reveals a crack in your work, you can go to jail... but if you treat car software as an app-store item, that's totally fine??
stop defending ridiculously bad design and corporate practices.
After it confirmed upload to the server? What if it was a minor collision? The car may be back on the road the same day, or get repaired and on the road next week. How long should it retain data (that is not legally required to be logged) that has already been archived, and how big does the buffer need to be?
If the car requires that a certain amount of storage is always available to write crash data to, then it doesn't matter what's in that particular area of storage. That reserved storage is always going to be unavailable for other general use.
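A toy model of that reserved-region idea (illustrative only, nothing to do with Tesla's actual firmware): routine telemetry shares an evictable quota, while crash records go to a region the eviction pass never touches.

```python
class Recorder:
    """Toy storage budget: routine telemetry is evicted oldest-first
    once its quota is full; a reserved region holds crash data and is
    never reclaimed for general use."""
    def __init__(self, telemetry_quota):
        self.telemetry_quota = telemetry_quota
        self.telemetry = []   # evictable, shared quota
        self.crash = []       # reserved, never evicted

    def log(self, record, is_crash=False):
        if is_crash:
            self.crash.append(record)
            return
        self.telemetry.append(record)
        while len(self.telemetry) > self.telemetry_quota:
            self.telemetry.pop(0)  # evict oldest routine record
```

The cost is exactly what the comment describes: the reserved slots sit idle when there is no crash data, which is the price of guaranteeing they are there when it matters.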
Then, I don’t know… Check if it was the case? Seriously, it’s unbelievable. It’s a company with a protocol to delete possibly incriminating evidence in a situation where it can be responsible for multiple deaths.
Maybe this thread will be different
Tesla recanted its employee’s testimony “after discovering evidence inconsistent with his stated recollection of events,” it said.
That’s a fancy way to say that he lied.

Props to greenthehacker. May you sip Starbucks venti-size hot chocolates for many years to come.
Plus, I'm not interested at this time in the "autopilot" "AI" stuff; I believe drivers should be responsible all the time, until such time that full legal liability is put on the manufacturer.
Don't get me wrong... I would love to call my car to come pick me up at the airport!
Are you proposing that other cars' lane-keep software is better? I guess you're going to figure this out somehow before buying your next car?
I'm proposing that less technologically advanced, less software-focused cars are less likely to unexpectedly swerve into oncoming traffic, for sure.
> I guess you're going to figure this out somehow before buying your next car?
Do you not do research and read reviews before making large possibly life-changing purchases?
You’re basically wishing diabetes for him.
Where is the anti-capitalism party? The anti-war party? The anti-corruption party? Aren't political parties supposed to represent DIFFERENT interests? Instead we're forced to choose between a party that hates immigrants and a party that hates immigrants slightly more.
And like, you can criticize Republicans, but they actually invested in Intel. Wrong company, but a step in the right direction.
In Tesla's case, the board knows that the valuation of the company is wildly irrational, and they feel that the valuation is tied to the CEO.
Maybe the Tesla CEO should get fired and prosecuted, but not because the VW case sets some kind of precedent.
Lies about capabilities, timelines, even things as frivolous as being rank one in a video game. He bought Twitter to scale his deception.
My minivan would happily do the same thing (but without the telemetry).
And the distinction is what?
I'm not serious of course. There are huge swaths of the public whose eyes would glaze over if you tried to explain it, and that's my point.
Everything else that you might be reasonably misled by? Puffery and the official position is that you really should have known better.
Just to be clear, Tesla says that the person doing the summoning should be able to see the car at all times and be able to force a stop if necessary when using Summon. At least this was the case the last time I used it.
I’m not necessarily giving a pass to Tesla here, but it doesn’t seem reasonable to throw all the blame on a manufacturer when a user doesn’t follow directions and misuses a function.
A debate could be had about whether functions should be allowed if a certain (high?) percentage of users will abuse it, but that’s a tricky discussion imho.
Almost all of the public examples I’ve seen of Autopilot or Summon being unsafe were when people were misusing it.
There are definitely examples when these functions don’t work (there’s one spot near me when my car makes the wrong choice consistently), but it’s trivial to correct if one is paying attention like you’re supposed to.
Part of the issue is that there are no regulatory guidelines for what's appropriate, and regulators have not stepped in to ensure things are as safe and free of misuse as reasonably possible. Industry standards/norms exist, but they have no legal weight and Tesla ignores them to push the line in ways that I'm personally not thrilled with.
In theory, what we should care about is which ones cause more deaths or accidents. The fact that AI accidents seem worse based on naive intuition shouldn't matter.
However, humans serve on juries so here we go... Would there be a 200 million dollar judgement against someone who tried to get a can of pop from the back seat, ran over a pedestrian and then tried to lie about e.g. whether the light was red?
Humans are simply incapable of paying attention to a task for long periods if it doesn't involve some kind of interactive feedback. You can't ask someone to watch paint dry while simultaneously expect them to have < 0.5sec reaction time to a sudden impulse three hours into the drying process.
1. AEB brakes violently to a full stop. We experience shock and dismay. What happened? Oh, a kid on a bike I didn't see. I nearly fucked up bad, good job AEB
2. AEB smoothly slows the vehicle to prevent striking the bicycle, we gradually become aware of the bike and believe we had always known it was there and our decision eliminated risk, why even bother with stupid computer systems?
Humans are really bad at accepting that they fucked up, if you give them an opportunity to re-frame their experience as "I'm great, nothing could have gone wrong" that's what they prefer, so, to deliver the effective safety improvements you need to be firm about what happened and why it worked out OK.
Oh! And also, moving within the lane is sometimes important for getting a better look at what's up ahead or behind you or expressing car "body language" that allows others to know you're probably going to change lanes soon.
I commute mainly on the highway, about 45 minutes to an hour each way, every day, and it makes a big difference for driver fatigue. I was honestly a bit surprised. Even though I'm steering, it requires less effort. I don't have my foot on the gas and I'm not having to adjust my speed constantly.
Critically, though, I do have to pay attention to my surroundings. It's not taking so much out of my driving that I can't stay engaged to what's happening around me.
> I also don't understand how one has trouble staying between the lines with minimal cognitive input after more than a few months of driving.
Once you have something assist you with that, you'll notice how much "effort" you are actually putting towards it.
FSD doesn’t lull humans into a false sense of security, humans do. FSD doesn’t let you use your phone while it’s on. This alone is an upgrade over most human beings, who think occasional quick phone usage while driving is fine (at least for themselves).
I believe that if you replaced all human drivers in the US with FSD as it exists today, fatalities would go down immediately.
Humans are not a gold standard, and the current median human driver is easy to outperform on safety.
Blocking a technology is Luddism. Blocking a company is politics.
And no you wouldn't.
Musk’s assistant peeked back in and muttered that he had another meeting. “Do you have any final thoughts?” she asked.
“Yes, I want to say one thing,” the data scientist said. He took a deep breath and turned to Musk.
“I’m resigning today. I was feeling excited about the takeover, but I was really disappointed by your Paul Pelosi tweet. It’s really such obvious partisan misinformation and it makes me worry about you and what kind of friends you’re getting information from. It’s only really like the tenth percentile of the adult population who’d be gullible enough to fall for this.”
The color drained from Musk’s already pale face. He leaned forward in his chair. No one spoke to him like this. And no one, least of all someone who worked for him, would dare to question his intellect or his tweets. His darting eyes focused for a second directly on the data scientist.
“Fuck you!” Musk growled.
https://www.techdirt.com/2024/10/25/lies-damned-lies-and-elo...

But it took him four months deeply embedded with the Republican party to come to this conclusion?
It's been blindingly obvious to anyone remotely paying attention to US politics for the last decade (or two, or more, but blindingly so, more recently).
The much more likely hypothesis in my view is that he was helping Trump because of personal conviction (only in small parts motivated by naked self-interest).
You should expect rational billionaires to tend politically right out of pure self-interest and distorted perspective alone; because the universal thing that such parties reliably do when in power is cutting tax burden on the top end.
Sending a bunch of script kiddies around and having them cut government funding and gut agencies is not really how you make evidence "vanish". How would that even work?

And, lastly, jumping in front of an audience at every opportunity and running your mouth is the absolute last thing anyone would do if the goal was to avoid prosecution. But it is perfectly in line with a person who has a very big ego and wants to achieve political goals.
I scrutinise beliefs and assumptions even if they are convenient, and you should, too.
I don't believe that Musk's main motivation for participating in the 2024 election was to avoid prosecution, because his actions are not really compatible with that. There is a much more plausible alternative hypothesis, one his actions are very compatible with: that he preferred (possibly no longer) the Republican platform out of personal conviction rather than for non-prosecution reasons.
> Labor violations, taxes, National Highway traffic safety administration investigation Tesla
Let me say it like this: billionaires generally don't have to care about minor infractions like this at all. The whole system is set up to shield them from liability, and wealth is an excellent buffer against effective prosecution regardless of who is president. There have been a plethora of infinitely more serious infractions with zero real consequences for the CEOs involved, and this is not because they participated in past presidential election campaigns. See: the VW diesel emissions fraud or, much worse, leaded gas in the last century (and what the associated industry did to keep that going).
There is a pretty recent precedent on the other side of the political spectrum: Hillary Clinton. Republicans went on and on about how she belonged in prison. Anyone with half a brain could tell that this was not going to happen, because there simply was no case. Republicans have held basically absolute power since, and, surprise, Hillary did not go to prison.
What makes you so confident that you are right about Elon, when the people back then were obviously wrong about Hillary (even without hindsight)?
So this isn't so much of an assumption, as taking him at his word.
What is your actual point? What would he stand in front of a judge for, right now, if Harris had won?
You'd have to ask Musk what he feels so guilty about that he had to buy an election.
On the left the details of your sentence structure get criticism for weeks from the public and the press (remember "garbage people"?)
He used to be quite charismatic, I believed him up until about 2017 or so. Then I figured he was just a bit greedy and maybe money got to his head but still a respectable innovator. However during 2020 or 2021 (I don’t exactly remember) he started to get quite unpleasant and making obviously short-term decisions, such as relying only on cameras for self driving because of chip shortages but dressing it up as an engineering decision.
You will basically never hear the CEO of another publicly traded company say this. I just don't believe that the same person who cares so little about his stock price that he sends a tweet like that (and the stock dropped 10% on it) is also making fraudulent statements to inflate the price. A better explanation is that he just says what he thinks without regard for the stock price, which is also something you won't see any other CEO of a publicly traded company do.
> U.S. District Judge Beth Bloom, who presided over the case, said in an order that she did not find “sufficient evidence” that Tesla’s failure to initially produce the data was intentional.
(Democrats aren't left btw)
The problem is not that the republican party used to be a conservative right party.
What I’m saying is this is not a sports competition where Musk is automatically an opponent of the Democratic party because he supported Trump. He supported Trump in order to improve his chances with the legal system because he knew Trump would be willing to be so corrupt.
Another world might be imagined in which the Democratic party was taken over in 2016 but that is not the world we live in.
This is how conservatives keep people going 'both sides!' even though they manufacture whatever is required to be that way.
And as always, let the downvotes rain. If you downvote, it would be nice if you could tell me where I'm wrong; it might change my view on things.
All this demonstrates is the term “full self driving” is meaningless.
Tesla has a SAE Level 3 [1] product they’re falsely marketing as Level 5; when this case occurred, they were misrepresenting a Level 2 system as Level 4 or 5.
If you want to see true self driving, take a Waymo. Tesla can’t do that. They’ve been lying that they can. That’s gotten people hurt and killed; Tesla should be liable for tens if not hundreds of billions for that liability.
It’s meaningless because Tesla redefines it at will. The misrepresentation causes the meaninglessness.
"Self driving" could totally mean the human's own self doing the driving.

Having SAE levels is clearer.
There's plenty wrong about the FSD terminology and SAE levels would absolutely be clearer, but I doubt more than a tiny fraction of people are confused as to the target of 'self' in the phrase 'full self driving'.
How many juries and courts have ruled adversely against self-cleaning oven makers?
Tesla has absolutely lied about its software's capabilities. From the lawsuit that went to trial:
“In 2016, the company posted a video of what appears to be a car equipped with Autopilot driving on its own.
‘The person in the driver's seat is only there for legal reasons,’ reads a caption that flashes at the beginning of the video. ‘He is not doing anything. The car is driving itself.’ (Six years later, a senior Tesla engineer conceded as part of a separate lawsuit that the video was staged and did not represent the true capabilities of the car.) [1]”
[1] https://www.npr.org/2025/07/14/nx-s1-5462851/tesla-lawsuit-a...
I just disagree that any significant number of people anywhere have thought the 'self' in 'full self driving' refers to the driver.
Also, "All this demonstrates is the term “full self driving” is meaningless" proves my point that it is not misleading.
The levels are set at the lowest common denominator. A 1960s hot rod can navigate a straight road with no user input. That doesn’t mean you can trust it to do so.
> Where did Tesla say FSD is SAE Level 5 approved?
They didn’t say that. They said it could do what a Level 5 self-driving car can do.
“In 2016, the company posted a video of what appears to be a car equipped with Autopilot driving on its own.
‘The person in the driver's seat is only there for legal reasons,’ reads a caption that flashes at the beginning of the video. ‘He is not doing anything. The car is driving itself.’ (Six years later, a senior Tesla engineer conceded as part of a separate lawsuit that the video was staged and did not represent the true capabilities of the car.) [1]”
> Tesla is full self driving with Level 2/3 supervision and in my opinion this is not missleading
This is tautology. You’re defining FSD to mean whatever Tesla FSD can do.
[1] https://www.npr.org/2025/07/14/nx-s1-5462851/tesla-lawsuit-a...
FSD cannot “do everything a Level 5 system can.” It can’t even match Waymo’s Level 4 capabilities, because it periodically requires human intervention.
But granting your premise, you’d say it’s a Level 2 or 3 system with some advanced capabilities. (Mercedes has a lane-keeping and -switching product. They’re not constantly losing court cases.)
It needs to have a crash rate equal to or ideally lower than a human driver.
Tesla does not release crash data (wonder why...), has a safety driver with a finger on the kill switch, and only lets select people take rides. Of course according to Elon always-honest-about-timelines Musk, this will all go away Soon(TM) and we will have 1M Robotaxis on the road by December 31st.
Completing a route without intervention doesn't mean much. It needs to complete thousands of routes without intervention.
Keep in mind that Waymos have selective intervention for when they get stuck. Teslas have active intervention to prevent them from mowing down pedestrians.
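Back-of-the-envelope, the "crash rate equal to a human driver" bar implies a lot of miles. A minimal sketch, assuming a human baseline of roughly one police-reported crash per 500,000 miles (the real figure varies by source) and the statistical rule of three for zero observed events:

```python
# Rule-of-three sketch: if zero crashes are observed over N miles, the
# approximate 95% upper confidence bound on the crash rate is 3 / N.
# So to credibly claim a rate at or below an assumed human baseline,
# you need N such that 3 / N <= human_rate.
human_rate = 1 / 500_000           # assumed baseline: crashes per mile
required_miles = 3 / human_rate    # crash-free miles needed for 95% confidence
print(f"{required_miles:,.0f} miles")  # prints "1,500,000 miles"
```

Completing a handful of demo routes doesn't come close to that kind of evidence, which is why per-route anecdotes don't mean much.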
The problem here isn't that people think they don't need to pay attention because their car can drive itself, and then crash. The problem is that people who know full well that they need to focus on driving just don't, because fundamentally the human brain isn't any good at paying attention 100% of the time in a situation where it can get away with not paying attention 99.9% of the time, and no amount of naming can solve this.
Probably all cars should have a black box, as both modern electronics and humans can do weird stuff.
I think there is a reasonable chance it was honestly mishandled. When considering which parts of the software impact the driving experience, logs are way down the list. They should do better though, and if they intentionally misled anyone they should be punished.
For any company to be significantly liable for a lane-keep crash, the behavior would have to be pretty egregious, IMO. All sorts of bad things can happen with most driving enhancements on any car, with common features such as overdrive, cruise control, or powerful engines, or even with non-features like manual transmissions. The liability for all of this should fall on the shoulders of the driver, most of the time, or we'd never get any cars on the road.
The unfortunate thing is that the state of the industry (or, my experience in it) currently is not set up to be able to do that cheaply nor at scale. Imagine you have tens of thousands of various unique problem scenarios to run through, and some might take several minutes of simulation to run the test. Even if your release cadence is slow, but especially if you have continuous deployment with dozens of micro-releases every day: how exactly do you cheaply scale such that simulation testing doesn't become a massive bottleneck?
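One common answer is that scenario runs are embarrassingly parallel, so wall-clock time scales with fleet size rather than scenario count. A minimal sketch of that fan-out, with a trivial placeholder standing in for a real simulator run (the `% 97` pass/fail logic is purely illustrative):

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical stand-in for a real simulation run; in practice each
# scenario might take minutes of compute rather than microseconds.
def run_scenario(scenario_id: int) -> tuple[int, bool]:
    passed = scenario_id % 97 != 0  # placeholder pass/fail logic
    return scenario_id, passed

def run_batch(scenario_ids):
    # Fan scenarios out across worker processes; total wall-clock time
    # approaches (total simulation time / number of workers).
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(run_scenario, scenario_ids))
    return [sid for sid, ok in results.items() if not ok]

if __name__ == "__main__":
    failures = run_batch(range(10_000))
    print(f"{len(failures)} scenario(s) failed")
```

Even with that, tens of thousands of multi-minute scenarios per micro-release is expensive, which is why teams usually pair parallel fan-out with test selection (only re-running scenarios plausibly affected by a change) rather than brute-forcing the full suite every time.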
Unpaid, unrewarded excellence.
Do we expect them to admit they were outright lying and wrong considering their leader is a pill popping Nazi salute making workaholic known to abuse his workers?
But today you just have a private dinner with the president and he'll wave it away.
I'm saying we do not have any way to verify the details.
Where is the court document?
Isn't this a forensics expert that testified in court? Why aren't they named? Wouldn't most forensics "hackers" be elated to be quoted?
From the article:
> The hacker, known online by his X handle @greentheonly, did not testify in the case.
It seems like a strange grey zone to have a hacker who uncovers all the information but will not testify in court. I don't see how this wouldn't introduce chain of custody problems for the evidence, which is exactly why he would ultimately need to testify. Perplexing.
EDIT - Meh, whatever. If you guys want to read articles that have zero proof and believe whatever they say because some anonymous hacker is quoted, etc. go for it. I don't get paid to educate anyone here.
> The hacker, known online by his X handle @greentheonly, did not testify in the case.
So, we have information that we cannot prove being given by a person who won't testify in court? How is this proper chain of evidence, etc.?
Other car manufacturers would never get in trouble for this because it's not even possible for them to do it in the first place!
I would deeply encourage you to re-assess whatever led you to make this comment, because you have fallen wildly off the mark here. Corporations are not your friend.