Do I understand it correctly? Crash data gets automatically transmitted to Tesla, and once it has been transmitted, it is immediately marked for deletion?

If that is actually designed like this, the only reason I could see for it would be so that Tesla has sole access to the data and can decide whether to use it or not. Which really should not work in court, but it seems it has so far.

And of course I'd expect an audit trail for the deletion of crash data on Tesla servers. But who knows whether there actually isn't one, or nobody looked into it at all.

>> Tesla has sole access to the data

All vehicle manufacturers have sole access to data. There isn't a standard for logging data, nor a standard for retrieving it. Some components log data that only the supplier has the means to read and interpret.

Mostly incorrect. At least for the US.

If your car has an EDR, what data it collects is legislated. There is not a standard interface for retrieving it, but the manufacturer is required to ensure that there is a commercially available tool for data retrieval that any third party can use.

https://www.ecfr.gov/current/title-49/subtitle-B/chapter-V/p...

Does it legislate that you can't "accidentally" delete all incriminating data?

Depends on the severity of the crash. If it meets certain thresholds (e.g., airbag deployment), the recording memory must be permanently locked in the onboard recorder.
Is the penalty for "oops, we had a bug, and it's gone," similar to the >$100M penalty they got?

If not, I assume they'll keep losing all incriminating data.

It looks like this covers "and an unloaded vehicle weight of 2,495 kg (5,500 pounds) or less". From what I understand even my F-150 wouldn't fall under this legislation
Might not cover large trucks but most sedans are under that.

Is this one of those "that's why big cars are cheaper to make" situations?

No.

The EDR is optional. If the manufacturer chooses to install it, it must meet those standards.

I was just refuting the GP's assertion that they are all proprietary and that only the manufacturer can access the data.

Unloaded vehicle weight, not gross vehicle weight.

From a quick search, it's technically possible to configure some model year F-150s to have a curb weight over 5,500 pounds with all the right options, but most are lower.

There are other regulations for larger and commercial vehicles. Not sure if there is a light truck ruleset.

Also the rules I posted are only if the manufacturer chooses to equip a recorder. They can opt not to have one.

The point I was making is that the GP was just saying shit that had no basis in fact.

There is a world of difference between "you need our special hardware and software to read the data" and "we deleted it lol".

Eh, there's a difference between sole custody (which is what Tesla has created) and sole knowledge/right to access the data.

I guess one charitable way to look at it is that after a crash, external people could get access to the car and its memory, which could potentially expose private data about the owner/driver. And besides private data, if data about the car condition was leaked to the public, it could be made to say anything depending on who presents it and how, so it's safer for the investigation if only appointed experts in the field have access to it.

This is not unlike what happens for flight data recorders after a crash. The raw data is not made public right away, if ever.

If Tesla securely stored this data and reliably turned it over to the authorities, I wouldn't argue much with this.

But the data was mostly unprotected on the devices, or it couldn't have been restored. And Tesla isn't exactly known for respecting the privacy of their customers, they have announced details about accidents publicly before.

And there is the potential conflict of interest, Tesla does have strong incentives to "lose" data that implicates Autopilot or FSD.

I would rather my cars not automatically rat me out to the authorities, personally.
I wouldn't want them to have selective memory in favor of juicing Elon's marketing scams either.

Your property isn't ratting you out. The software you license from Tesla is ratting you out.

Such a pity there is no way to get a minimal-electronics car control unit. Funny how conspicuously unimplemented functionality works.

If you do an aftermarket EV conversion the car will mostly be built using hardware that you can nearly fully reason about and won't include snitch boxes.

When you go to an electric drive train you quickly realize you need computers for things like battery conditioning, efficiency, forward/reverse, charging, route planning, stop/start, and on and on and on. It's not as simple as engine on, engine off. Tesla (rightly, IMO) chose to lean into this. It will be interesting to see what a company like Slate chooses to do.

Note I said minimal. If manufacturers were content to restrict integrated circuits to those purposes, without widespread telemetry or phoning home, or creating software lockouts, we'd meet my definition of minimal. Just what it takes to make a functioning device. Instead, we see software used as load-bearing supports for predatory or exploitative/surveillance-oriented architectures. That is not minimal to me.
I think a world where drivers are held accountable for their actions sounds like a just and probably safer world.

If you cause an accident by driving distracted or being reckless I think it's only fair that the facts are known so that you can be punished accordingly. Certainly better than someone innocent having to share responsibility for your mistake.

I think that would probably make people think twice about being reckless and even if it doesn't at least they'll get what they deserve.

That's like worrying about external people having access to the driver's wallet in the case of a fatal crash. Sure, but it's more likely that Tesla is sketchy, considering their vested interest in controlling crash data reports.

Another reason is if there are other kinds of data that get uploaded to Tesla, and the code for uploading crash data reuses that code.

For the first kind of data, deleting it from the car the moment there's confirmation that it is now stored at Tesla can make perfect sense as a mechanism to prevent the car from running out of storage space.

Of course, if the car crashed, deleting the data isn't optimal, but that it gets deleted may not be malice.

Data retention is legal's bread and butter. There's no chance such a decision is accidentally made by reusing code.

Anytime data is recorded legal is immediately asking about retention so they don't end up empty handed in front of a judge.

Every byte that car records and how it is managed will be documented in excruciating detail by legal.

> Data retention is legal's bread and butter.

As is deleting data. Also, for, say, training data for Tesla's software, I don't see legal requirements for keeping it around.

> There's no chance such a decision is accidently made by reusing code.

At Tesla? I know next to nothing about their software development practices, but from them, it wouldn't surprise me at all if this were accidental.

Edit: one scenario to easily introduce this bug is if the “delete data after upload” feature were added after the “on a crash, upload all data you have, in case the car burns down” feature.

> I don’t see legal requirements for keeping it around,

If you selectively delete data, courts can assume that data is the worst possible thing for a court case against you.

Agreed. Tesla axed their marketing department, why assume they have much of a legal department overseeing how the data uploads are managed?
> Anytime data is recorded legal is immediately asking about retention so they don't end up empty handed in front of a judge.

In my experience, they are setting automated 90-day deletion policies on email so they don't end up with surprises in discovery.

Many large companies nowadays have 90 day deletion policies.
Not sure where you've worked, but the "data retention policy" at places I worked made it abundantly clear that we were not to be retaining any data unless personally ordered to by a court. If a line manager, C-level executive or board member requested me to retain data, I could refuse under the policy.

Like many things, the retention policy was actually a destruction policy

Deleting after a certain time makes sense, certainly. Deleting immediately seems dubious to me. Though the descriptions in the article are vague enough that we might be missing some big aspects.

But in the end we wouldn't be discussing this at all if Tesla had simply handed over the data from their servers. Whether they can't find it, it isn't actually there, or they deliberately removed it affects how I view this process.

Two copies are better than one. If you immediately erase the data, you better be sure the transmitted data is safe and secure. And obviously it wasn't.

It's probably a bit like "This call may be recorded for quality purposes." That's a disclaimer that's usually required by the authorities, to let you know that you're being recorded, but it lets them off the hook, if the recording would be inconvenient to them. If it supports their side, they 100% always have it, but if it supports the caller's side, then it seems they didn't actually record that call ...so sorry...

Tesla's fairly notorious for casual treatment of customer car data (which they have a lot of). There was an article, recently, about how in-car video recordings were being passed around the office.

I know that at least one porn actress recorded a scene in a self-driving Tesla. I'll bet that recording made the rounds "for quality purposes."

As an FYI that might be helpful to some, in the case of sales, there's a positive legal obligation to maintain call recordings, so in the event of a courtroom dispute the failure to produce can lead to an adverse inference instruction.
> "This call may be recorded for quality purposes."

It's a disclaimer, but it also grants permission for you to record.

This is true.

I knew a guy who used to record all his calls with companies, and would let them know they were being recorded, if they didn't have that disclaimer.

He would say "This call is being recorded." He told me that most of the companies hung up immediately, when he said that.

I never heard him say that his recording ever did him any good, though.

> He would say "This call is being recorded." He told me that most of the companies hung up immediately, when he said that.

If someone calls you and declares that they're recording the conversation, you probably should hang up too. It's usually used as a threat by people who intend to use it against you legally somehow. Your friend may have been an exception, but there's no way for the people on the other end to know either.

If you're acting as a representative of a company on the phone, hanging up and informing your manager or legal counsel is a good idea.

As for customer service recording calls: I didn't understand this until I was on the other side of customer support. The number of people who tell lies about interactions with support is insane. These days it's mostly e-mail and therefore easy to look up. You wouldn't believe how many people would try to throw our customer support people under the bus ("Support said you'd give me a free replacement!") until they realize we can go back and check these things.

The majority (37!) of states do not require consent or notification, and there is no federal requirement (so as long as the recorder is in a one party state, the recording is legal). There is also no requirement that a person let you know that a third party is on their side of the line listening, taking notes and willing to testify.

You should just assume that any phone call with stakes is being recorded and that anything you say can be considered binding. Verbal contracts are valid almost everywhere, so what you say on the phone does have legal consequences regardless of whether it was recorded. Courts will also accept your notes about a phone call as evidence in the absence of a recording.

I recall reading about a situation where a dude sued Evil Corp based largely on phone conversations he recorded. Evil Corp tried to argue the recorded conversations were illegal because there was no explicit consent and therefore couldn't be used in the lawsuit. However, the dude counter-argued that Evil Corp's own disclaimer clearly states the call can be recorded; it just never mentioned who's doing the recording. The judge agreed with the dude and the lawsuit proceeded. I can't remember, I think "Evil Corp" was his local cable company???

As long as you are in one of the 37 states that do not require consent, the recording is valid as well.
> casual treatment of customer car data

Understatement of the year when employees are supposedly watching people in their homes from the car.

Absolutely so.

I don't know how accurate it is right now, but previously, people have had to sue Tesla to get telemetry data from their own vehicle, not to use against Tesla, but to use in accident lawsuits against other parties.

Meanwhile, without your consent, Tesla will hold press conferences using your telemetry data to throw you under the bus (even deceptively) to defend themselves. "The vehicle had told the driver to pay attention!" NHTSA, four months later: "The vehicle had issued one inattention alert, eighteen minutes prior to the collision." (emphasis mine)

Years back I bought a Model 3 infotainment unit on eBay to hack on - the amount of data contained on them is absolutely insane. After gaining access to the system I was able to get the VIN of the car and find the salvage auction for the car it came out of - it had been wrecked. I was then able to get all the location data that gets logged, showing a glimpse of the previous owner's life (house, work, stores they went to, etc) as well as the final resting place of the car. The last GPS locations logged were at the end of a "T" intersection in North Carolina - Google Street View gave a nice look at the trees the car most likely hit :>
Neat! What's the hardware like, a Linux-ish computer with SD cards? Or SSD? Which filesystem?
HW wise, the older units were Intel Atom-based CPUs (latest gen is AMD I believe?) - the hardware is typical embedded stuff - CPU + eMMC + BT/WiFi MCU + cellular daughter card. OS is Linux + Qt UI stuff. I would expect things have changed for newer HW revisions, but the previous gen did not utilize encryption (dm-crypt) so all data was unprotected at rest.
Following up on this - the actual 'self driving' part of the HW stack is an entirely separate board with 2x custom ARM chips on it. The HW/SW is much more locked down and the OS/Data is not accessible. I believe a lot of the self-driving info gleaned by types like green were built up from the first generation of Model S cars where the 'self driving' HW was much less defensible and it was much easier to gain access to it.
what does locked down mean here? this is almost the same situation that happened with DRM stuff like HDCP and the BluRay (?) encryption key (that was then posted all over the net), right?

at best the decryption key is somehow custom to each car, not reproducible (eg. it's made by some random manufacturing process), and then Tesla reads this and encrypts everything in a way so that only that key can open it.

but then do they keep every bit of decrypted data "on die"? (or they encrypt RAM too?)

It is now fairly common for embedded chips to generate on-die encryption keys for external storage (flash), and there could even be a one-time encryption key for the ROM (pushed to the on-die ROM and then wiped from manufacturing). Encrypting RAM is basically free because the chip can generate a key internally at each boot. There can even be deeper lock-downs, although obviously the deeper you go the less common it is. Getting to the on-die key can be pretty much impossible unless you can find some bootloader attacks, and then you're very much into dangerous territory. In some cases even looking for a bootloader attack can be tantamount to a violation of international arms treaties, legally.

I'd expect them to also have fleet keys for stuff like navigation data. And of course, public-key based firmware signing. That's just table stakes these days.

I suspect it's Windows, actually, and I'm pretty sure the UI is some form of C#.

They tried to recruit me for the UI. If I lived closer, I would have jumped on it. Not only was I bit of a Tesla fanboy at the time, I used to work across the street from their office and really liked that area. (Deer Creek Road in Palo Alto.)

If this were true, Tesla wouldn't have the only usable car infotainment system in the industry.
It’s Linux and the UI is Qt

> a glimpse of the previous owner's life...

...and potentially death?

I tried to dig up news articles in the area and could not find any reported fatalities - but yea, maybe?
> In the annotated video played for the jury, the vehicle detects a vehicle about 170 feet away. A subsequent frame shows it detecting a pedestrian about 116 feet away. As McGee hurtles closer and closer, the video shows the Tesla planning a path through Angulo’s truck, right where he and his girlfriend were standing behind signs and reflectors highlighting the end of the road.

So the Tesla detected the vehicle and the pedestrian, and then plans a path through them? Wow! How bad is this software?

This bad: https://vimeo.com/1093113127/e1fb6c359c

Not just detect a pedestrian and plan a path through them. Hit a pedestrian and plan a path through them to finish the job.

AI is so unlike anything we've ever seen and it's going to revolutionise the world and it's literally gonna be skynet, except it pathfinds like a counterstrike bot, just ignore that bit

we're still so early!

Just 2 more years bro, just 2 more years and we'll have self driving cars working, trust me bro

https://waymo.com/rides/san-francisco/

You can take a Waymo any time of day in SF and they provide 1000s of successful rides daily

I suspect OP was mocking Elon for still not delivering on what he said would be released "any day now" like what, a decade ago? The goalposts keep moving. They seem to be way behind (pun intended).

And they've had to spend how many man-hours engineering around shit like the above?

Not all self-driving vehicles are created equal, Tesla and Waymo are not in the same league.

I'm curious; why does it matter to you how many man-hours Waymo spends on a functional service? Would it be disqualifying if it's "too much" in your estimation?

Your point being?

I suspect it's the dog/pig problem [1]. Many of these systems have no object permanence. If a vehicle was detected at 170 feet, it may not have remained detected as the car got closer, same with the pedestrian. We all should know by now that fixed objects are filtered out by the Tesla system, whether that's stopped vehicles or signs and reflectors; it's actually pretty common for driver assistance features to filter out fixed objects, outside of parking assistance speeds... but most other brands don't have drivers that overtrust the assistance features.

[1] As popularized in the movie The Mitchells vs. the Machines: https://m.youtube.com/watch?v=LaK_8-3pWKk

Object permanence is a huge thing, which is why Tesla made a big deal about it being deployed in the stack three or four years ago.
The collision was in 2019, so three or four years ago is a little late.

I'm guessing they mean it detected a different vehicle and pedestrian but not the ones it hit. (If it was the victim I don't think they would have said "a".)
> Immediately after the wreck at 9:14 p.m. on April 25, 2019, the crucial data detailing how it unfolded was automatically uploaded to the company’s servers and stored in a vast central database, according to court documents. Tesla’s headquarters soon sent an automated message back to the car confirming that it had received the collision snapshot.

> Moments later, court records show, the data was just as automatically “unlinked” from the 2019 Tesla Model S at the scene, meaning the local copy was marked for deletion, a standard practice for Teslas in such incidents, according to court testimony.

Wow...just wow.

It is wild to me that people put so much trust in this company.

Even if Tesla hadn't squandered its EV lead and was instead positioned to be a robotics and AI superpower, is this really the corporate behavior you would want? This is some fucking Aperture Science level corporate malfeasance.

I just hate the corrupt laws mandating car dealerships.

It's pretty typical of corporations, the cult surrounding its leader notwithstanding. Not even just US corporations - the VW emissions scandal was huge, and today they are doing as well as ever. That was a big shakeup; the kind of stuff we are seeing from Tesla feels like business as usual.

The VW emissions scandal ended with an actual judgment and two prison sentences.

Miles and miles different - they were not completely untouchable the way Tesla and similar hot companies are.

No, it's not typical, because you don't see huge numbers of people defending VW's emissions fraud.

I don't defend it but the specifics never bothered me. They cheated because their cars didn't meet new emissions standards. They were fine by the standards of the year before. So a bureaucracy just declared that a legal level of emissions was now illegal.

In my mind it's like suddenly declaring that blue cars are illegal, and they made a color-shifting car that is blue except when the authorities are looking at it.

It is wrong in the sense that it is normalization of deviance, however. We live in a society, and if we don't like a law or regulation the correct response is to get it legally changed, not to ignore it and cheat.

> They cheated because their cars didn't meet new emissions standards. They were fine by the standards of the year before.

> So a bureaucracy just declared that a legal level of emissions was now illegal.

That is not at all what happened and not how emissions standards are deployed. The EPA's Tier 2 standards were finalized in 2000 to phase in during the 2004-2008 model years [1].

[1] https://www.federalregister.gov/documents/2000/02/10/00-19/c...

I didn't say you are defending it. I'm saying that "companies do bad things sometimes" is not a full description of the Tesla phenomenon that people take issue with.

Nope - the VW episode was terrible, but they faced large fines and corrected course, and it's history. I'm still slightly squeamish about accepting them, but they've turned it around, and I think I read they have just overtaken Tesla in EV sales in Europe (a self-inflicted Musk wound, of course).

I see no course correction from Tesla. Just continued and utter tripe from its CEO, team, and Musk-d-riders.

This is an on-going issue for them and, at this point, with no further change? I hope it drives them into the ground (Autopilot, natch).

You can actively criticize VW on the internet without an army of sycophants coming for you. The standard behavior of Tesla stans is that any problem with the vehicle is in fact your fault and only your fault because it would not be possible for Tesla to do something wrong. It is cult-like.

I am trying to imagine a scenario under which that is defensible and does not raise various questions including compliance, legal, retention. Not to mention, who were the people who put that code into production knowing it would do that.

edit: My point is that it was not one lone actor, who would have made that change.

Assuming no malice, I'd guess it's for space saving on the car's internal memory. If the data was uploaded off of the car, there’s no point keeping it in the car.

I think your answer is the most logical to me as a developer: we often miss simple things, the PM overlooks it, and so it goes into production this way. I don't think it's malicious. Sometimes bugs just don't become obvious until things break. We have all, sooner or later, found an unintended consequence of code that had nothing technically wrong with it.

In point of fact eMMC wear failure was an actual bug in early Tesla MCUs. They were logging too much, so when the car reached (via routine use) a certain fill level the logging started running over the same storage again and again and the chips started failing.

It's very easy to imagine a response to this being (beyond "don't log so much") an audit layer to start automatically removing redundant data.

The externalities of the company are such that people want to ascribe malice, but this is a very routine kind of thing.

This, I think, was the argument that seems most plausible to me ( without ascribing malice ). It brings its own set of issues, but even those issues make it more believable despite being problematic in their own right.

Dude we're at the point where cars are practically gathering data on the size of your big toe.

The performance ship sailed, like, 15 years ago. We're already storing about 10,000,000 times more data than we need. And that's not even an exaggeration.

That's 100% wrong. In standard practice, collision files are to be "locked", prevented from local deletion.
>> That's 100% wrong. In standard practice, collision files are to be "locked", prevented from local deletion.

I worked a year in airbag control, and they recorded a bit of information if the bags were deployed - seatbelt buckle status was one thing. But I was under the impression there was no legal requirement for that. I'm sure it helps in court when someone tries to sue because they were injured by an airbag. The argument becomes not just "the bags comply with the law" but also "you weren't wearing your seatbelt". Regardless, I'm sure Tesla has a lot more data than that and there is likely no legal requirement to keep it - especially if it's been transferred reliably to the server.

I don't think it's wrong - have you ever pushed code that was technically correct, only to find months later that you, your PM, their manager, their boss's boss, etc. all missed one edge case? You're telling me no software developer has ever done this?

You discover it the day a person dies and the relevant data is not there. Next time it's no longer a "missed edge case".

In a perfect world where developers are omnipresent and all-knowing, sure. This isn't a perfect world. Heck, how do you account for the developer who coded it leaving the company, and now that code has been untouched for half a decade if not more, because nothing is seemingly wrong with it? What then? Who realizes it needs to be changed? Nobody. The number of obscure bugs I find in legacy code that stump even the most experienced maintainers never ends.

There have been dozens of government investigations and lawsuits around Tesla crashes over the past decade (more likely hundreds or thousands, I'm just thinking of the ones that received significant national press and that I happened to notice.) In each of these cases, Tesla's data retention was questioned, sometimes by regulators and sometimes as a major legal question in the case. There is no way in 2025 that the retention process around crash data is some niche area of Tesla's code that the business leaders haven't thought about extremely carefully.

This is like saying "maybe nobody has recently looked at the ad-selection mechanism at Google." That's just not plausible.

It's not an edge case; it's wanton criminal sabotage, destruction of evidence, and it deserves a prison sentence for anyone facilitating it at any level.

This is assuming malice out of the gate without any evidence, which is not what we do here on HN. If this is in fact maliciously done, please provide evidence.

Sounds like a pretty standard telemetry upload. You transmit it, keep your copy until you get acknowledgement that it was received so you can retry if it went wrong, then delete it when it succeeds.

It’s just worded to make this sound sketchy. I bet ten bucks “unlinked” just refers to the standard POSIX call for deleting a file.
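
For what it's worth, the whole pattern fits in a few lines. Here's a sketch of the idea (function and endpoint names are hypothetical, not anything from Tesla's actual code):

```python
import os
import urllib.request


def upload_then_unlink(snapshot_path: str, endpoint: str) -> bool:
    """Upload a telemetry snapshot, deleting the local copy only after
    the server acknowledges receipt (so a failed upload can be retried)."""
    with open(snapshot_path, "rb") as f:
        payload = f.read()
    req = urllib.request.Request(endpoint, data=payload, method="POST")
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            acked = resp.status == 200
    except OSError:
        acked = False  # no ack: keep the local file for a later retry
    if acked:
        os.unlink(snapshot_path)  # the "unlinked" step from the testimony
    return acked
```

Nothing nefarious about that client-side loop by itself.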

The process of collecting and uploading the data probably confuses a lot of non-technical readers even if it worked as per standard industry practices.

The real issue is that Tesla claimed the company doesn't have the data after every copy was deleted. There's no technical reason to dispose of data related to a crash when you hold so much data on all of the cars in general.

Crash data in particular should be considered sacred, especially given the severity in this case. Ideally it should be kept both on the local black box and on the servers. But anything that leads to it being treated as instantly disposable everywhere, or even just claiming it was deleted, can only be malice.

> The real issue is that Tesla claimed the company doesn't have the data after every copy was deleted.

Exactly. The issue is deleting the data on the servers, not a completely mundane upload-then-delete procedure for phoning home. This should have been one sentence, but instead they make it read like a heist.

> The real issue is that Tesla claimed the company doesn't have the data after every copy was deleted. There's no technical reason to dispose of data related to a crash when you hold so much data on all of the cars in general.

My money is on nobody built a tool to look up the data, so they have it, they just can't easily find it.

  • ·
  • 3 hours ago
  • ·
  • [ - ]
Sketchy is that then someone takes “affirmative action to delete” the data on the server as well.

Also this is not like some process crash dump where the computer keeps running after one process crashed.

This would be like a plane's black box uploading its data to the manufacturer, then erasing itself after a crash.

I’ll bet another ten bucks that this is a generic implementation for all of their telemetry, not something special cased for crashes.

Deleting the data on the server is totally sketchy, but that’s not what the quoted section is about.

Not handling an automobile crash as a special case is the weird part. Even in the <$50 dashcams from Amazon there is a feature to mark a recording as locked so the auto-delete logic does not touch the locked file. Some of them even have automatic collision detection which locks the file for you.

How Tesla could say that detecting a collision and not locking all/any of the data is normal is just insane.
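
And that dashcam behavior is a handful of lines of code. A sketch, assuming a simple marker-file convention (all names here are hypothetical, not from any real firmware):

```python
import os

LOCK_SUFFIX = ".locked"  # marker file written by the collision detector


def lock_recording(path: str) -> None:
    """Collision detected: mark this recording so cleanup never touches it."""
    open(path + LOCK_SUFFIX, "w").close()


def cleanup(directory: str, keep_bytes: int) -> None:
    """Delete oldest recordings to reclaim space, skipping locked ones."""
    files = sorted(
        (f for f in os.listdir(directory) if not f.endswith(LOCK_SUFFIX)),
        key=lambda f: os.path.getmtime(os.path.join(directory, f)),
    )
    used = sum(os.path.getsize(os.path.join(directory, f)) for f in files)
    for name in files:  # oldest first
        if used <= keep_bytes:
            break
        full = os.path.join(directory, name)
        if os.path.exists(full + LOCK_SUFFIX):
            continue  # collision snapshot: never auto-delete
        used -= os.path.getsize(full)
        os.unlink(full)
```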

That one's easy: nobody at Tesla cares about having this feature
That might not be a good bet. https://news.ycombinator.com/item?id=45063380

I don't see anything in that comment that would apply to what I said.

That might be the case, but the article seems to indicate the system knew the data was generated from an accident. So removing it to save space on the car should now be a secondary concern.

The problem with this is that it destroys any chain of evidence. Tesla "lost" this data, in fact. You would never want your "black box" in your car delete itself after uploading to some service because the service could go down, be hacked, or the provider could decide to withhold it, forcing you into a lengthy discovery / custody battle.

This data is yours. You were going the speed limit when the accident happened and everyone else claims you were speeding. It would take forever to clear your name or worse you could be convicted if the data was lost.

This is more of "you will own nothing" crap. And mainly so Tesla can cover its ass.

It is a car. A vehicle which can be involved in a fatal accident. It is not a website. There is no "oversight", nor is it "pretty standard" to do it like that: when you don't think about what your system is actually doing (and that is the most charitable explanation), YOU ARE STILL RESPONSIBLE AS IF YOU HAD DONE IT ON PURPOSE.
One of Tesla’s things is that their software is built by software people rather than by car people. This has advantages and disadvantages.

Maybe this is not appropriate for a car, but that doesn’t excuse the ridiculous breathless tone in the quoted text. It’s the worst purple prose making a boring system sound exciting and nefarious. They could have made your point without trying to make the unlink() call sound suspicious.

I'm a software person, but I still take the car-person approach when I know I'm building a car. You have a responsibility to understand the gravity of the enterprise you undertake and to take appropriate steps given that gravity. Ignorance shouldn't be a defense, and if you don't know what you don't know, then god help you.
> their software is built by software people rather than by car people

The rogue engineer defense worked so well for VW and Dieselgate.

The issue of missing crash data was raised repeatedly. Deleting or even just claiming it was deleted can only be a mistake the first time.

I really should know better than to think that I can criticize a small part of an article without a bunch of people thinking that I'm defending everything the article discusses.
There are software people who know what they're doing - some write flight software or medical equipment software. They know how to critically think about the processes of their systems in detail.

So either the problem is Tesla engineers are fucking stupid (doubtful) or this is a poor business/product design.

My money is on the latter.

> One of Tesla’s things is that their software is built by software people rather than by car people. This has advantages and disadvantages.

So we just shrug because software boys gotta be software boys? That’s completely insane and a big reason why a lot of engineers roll their eyes about developers who want to be considered engineers.

Software engineers who work on projects that can kill people must act like the lives of other people depend on them doing their job seriously, because that is the case. Look at the aviation industry. Is it acceptable to have a bug in the avionics suite down planes at random and then delete the black boxes? It absolutely is not, and when anything like that happens shit gets serious (think 737 MAX).

The developers who designed the systems are responsible, and so are their managers who approved the changes, all the way to the top. This would not happen in a company with appropriate processes in place.

I completely agree about responsibility for life-critical systems. I wouldn't put this in that category, though. Even on airliners, black boxes aren't treated quite as critically as the stuff that'll kill you then and there. Consider the recent crash in Korea where the black box shut off because it was designed without any backup power if the engines failed, or the Alaska Airlines flight where the voice recording was overwritten because it wasn't shut off after landing.

I'd argue that this data is far less important in cars. Airline safety has advanced to the point where crashes are extremely rare and usually have a novel cause. Data recorders are important to be able to learn that cause and figure out how to prevent it from happening again. Car safety, on the other hand, is shit. We don't require rigorous training for the operators. Regulations are lax, and enforcement even more lax. Infrastructure is poor. We're unwilling to fix these things. Almost all safety efforts focus on making the vehicles more robust when collisions occur, and we're just starting to see some effort put into making the vehicles automatically avoid some collisions. What are we going to learn from this data in cars? "Driver didn't stop for a red light, hit cross traffic." "Driver was drunk." "Driver failed to see pedestrian because of bad intersection design which has been known for fifty years and never been fixed." It's useful for assigning liability but not very useful for saving lives. There's a ton of lower hanging fruit to go after before you start combing through vehicle telemetry to find unknown problems.

Even if you do consider it to be life-critical, uploading the data and then deleting the local copy once receipt is acknowledged seems completely fine, if the server infrastructure is solid. Better than only keeping a local copy, even. The issue there is that they either have inadequate controls allowing data to be deleted, or inadequate ability to retrieve data.

The artifact in question was a temporary archive created for upload. I can't think of a scenario in which you would not unlink it.
You were right in your first statement, but your follow-up is a bad assumption. I think everyone here will agree that in the case of a crash this data should be more readily available and not deleted.

Assuming it's not intentionally malicious, this is a really dumb bug that I could have also written. You zip up a bunch of data, and then you realize that if you don't delete things you've uploaded you will fill up all available storage. So what do you do? You auto-delete anything that successfully makes it to the back-end server, mark the bug fixed, and never realize that you overlooked crash data as something you might want to keep.

I could 100% see this being what is happening.
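For what it's worth, the failure mode described above is easy to make concrete. This is purely a hypothetical sketch (the function name, the idea that a predicate distinguishes crash snapshots, the filenames in the example are all made up); the point is just that an "upload then unlink" loop silently eats crash data unless someone remembers the exemption:

```python
import os

def prune_after_upload(acknowledged_paths, is_crash_snapshot):
    """Delete telemetry archives the server has acknowledged receiving.

    The bug described above is simply calling os.remove() on everything.
    The one-line fix is the predicate check exempting crash snapshots.
    """
    kept = []
    for path in acknowledged_paths:
        if is_crash_snapshot(path):
            kept.append(path)   # crash data stays on the car
        else:
            os.remove(path)     # reclaim space for routine telemetry
    return kept
```

The unpatched version is the same loop minus the predicate, which is exactly the kind of oversight that sails through review when the ticket only says "implement telemetry upload."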

And then you delete the server copy?
They didn’t delete the server copy though. That’s what this article is about.

  > Tesla later said in court that it had the data on its own servers all along
Wasn’t that after they’d been caught?
Obviously no. The behavior of Tesla in discovery of this case is ridiculous. But treating this technical detail as an element of conspiracy is also ridiculous.
If that was the only thing going wrong, yes. But when you have a pattern of conspiracy, deleting immediately on the client instead of having a ring buffer which ages out the oldest event, may be a malicious choice.
I haven't seen anything in the (characteristically terrible and vague) coverage of this case that suggests the Tesla deleted the EDR.
> I can't think of a scenario in which you would not unlink it.

Perhaps if there is some sort of crash.

Exactly. That's the last data I would ever delete from the car, if I was trying to preserve valuable data.
All of their actions point at intentionally wanting that data to disappear; they even suggested turning the car on and updating it, which everyone who's ever tried to protect important information on a computer knows is the exact opposite of what you should do.

Any competent engineer who puts more than 3 seconds of thought into the design of that system would conclude that crash data is critical evidence and as many steps as possible should be taken to ensure it's retained with additional fail safes.

I refuse to believe Tesla's engineers aren't at least competent, so this must have been done intentionally.

What if you were the guy who got a ticket that just said "implement telemetry upload via HTTP"?

Which of these is evidence of a conspiracy:

  tar cf - /telemetry | curl --data-binary @- "$UPLOAD_URL"
  TMPFILE=$(mktemp); tar cf "$TMPFILE" /telemetry; curl --data-binary @"$TMPFILE" "$UPLOAD_URL"; rm "$TMPFILE"
That's reductive.

The requirements should have been clear that crash data isn't just "implement telemetry upload". A "collision snapshot" is quite clearly something that could be used as evidence in a potentially serious incident.

Unless your entire engineering process was geared towards collecting as much data that can help you, and as little data as can be used against you, you'd handle this like the crown jewels.

Also, to nit-pick, the article says the automated response "marked" the data for deletion, which means it wasn't deleted immediately, unlike your reductive example, which doesn't even verify the upload succeeded (you'd at least want && before the last rm).

ozim · 3 hours ago
Well, if this were the EU, for GDPR purposes you could assume the contract was terminated by force majeure and you are not allowed to keep customer data past the contract. /s
fny · 4 hours ago
You left out the worst part:

> someone at Tesla probably took “affirmative action to delete” the copy of the data on the company’s central database, too

The 'wow' part is that they deleted data from the server. The part you quoted sounds like nothing unusual to me.
You don't think it's unusual that the software is designed to delete crash data from the crashed car?
The question is whether this is code that's special for crashes, or code that runs the exact same way for all data uploads, regardless of whether there's a crash.

You're implying it's special for crashes, but we don't know that.

You have it backwards. The issue is that even after the special condition of a crash, the system still allows the data to be deleted. Sure, deleting normal data is fine, but that it clearly detected a crash and did not mark the file, in that special crash mode, as do-not-delete is mind-boggling. Everyone knows that data captured by crash detection is very important. Not having code to ensure its retention is lazy at best, or malevolent design at worst. Tesla and its leadership do not deserve "at best" as our default assumption.
The crash system uses this code, therefore they chose to do something that would delete the crash data after a crash.

Saying "hey, the upload_and_delete function is used in loads of places!" doesn't free you of the responsibility that you used that function in the crash handler.

Is this a crash handler, or is it their normal telemetry upload loop?
Yes, it's a crash handler that uploads a blackbox "collision snapshot" of the entire car's state leading up to a crash. It's very well documented that Tesla does this, including in the article.
If it's not special for crashes, that's criminally bad design in a safety-critical system.

You know, if you weld a gas pipeline and an x-ray machine reveals a crack in your work, you can go to jail... but if you treat car software as an app-store item, totally fine?

Stop defending ridiculously bad design and corporate practices.

Think of the scripts that run unit tests on CI/CD. If a unit test fails, the test artifacts are uploaded to an artifact repository, and then, get this, the test runner instance is destroyed! But we don't think of that as unusual or nefarious.
No one dies when your unit test fails. Different stakes, different practices, what are all the Tesla apologists smoking here?
I don't think you can equate CI/CD unit tests and killing humans with 2 tons of metal.
And yet, that's what you get when your software org comes from that kind of devops culture. And here we are
That's because typically the test runner hasn't just crashed into another test runner at full highway speed
>> You don't think it's unusual that the software is designed to delete crash data from the crashed car?

After it confirmed upload to the server? What if it was a minor collision? The car may be back on the road the same day, or get repaired and on the road next week. How long should it retain data (that is not legally required to be logged) that has already been archived, and how big does the buffer need to be?

A very simple answer is "until the next time the car crashes": you just replace the previous crash data with the new data.

If the car requires that a certain amount of storage is always available to write crash data to, then it doesn't matter what's in that particular area of storage. That reserved storage is always going to be unavailable for other general use.
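The "reserved storage" idea above is essentially a small ring buffer. A toy sketch, assuming a made-up slot count and snapshot format (real EDR retention requirements are legislated, as discussed elsewhere in the thread):

```python
from collections import deque

class CrashSlots:
    """Fixed-size retention for crash snapshots: the newest N are kept,
    the oldest is evicted automatically, and the total storage used is
    bounded no matter how long the car stays in service."""

    def __init__(self, slots=3):
        self._ring = deque(maxlen=slots)  # deque evicts oldest when full

    def record(self, snapshot):
        self._ring.append(snapshot)

    def retained(self):
        return list(self._ring)
```

With this policy there is never a reason to delete a crash snapshot on upload: the next crash simply ages out the oldest one.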

> What if it was a minor collision?

Then, I don’t know… Check if it was the case? Seriously, it’s unbelievable. It’s a company with a protocol to delete possibly incriminating evidence in a situation where it can be responsible for multiple deaths.

The top comment on the front-page HN story about this crash several weeks ago claimed the damages award was too high

Maybe this thread will be different

  Tesla recanted its employee’s testimony “after discovering evidence inconsistent with his stated recollection of events,” it said.
That’s a fancy way to say that he lied
That is my big question in this. What happens to the specific employees who provably lied? That sounds like a big no-no, but I wonder in our twisted system if they get some kind of protection as acting on behalf of the company.
After reading the article, I am never buying a Tesla.

Props to greenthehacker. may you sip Starbuck's venti-size hot chocolates for many years to come.

Were you considering buying one before today? I'm curious as to what's different about this autopilot death compared to all the other autopilot deaths that have happened previously. Personally for me it was when the guy in Florida got decapitated when his car drove under a semitruck that made me never want to get in one again.
I wasn't opposed to buying a tesla. In my situation, I don't have the ability to charge ev's conveniently, so I'm not in the market so to speak.

Plus, I'm not interested at this time in the "autopilot" "AI" stuff; I believe drivers should be responsible all the time, until such time that full legal liability is put on the manufacturer.

Don't get me wrong... I would love to call my car to come pick me up at the airport!

I'm curious as to what's different about any of the autopilot deaths and the 40,000 non-autopilot car wreck deaths that happen every year in the US other than the fact that one is considered news and the other isn't. I'm also curious as to how this would ever affect anyone's decision to buy a Tesla given that use of autopilot / FSD is entirely optional.
The difference is we expect people to be stupid, and we expect something called "full self driving" that's advertised as safer than humans to be safe and not decapitate the driver
sneak · 3 hours ago
Autopilot is opt-in. You can drive it like any other car and never use autopilot.
This is very true, but if you had to choose between two microwaves, one of which had a button that occasionally killed people and one which did not, which would you choose? Personally I would feel better buying a microwave that doesn't have the option to decapitate me, even if I would never press it.
> one of which had a button that occasionally killed people and one which did not, which would you choose?

Are you proposing that other cars' lane-keep software is better? I guess you're going to figure this out somehow before buying your next car?

> Are you proposing that other cars' lane-keep software is better?

I'm proposing that less technologically advanced and software-focused cars are less likely to unexpectedly swerve into oncoming traffic, for sure

> I guess you're going to figure this out somehow before buying your next car?

Do you not do research and read reviews before making large possibly life-changing purchases?

all cars have a button that occasionally kills people, it’s called the accelerator pedal
I think you know that's a false equivalence, both because every control in a car has the possibility of killing you and also because every car has an accelerator pedal and I'm talking about an extra button.
Well then to go back to your microwave analogy, it's really more like choosing between a microwave with 9 buttons that can occasionally kill you or one with 10 buttons that could occasionally kill you, and that sounds about the same to me.
Why pay for the extra button in that case?
> may you sip Starbuck's venti-size hot chocolates for many years to come

You’re basically wishing diabetes for him.

So, Musk summoning the Luftwaffe like that didn’t dissuade you from buying one?
pu_pe · 4 hours ago
Volkswagen was caught cheating on its emission data and the CEO got fired, then prosecuted. Why shouldn't that be the case here?
The really weird thing about the diesel emissions scandal was that someone actually got in trouble for it. It is _rare_ for companies to be punished, particularly criminally, for that sort of thing.
Well, it wasn't a US company so..
Usually they'd get a DPA
You’d need a coalition of Democratic attorneys general to bring a case in the mould of Big Tobacco.
We'd need a third party if you'd actually want to fight american corporations. Unless you intended "small d" democratic
dagmx · 3 hours ago
Good news, the CEO of this American corporation is making a third party… (the monkey paw curls)
A third party... that sounds exactly like the other two.

Where is the anti-capitalism party? The anti-war party? The anti-corruption party? Aren't political parties supposed to represent DIFFERENT interests? Instead we're forced to choose between a party that hates immigrants and a party that hates immigrants slightly more

And like, you can criticize Republicans, but they actually invested in Intel. Wrong company, but a step in the right direction.

Firing the CEO is nominally up to the board of directors.

In Tesla's case, the board knows that the valuation of the company is wildly irrational, and they feel that the valuation is tied to the CEO.

Because it's a completely different situation. The only commonality is that both involve a car company.

Maybe the Tesla CEO should get fired and prosecuted, but not because the VW case sets some kind of precedent.

buyucu · 13 minutes ago
Volkswagen is a European company. You simply can't do that to American companies.
05 · 4 hours ago
Don't worry, once Tesla figures out secure boot nobody will be able to call their bluff and they'll be free to 'lose' crash data with the same impunity the police loses their bodycam footage.
This should be modded up higher. Exactly. The only way hackers found this is because Tesla wasn't using secure boot or encrypted images. Every embedded developer knows about MCUboot; managers just don't want the overhead because it's complicated. Once embedded devs get the OK, all embedded firmware will basically be like a Signal chat with only the manufacturer holding the keys. Heck, even PSA-compliant hardware MUST be resistant to multi-bit glitch attacks. Bye bye, hackers.
gmd63 · 4 hours ago
None of this should be surprising to anyone who has given an ounce of effort to examining Elon's character.

Lies about capabilities, timelines, even things as frivolous as being rank one in a video game. He bought Twitter to scale his deception.

h1fra · 4 hours ago
The video is staggering: going super fast toward an intersection with no visibility, a blinking signal, and a clear stop sign in sight. I hope FSD has gotten better
It's not FSD, it's Tesla's cruise control.

My minivan would happily do the same thing (but without the telemetry).

Not sure if you are saying otherwise, but for those who might get confused: this crash was with "Autopilot", not FSD, although both are definitely problematic
>> not sure if you are saying otherwise but for those who might get confused this crash was with “Autopilot” not FSD

And the distinction is what?

I'm not serious of course. There are huge swaths of the public whose eyes would glaze over if you tried to explain it, and that's my point.

I think the public can generally grasp the difference between lane assist and a Waymo-style AV, but the naming is bad, agreed
Tesla's official, wildly misleading position is that FSD is a driver assist system that should be treated no different than autopilot, not an autonomous system like Waymo. They've stated it in court, in regulatory filings, and if you open the owner's manual you'll find a bolded statement that FSD doesn't make the vehicle autonomous.

Everything else that you might be reasonably misled by? Puffery and the official position is that you really should have known better.

I've seen videos of people literally using the Tesla mobile app to 'call' their FSD-enabled car to them. Given that they coded this functionality and expose it in their app, I really don't see how Tesla can be let off by making the statement that you must officially be in front of the wheel all of the time.
csa · 23 minutes ago
> Given that they coded this functionality and expose it in their app, I really don't see how Tesla can be let off by making the statement that you must officially be in front of the wheel all of the time.

Just to be clear, Tesla says that the person doing the summoning should be able to see the car at all times and be able to force a stop if necessary when using Summon. At least this was the case the last time I used it.

I’m not necessarily giving a pass to Tesla here, but it doesn’t seem reasonable to throw all the blame on a manufacturer when a user doesn’t follow directions and misuses a function.

A debate could be had about whether functions should be allowed if a certain (high?) percentage of users will abuse it, but that’s a tricky discussion imho.

Almost all of the public examples I’ve seen of Autopilot or Summon being unsafe were when people were misusing it.

There are definitely examples where these functions don't work (there's one spot near me where my car makes the wrong choice consistently), but it's trivial to correct if one is paying attention like you're supposed to.

Summon is a separate feature from FSD.

Part of the issue is that there are no regulatory guidelines for what's appropriate, and regulators have not stepped in to ensure things are as safe and free of misuse as reasonably possible. Industry standards/norms exist, but they have no legal weight and Tesla ignores them to push the line in ways that I'm personally not thrilled with.

Fair enough; I was just trying to clarify the situation in the article. Tesla's branding is ridiculous and extremely misleading.
archive.is doesn't show the video. Does anyone else have another source?
rcpt · 2 hours ago
Cruise was shut down for less than this. TSLA won't even have a down day.

Corruption pays

GM execs shut Cruise down. I suspect they were just looking for an excuse.
sershe · 3 minutes ago
"AI drivers" glitch/fail differently than human drivers, and in ways that to humans look bizarre and easily preventable. Human driver failures (like being tired, drunk, very upset, or being on your phone, distracted by passengers, etc), to humans, look understandable, but from an AI perspective they look bizarre and easily preventable.

In theory, what we should care about is which ones cause more deaths or accidents. The fact that AI accidents seem worse based on naive intuition shouldn't matter.

However, humans serve on juries so here we go... Would there be a 200 million dollar judgement against someone who tried to get a can of pop from the back seat, ran over a pedestrian and then tried to lie about e.g. whether the light was red?

Surely this is the behavior of a company that's confident in the safety of its products!
If Tesla can’t ensure safeguarding of this information, it’s a feature that will get them in big trouble.
I'm still convinced that it being called "full self driving" is misleading marketing and really needs to stop, since it isn't according to Tesla
orlp · 5 hours ago
The marketing doesn't even matter. It either needs to be full self driving, or nothing at all. The "semi self-driving but you're still responsible when shit hits the fan" just doesn't work.

Humans are simply incapable of paying attention to a task for long periods if it doesn't involve some kind of interactive feedback. You can't ask someone to watch paint dry while simultaneously expecting them to have a <0.5 s reaction time to a sudden impulse three hours into the drying process.

I have a SAE level 2 car. Those features DO help!
Framing is crucial. Example: why is Autonomous Emergency Braking configured to brake violently to a full stop? Let's consider two scenarios. In both cases we're not paying enough attention to the outside world and are about to strike a child on a bicycle, but the AEB policy varies.

1. AEB brakes violently to a full stop. We experience shock and dismay. What happened? Oh, a kid on a bike I didn't see. I nearly fucked up bad, good job AEB

2. AEB smoothly slows the vehicle to avoid striking the bicycle. We gradually become aware of the bike and believe we had always known it was there and that our own decision eliminated the risk. Why even bother with stupid computer systems?

Humans are really bad at accepting that they fucked up, if you give them an opportunity to re-frame their experience as "I'm great, nothing could have gone wrong" that's what they prefer, so, to deliver the effective safety improvements you need to be firm about what happened and why it worked out OK.

Same. Not having to worry about keeping the car between the lines allows me to keep my focus on the other cars around me more. Offloading the cognitive load of fine tuning allows more dedication to the bigger picture.
This makes no sense to me. Driving involves all senses, not just vision - if you're not feeling what the car is doing because you're not engaged with the steering wheel what good is it to see what's around you? I also don't understand how one has trouble staying between the lines with minimal cognitive input after more than a few months of driving.

Oh! And also, moving within the lane is sometimes important for getting a better look at what's up ahead or behind you or expressing car "body language" that allows others to know you're probably going to change lanes soon.

I drive a VW with lane-keep assist and adaptive cruise control and automatic emergency braking. It won't change lanes for me, but aside from the requirements that I have my hands on the wheel, could otherwise drive itself on the highway.

I commute mainly on the highway, about 45 minutes to an hour each way every day, and it makes a big difference for driver fatigue. I was honestly a bit surprised. Even though I'm steering, it requires less effort. I don't have my foot on the gas and I'm not having to adjust my speed constantly.

Critically, though, I do have to pay attention to my surroundings. It's not taking so much out of my driving that I can't stay engaged to what's happening around me.

ghaff · 4 hours ago
I don't have personal experience but friends with personal experience have sort of shifted my thinking on the topic. They'll note they do need to stay engaged but that it is genuinely useful on long drives in particular. The control handover is definitely an issue but so is manual driving in general. Their consensus is that the current state of the art is by no means perfect but it is improved and it's not like there aren't problems with existing manual driving even with some assistive systems.
My car requires hands on the wheel to continue to operate. So I do feel it moving.

> I also don't understand how one has trouble staying between the lines with minimal cognitive input after more than a few months of driving.

Once you have something assist you with that, you'll notice how much "effort" you are actually putting towards it.

sneak · 3 hours ago
I used to think this, but then I got a Model 3. I believe that FSD is presently better than most humans driving today even when they are theoretically “fully engaged in manual driving”.

FSD doesn’t lull humans into a false sense of security, humans do. FSD doesn’t let you use your phone while it’s on. This alone is an upgrade over most human beings, who think occasional quick phone usage while driving is fine (at least for themselves).

I believe that if you replaced all human drivers in the US with FSD as it exists today, fatalities would go down immediately.

Humans are not a gold standard, and the current median human driver is easy to outperform on safety.

binoct · 28 minutes ago
So this is a really good example of small-sample-size intuition being a big challenge. Fatalities happen on the order of once per hundred million miles driven; obviously no individual's experience comes close to that. A few thousand miles of positive experience sets a statistical floor on accident rates, but that is orders of magnitude away from how safe (or unsafe, depending on how you look at it) human drivers are on average. FSD and other, less capable L2 systems are amazing at paying attention in situations where humans fail, but they also tend to have major limitations in places where humans largely do great most of the time. Your experience, as positive as it has been, doesn't support the assertion that fatalities would decrease.
If you live in a city, please send this article to your municipal and state electeds. Tesla is lobbying for the right to train and activate its Level 4 product, marketed as Level 5, in cities where Musk is deeply unpopular. There is massive political capital to be had in banning Tesla’s self-driving features on even the flimsiest grounds.
I would rather take a bullet than be a luddite who gets in the way of technological advancement on "the flimsiest of grounds."
> be a luddite who gets in the way of technological advancement on "the flimsiest of grounds”

Blocking a technology is Luddism. Blocking a company is politics.

rcpt · 1 hour ago
The choice isn't a bullet it's getting smashed by a "robo" taxi.

And no you wouldn't.

I'm perfectly happy to take the risk of getting smashed by a robo taxi, and I do every day. Getting smashed by a robo taxi is actually a bit better than getting smashed by a human driver because at least in the case of getting smashed by the robo taxi the crash data goes into improving the system in the future.
You managed to get a lot of replies, but none of them are pointing out that this 2019 case did not involve "full self driving" at all.
acdha · 5 hours ago
Why do you think Musk put so much money into helping Trump win? Tesla was under multiple investigations for safety and unkept promises, and he knew that he would not have leverage to halt those under a Harris administration.
If that was his goal he would have minded his own business after the election, instead of spouting invective posts against Trump on X.
Maken · 5 hours ago
That was after Musk realized he had alienated his entire consumer base.
And he wants to bring them back by alienating Trump while doubling down on his rhetoric?
He has an ego and narcissism, but he isn't dumb. He sees the problems but also can't admit he's wrong or anything.
> but he isn't dumb.

    Musk’s assistant peeked back in and said he had another meeting. “Do you have any final thoughts?” she asked.

    “Yes, I want to say one thing,” the data scientist said. He took a deep breath and turned to Musk.

    “I’m resigning today. I was feeling excited about the takeover, but I was really disappointed by your Paul Pelosi tweet. It’s really such obvious partisan misinformation and it makes me worry about you and what kind of friends you’re getting information from. It’s only really like the tenth percentile of the adult population who’d be gullible enough to fall for this.”

    The color drained from Musk’s already pale face. He leaned forward in his chair. No one spoke to him like this. And no one, least of all someone who worked for him, would dare to question his intellect or his tweets. His darting eyes focused for a second directly on the data scientist.

    “Fuck you!” Musk growled.

https://www.techdirt.com/2024/10/25/lies-damned-lies-and-elo...
That happened much earlier. The split with Trump happened after it finally sunk in that Republicans weren't actually interested in smaller government or cost savings, and that that was just a rhetorical weapon they deploy selectively to get elected.
I thought Musk was an amazing, brilliantly intelligent man?

But it took him four months deeply embedded with the Republican party to come to this conclusion?

It's been blindingly obvious to anyone remotely paying attention to US politics for the last decade (or two, or more, but blindingly so, more recently).

  • rcpt · 1 hour ago
He didn't really know what he was getting into until after all the appointments. I think he honestly believed that his engineering and business prowess would carry influence in the anti-education party. fellforitagain.jpg
he's not a very smart man
I mean, if he was rational, sure, that's probably what he should have done. But, y'know, he clearly _isn't_.
He was under the imaginary assumption that Trump cared about the national deficit because of his campaign speeches. Once he realized that Trump really didn't care two hoots about it and only planned to increase it even more, he had a late case of buyer's remorse.
If he thought trump would actually adhere to anything he said... or, for that matter, was the least bit consistent in what he did on a day-to-day basis, then Elon is not fit to pull his own pants up in the morning.
I'm absolutely not a fan of Trump, but this is a highly questionable assumption.

The much more likely hypothesis in my view is that he was helping Trump because of personal conviction (only in small parts motivated by naked self-interest).

You should expect rational billionaires to tend politically right out of pure self-interest and distorted perspective alone; because the universal thing that such parties reliably do when in power is cutting tax burden on the top end.

The rich already pay next to no tax thanks to loopholes, i.e. living on loans.
That’s insane. Do you remember DOGE or Elon taking his cronies into the same departments investigating him? Do you even remember?
What would Elon even be in court for? Being a politically incorrect dumbass on ex-twitter is not punishable by law.

Sending a bunch of script kiddies around and having them cut government funding and gut agencies is not really how you make evidence "vanish"; how would that even work?

And, lastly, jumping in front of an audience at every opportunity and running your mouth is the absolute last thing anyone would ever do if the goal was to avoid prosecution. But it is perfectly in line with a person who has a very big ego and wants to achieve political goals.

Labor violations, taxes, National Highway traffic safety administration investigation Tesla.. are you willfully ignorant or a troll?
I'm not a troll.

I scrutinise beliefs and assumptions even if they are convenient, and you should, too.

I don't believe that Musk's main motivation for participating in the 2024 election was to avoid prosecution, because his actions are not really compatible with that. There is a much more plausible alternative hypothesis, which his actions are very compatible with: that he preferred (possibly no longer) the Republican platform for non-prosecution reasons, out of personal conviction.

> Labor violations, taxes, National Highway traffic safety administration investigation Tesla

Let me say it like this: Billionaires generally don't have to care about minor infractions like this at all. The whole system is set up to shield them from liability, and wealth is an excellent buffer against effective prosecution regardless of who is president. There have been a plethora of infinitely more serious infractions with zero real consequences for the CEOs involved, and this is not because they participated in past presidential election campaigns. See: the VW diesel emission fraud, or, much worse, leaded gas in the last century (and what the associated industry did to keep that going).

  • rcpt · 1 hour ago
Meetings with Vlad, election interference, a pile of books at the SEC. Even rich people go to jail for these kinds of things.
Does not pass the smell test; those accusations hardly even qualify as crimes, to be honest. If we had a fully Democrat-controlled administration at every level (with every judge being a staunch Democrat), I would still give you a <5% probability of Musk ending up behind bars for any of those (!!).

There is a pretty recent precedent on the other side of the political spectrum: Hillary Clinton. Republicans went on and on about how she belonged in prison. Anyone with half a brain could tell that was not going to happen, because there simply was no case. Republicans have had basically absolute power since, and --surprise-- Hillary did not go to prison.

What makes you so confident that you are right about Elon, while the people back then were obviously wrong about Hillary (even without hindsight)?

Oh, so you're willfully ignorant.
Musk is on record saying to Tucker Carlson that “If [Trump] loses, I’m fucked.”

So this isn't so much of an assumption, as taking him at his word.

All the context I have for this is that he was grandstanding in front of a right-wing audience (after Trump was shot at, notably) and playing the "surely I would get unjustly prosecuted for my political incorrectness under the Democrats" card.

What is your actual point? What would he stand in front of a judge for, right now, if Harris had won?

My actual point is that when someone tells you who they are, you should consider believing them.

You'd have to ask Musk what he feels so guilty about that he had to buy an election.

  • rcpt · 1 hour ago
Somehow the right has this incredible ability where none of their words matter.

On the left the details of your sentence structure get criticism for weeks from the public and the press (remember "garbage people"?)

The Left was coming after Musk pretty hard before the election. I don't know the context of the quote you pulled, but it's not hard to see how, if Trump lost, there were going to be consequences for Musk.
He has committed a lot of fraud and was facing consequences for that. That has nothing to do with left or right.
Can you give an example of these many instances of fraud?
For instance he has made fraudulent statements regarding the current and near future capabilities of Tesla in an effort to inflate stock prices numerous times. He was in fact ordered by a judge to stop making such statements but he didn’t obey that.

He used to be quite charismatic, I believed him up until about 2017 or so. Then I figured he was just a bit greedy and maybe money got to his head but still a respectable innovator. However during 2020 or 2021 (I don’t exactly remember) he started to get quite unpleasant and making obviously short-term decisions, such as relying only on cameras for self driving because of chip shortages but dressing it up as an engineering decision.

I just can't take the accusation of lying to increase stock price seriously because Elon has on occasion come right out and said the stock is overvalued https://x.com/elonmusk/status/1256239815256797184

You will basically never hear another CEO of another publicly traded company say this. I just don't believe that the same person who cares so little about his stock price that he sends a tweet like that (and the stock dropped 10% on it) also is making fraudulent statements to inflate the price. A better explanation is that he just says what he thinks without regard for the stock price, which is also something you won't see any other CEO of a publicly traded company do.

We can start from the linked article?
Yeah, that's where I started, and I would recommend you do the same:

> U.S. District Judge Beth Bloom, who presided over the case, said in an order that she did not find “sufficient evidence” that Tesla’s failure to initially produce the data was intentional.

Fraud has nothing to do with vandalizing Tesla dealerships last I checked.
You are right, it doesn't. That is (wrongly) done by people who are (rightly) mad at him for making American life harder and global life more dangerous in a self-serving attempt to evade the justice system.
We were talking about Tesla's fraud cases, not some vandalism cases last time I checked.
Actually we were talking about personal consequences to Musk.
Why do you believe it has nothing to do with left or right?

(Democrats aren't left btw)

  • walls · 4 hours ago
It does actually, because only one side is interested in finding or fighting fraud.
Currently yes, but it is not inherently so. The problem with the US regime is that it is compromised, corrupt and heading towards fascism.

The problem is not that the republican party used to be a conservative right party.

What I’m saying is this is not a sports competition where Musk is automatically an opponent of the Democratic party because he supported Trump. He supported Trump in order to improve his chances with the legal system because he knew Trump would be willing to be so corrupt.

Another world might be imagined in which the Democratic party was taken over in 2016 but that is not the world we live in.

Both are interested in finding and fighting fraud, but only from the other side. Letitia James charged Trump with a rack of felonies for putting false info on a loan application. The Trump DOJ charged Letitia James for doing exactly the same. Both sides claim the charges against them are politically motivated and the charges against the other side are completely legitimate.
  • walls · 4 hours ago
> Both sides claim the charges against them are politically motivated and the charges against the other side are completely legitimate.

This is how conservatives keep people going 'both sides!' even though they manufacture whatever is required to be that way.

Please explain how one person lying on a loan application is manufactured and another person lying on a loan application is a serious felony.
I’m less convinced we need to keep bringing this up in every single thread involving Tesla.
Every time this comes up, I am on the opposite side of this. It is clearly full self driving. It can stop at red lights, cross intersections, make turns, park, drive, change lanes, brake and navigate on its own. There are various videos online where FSD managed to drive a route start to finish without a single human override. That's full self driving. It can also crash like humans "can", and that's why it needs supervision. In this sense, we as humans are also "full self driving", with a much (?) lower risk of crashing.

As always, let the downvotes rain. But if you downvote, it would be nice if you could tell me where I am wrong. It might change my view on things.

> It is clearly full self driving. It can stop at red lights, cross intersections, make turns, park, drive, change lanes, brake and navigate on its own. That's full self driving

All this demonstrates is the term “full self driving” is meaningless.

Tesla has a SAE Level 3 [1] product they’re falsely marketing as Level 5; when this case occurred, they were misrepresenting a Level 2 system as Level 4 or 5.

If you want to see true self driving, take a Waymo. Tesla can’t do that. They’ve been lying that they can. That’s gotten people hurt and killed; Tesla should be liable for tens if not hundreds of billions for that liability.

[1] https://www.sae.org/blog/sae-j3016-update

If it's a meaningless term then it can't be misrepresenting to use it.
> If it's a meaningless term then it can't be misrepresenting to use it

It’s meaningless because Tesla redefines it at will. The misrepresentation causes the meaninglessness.

The other confusion with self driving for me is, is the “self” the human or the car?

Self driving can totally means the human own-self driving.

Having SAE level is clearer.

Do you think anyone makes the same error when they see a "self cleaning" oven?

There's plenty wrong about the FSD terminology and SAE levels would absolutely be clearer, but I doubt more than a tiny fraction of people are confused as to the target of 'self' in the phrase 'full self driving'.

> Do you think anyone makes the same error when they see a "self cleaning" oven?

How many juries and courts have ruled adversely against self-cleaning oven makers?

Tesla has absolutely lied about its software's capabilities. From the lawsuit that went to trial:

“In 2016, the company posted a video of what appears to be a car equipped with Autopilot driving on its own.

‘The person in the driver's seat is only there for legal reasons,’ reads a caption that flashes at the beginning of the video. ‘He is not doing anything. The car is driving itself.’ (Six years later, a senior Tesla engineer conceded as part of a separate lawsuit that the video was staged and did not represent the true capabilities of the car.) [1]”

[1] https://www.npr.org/2025/07/14/nx-s1-5462851/tesla-lawsuit-a...

To be 100% clear: FSD and Autopilot are both terrible product names that imply promises greater than the products can deliver, and Musk / Tesla have made that worse with statements like those you reference. People have died as a result.

I just disagree that any significant number of people anywhere have thought the 'self' in 'full self driving' refers to the driver.

That's something different. The problem with the levels is that they only focus on the attention the human driver needs to give to the automation. In this sense my Kia EV6 is also Level 2/3, same as FSD. However, FSD can do so much more than my Kia EV6. That's a fact. Still the same level. Where did Tesla say FSD is SAE Level 5 approved? They would be responsible every time FSD was active during a crash. Tesla is full self driving with Level 2/3 supervision, and in my opinion this is not misleading.

Also "All this demonstrates is the term “full self driving” is meaningless." prooves my point that it is not missleading.

> FSD can do so much more than my Kia EV6. That's a fact. Still the same level

The levels are set at the lowest common denominator. A 1960s hot rod can navigate a straight road with no user input. That doesn’t mean you can trust it to do so.

> Where did Tesla say FSD is SAE Level 5 approved?

They didn’t say that. They said it could do what a Level 5 self-driving car can do.

“In 2016, the company posted a video of what appears to be a car equipped with Autopilot driving on its own.

‘The person in the driver's seat is only there for legal reasons,’ reads a caption that flashes at the beginning of the video. ‘He is not doing anything. The car is driving itself.’ (Six years later, a senior Tesla engineer conceded as part of a separate lawsuit that the video was staged and did not represent the true capabilities of the car.) [1]”

> Tesla is full self driving with Level 2/3 supervision and in my opinion this is not missleading

This is tautology. You’re defining FSD to mean whatever Tesla FSD can do.

[1] https://www.npr.org/2025/07/14/nx-s1-5462851/tesla-lawsuit-a...

How would you name a system which can do everything a Level 5 system can, but with Level 2/3 supervision? A name which a PR team would choose, without the misleading stuff, as you are saying.
> How would you name a system which can do everything a Level 5 system can, but with Level 2/3 supervision?

FSD cannot “do everything a Level 5 system can.” It can’t even match Waymo’s Level 4 capabilities, because it periodically requires human intervention.

But granting your premise, you’d say it’s a Level 2 or 3 system with some advanced capabilities. (Mercedes has a lane-keeping and -switching product. They’re not constantly losing court cases.)

But this speaks to the fundamental point the other commenter is making. A Waymo requires human intervention periodically too. It's just less than a Tesla with FSD, which is in turn less than a Tesla with Autopilot, which is dramatically less than my 20-year-old truck. At some point we assume the probability of a crash is low enough that the human driver can zone out and hope for the best, and nobody has the balls to come out and actually define an acceptable probability of serious injury or death to set a genuinely useful performance standard.
>if you could tell me where I am wrong

It needs to have a crash rate equal to or ideally lower than a human driver.

Tesla does not release crash data (wonder why...), has a safety driver with a finger on the kill switch, and only lets select people take rides. Of course according to Elon always-honest-about-timelines Musk, this will all go away Soon(TM) and we will have 1M Robotaxis on the road by December 31st.

Completing a route without intervention doesn't mean much. It needs to complete thousands of routes without intervention.

Keep in mind that Waymos have selective intervention for when they get stuck. Teslas have active intervention to prevent them from mowing down pedestrians.
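On the "crash rate equal to or lower than a human driver" point, a quick back-of-envelope sketch shows why completing a handful of routes proves little. All numbers here are illustrative assumptions (a rough US baseline of ~1 fatal crash per 100 million miles), not figures from the thread; the calculation uses the statistical "rule of three", which says that zero events observed in n trials puts the 95% upper bound on the true rate at roughly 3/n:

```python
# Assumed baseline: ~1 fatality per 100 million vehicle miles.
human_fatal_rate = 1 / 100_000_000  # fatalities per mile (illustrative)

# With 0 crashes observed in n miles, the 95% upper confidence bound
# on the true rate is approximately 3 / n ("rule of three").
# To get that bound at or below the human rate, solve 3 / n <= rate:
required_miles = 3 / human_fatal_rate
print(f"{required_miles:,.0f} crash-free miles needed")  # 300,000,000
```

In other words, under these rough assumptions a fleet would need hundreds of millions of intervention-free miles before the claim is statistically meaningful, which is why released crash data matters.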

I see this brought up a lot, but I don't think it's really an issue. It's misleading in a very technical sense, but it's so misleading that nobody is misled. Just like nobody thinks the "Magic Eraser" is actually magic. I fundamentally just don't think anybody is out there actually believing this thing is L5 full self driving, especially after all the warnings it shows you and the disclaimers when you buy it.

The problem here isn't that people think they don't need to pay attention because their car can drive itself and then crash. The problem is that people who know full well that they need to focus on driving just don't because fundamentally the human brain isn't any good at paying attention 100% of the time in a situation where you can get away with not paying attention 99.9% of the time, and naming just can't solve this.

I'd like to hear the law say that self-driving cars must collect data (video, sensor inputs, actuator outputs), and that this data belongs to the authorities when an accident happens. No exceptions. The real question is how the law is written, for it should leave no doubt about what Tesla, or any other manufacturer, is required to do.

Probably all cars should have a black box, as both modern electronics and humans can do weird stuff.
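The "permanently locked on airbag deployment" behavior that the EDR regulation describes (mentioned upthread) can be sketched in a few lines. This is a toy illustration, not Tesla's or anyone's actual implementation; the class and field names are hypothetical:

```python
from collections import deque

class EventDataRecorder:
    """Toy EDR: keeps a rolling pre-crash window of samples; a severe
    event (e.g. airbag deployment) permanently freezes the buffer so
    the crash snapshot can no longer be overwritten."""

    def __init__(self, window: int = 500):
        self._buffer = deque(maxlen=window)  # rolling pre-crash window
        self._locked = False

    def record(self, sample: dict) -> None:
        if self._locked:
            return  # locked memory is write-protected after the event
        self._buffer.append(sample)

    def trigger(self) -> None:
        """Called when a severity threshold is met, e.g. airbag fires."""
        self._locked = True

    def dump(self) -> list:
        return list(self._buffer)

edr = EventDataRecorder(window=3)
for speed in (50, 52, 55, 57):
    edr.record({"speed_kph": speed})
edr.trigger()                 # airbag deploys: snapshot is locked
edr.record({"speed_kph": 0})  # post-crash writes are silently ignored
print(edr.dump())             # the last 3 pre-crash samples survive
```

The point of the lock is exactly the dispute in this case: once the trigger fires, "oops, the data got deleted" should not be physically possible.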

That’s the case in California for autonomous vehicles (like Waymo). Tesla FSD is officially a driver-assistance system, not a fully autonomous vehicle.
good luck passing such a law in the US
So first the data wasn't there, and suddenly it is. I think the only way to prevent this in the future is to litigate against those individuals who knowingly lie for a company.
Litigate the company, not the individual. The hiding of the data was almost certainly a result of company ethos and most likely involved multiple levels of people. The maintenance tech was probably the lowest paid of everyone involved.
They should be saving every crash as a unit test to ensure it never happens again.
They should be bending over backwards to ensure they ALWAYS present the crash data even when it inconveniences them. Take the lawsuit "L" but boost public trust. It's worth noting that since this 2019 case I've not seen any legal cases where Tesla did not provide crash data.

I think there is a reasonable chance it was honestly mishandled. When considering which parts of the software impact the driving experience, logs are way down the list. They should do better though, and if they intentionally misled anyone they should be punished.

For any company to be significantly liable for a lane-keep crash, the behavior would have to be pretty egregious, IMO. All sorts of bad things can happen with most driving enhancements on any car, with common features such as overdrive, cruise control, or powerful engines, or even with non-features like manual transmissions. The liability for all of this should fall on the shoulders of the driver, most of the time, or we'd never get any cars on the road.

More than a unit test -- a whole system test. But, as a software engineer with experience in robotics and drones with a focus on software safety, yes I 100% agree.

The unfortunate thing is that the state of the industry (or, my experience in it) currently is not set up to be able to do that cheaply nor at scale. Imagine you have tens of thousands of various unique problem scenarios to run through, and some might take several minutes of simulation to run the test. Even if your release cadence is slow, but especially if you have continuous deployment with dozens of micro-releases every day: how exactly do you cheaply scale such that simulation testing doesn't become a massive bottleneck?
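The "replay every crash as a regression test" idea can be sketched with a scenario bank fanned out across workers. Everything here is hypothetical (the `simulate` stub stands in for a real physics/perception replay, and the scenario fields are invented); it only illustrates the parallelization shape, not how any real AV stack does it:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(scenario: dict) -> bool:
    """Stand-in for a real simulation run (assumed interface).
    Returns True when the planner avoids the recorded collision."""
    # A real system would replay logged sensor data here; this toy
    # check just requires braking to begin before the impact time.
    return scenario["brake_t"] < scenario["impact_t"]

def run_regression(scenarios: list, workers: int = 4) -> list:
    """Fan the scenario bank out across workers so a large suite
    doesn't serialize the release pipeline; return the failures."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(simulate, scenarios))
    return [s for s, ok in zip(scenarios, results) if not ok]

bank = [
    {"id": "crash-0017", "brake_t": 1.2, "impact_t": 2.0},
    {"id": "crash-0042", "brake_t": 2.5, "impact_t": 2.0},  # regression
]
failures = run_regression(bank)
print([s["id"] for s in failures])  # ['crash-0042']
```

The scaling problem described above is exactly why the worker pool matters: with tens of thousands of scenarios at minutes each, a serial run would block every release, so the suite has to be embarrassingly parallel across machines, not just threads.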

The description of the guy finding the data while at a Starbucks doesn't do justice to his setup shown in the photo. My dude has a seriously chaotic and awesome setup there.
  • jalk · 4 hours ago
I imagine he dumped the car data onto his laptop so that he could work on the problem in a cozier place than his messy bitcave.
This will continue until people go to prison
Can you imagine aircraft makers pulling this sort of black-box autodelete? A red-handed catch!
Huge props to the hacker (@greentheonly) ... considering the cutbacks in journalism, perhaps we're entering a world where some of the most important investigative journalism will be done by hackers.

Unpaid, unrewarded excellence.

  • buyucu · 15 minutes ago
Another day, another example of Tesla doing shady things.
the dirtiest of doggery
  • fblp · 3 hours ago
No paywall link https://archive.is/s1psp
So, will Tesla get nuked from orbit for what is obviously a serious, intentional and systemic discovery violation, or is this just OK because it's a big corp?
With everything that is wrong with Tesla, I'll be the first to say that all Tesla cars need to be taken off the roads, at least until all of their auto-driving features have been fully removed.
Of course they will say they don't have the key data.

Do we expect them to admit they were outright lying and wrong, considering their leader is a pill-popping, Nazi-salute-making workaholic known to abuse his workers?

  • rwmj · 5 hours ago
Lying to a court is usually pretty serious. Any sensible legal department will tell you never to do that, whatever your CEO says.
Unless, they assume, you've paid off the president.
  • gruez · 4 hours ago
Did they have a falling out a few months ago?
  • buyucu · 10 minutes ago
Not if your boss is super-rich.
You mean the other allegations against this same person would not be judged as anything serious, and could even be recommended?
  • lawn · 4 hours ago
Usually.

But today you just have a private dinner with the president and he'll wave it away.

Tesla said the data recorded during the crash had been lost or deleted. The hacker produced the data. The data was used in court. The verification is the data. What's your suggestion? That they fabricated the data recovered from the car?
I'm not accusing anyone of fabricating anything.

I'm saying we do not have any way to verify the details.

Where is the court document?

Isn't this a forensics expert that testified in court? Why aren't they named? Wouldn't most forensics "hackers" be elated to be quoted?

From the article:

> The hacker, known online by his X handle @greentheonly, did not testify in the case.

It seems like a strange grey zone to have a hacker that uncovers all the information but will not testify in court, etc. I don't see how this wouldn't introduce chain of custody problems, etc. for the evidence which is why he would ultimately be testifying. Perplexing.

EDIT - Meh, whatever. If you guys want to read articles that have zero proof and believe whatever they say because some anonymous hacker is quoted, etc. go for it. I don't get paid to educate anyone here.

The hacker is named, and you can see his work on the page. Read TFA.
From the article:

> The hacker, known online by his X handle @greentheonly, did not testify in the case.

  • claar · 4 hours ago
It's paywalled.
The article is paywalled, but it also only calls the person "the hacker" and then says this:

> The hacker, known online by his X handle @greentheonly, did not testify in the case.

So, we have information that we cannot prove being given by a person who won't testify in court? How is this proper chain of evidence, etc.?

Everyone is mad at Tesla but they're literally the only company collecting this kind of crash metadata.

Other car manufacturers would never get in trouble for this because it's not even possible for them to do it in the first place!

  • jdiff · 3 hours ago
People aren't mad that they collect the data, everyone does that, but that they immediately deleted it, then lied about it ever happening, in a matter of life and death.

I would deeply encourage you to re-assess whatever led you to make this comment, because you have fallen wildly off the mark here. Corporations are not your friend.

Everyone is mad because they killed people and lied about it.
Wrong. Almost all modern cars track location and tons of other data. Ford even has a screen that pops up saying basically "hey, you're opting into this, FYI".
You think other cars are recording whether they detected a person and the approximate location of the person to the car?
I certainly think that. Because, as a software engineer in robotics and drones, that's exactly what I would do. Using logs to recreate the scenario, especially for regression testing, is standard process for competent software engineers.
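The kind of logging described here, i.e. recording what the perception stack detected and where, so a scenario can be recreated later, can be sketched as a simple structured event log. This is a hypothetical illustration (the function, field names, and JSON-lines format are my assumptions, not any manufacturer's actual schema):

```python
import io
import json
import time

def log_detection(stream, obj_class, rel_x, rel_y, confidence):
    """Append one perception event as a JSON line so the scenario can
    later be reconstructed and replayed as a regression test."""
    record = {
        "t": time.time(),             # capture timestamp
        "class": obj_class,           # e.g. "pedestrian"
        "rel_pos_m": [rel_x, rel_y],  # position relative to the car, meters
        "confidence": confidence,     # detector confidence score
    }
    stream.write(json.dumps(record) + "\n")

log_stream = io.StringIO()  # a real recorder would write to persistent storage
log_detection(log_stream, "pedestrian", 12.4, -1.1, 0.93)
event = json.loads(log_stream.getvalue())
print(event["class"], event["rel_pos_m"])  # pedestrian [12.4, -1.1]
```

An append-only line-per-event format like this is deliberately simple: it survives a hard power cut mid-write (you lose at most the final line) and can be parsed without the original software.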