The launch could have gone right, and no one would have known anything about the decision process besides a few insiders. I am sure that on a project as complex and as risky as the Space Shuttle, there is always an engineer who is not satisfied with some aspect, for some valid reason. But at some point, one needs to launch the thing, despite the complaints. How many projects luckily succeeded after a reckless decision?
In many accidents, we can point at an engineer who foreshadowed it, as is the case here. Usually this is followed by blaming those who proceeded anyway. But these decision makers are in a difficult position. Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would ever get done. So, whose "no" do you ignore? Not Allan's, apparently.
I used to run the nuclear power plant on a US Navy submarine. Back around 2006, we were sailing somewhere and Sonar reported that the propulsion plant was much, much louder than normal. A few days later we didn't need Sonar to report it; we could hear it ourselves. The whole rear half of the ship was vibrating. We pulled into our destination port, and the topside watch reported that oil pools were appearing in the water near the rear end of the ship. The ship's Engineering Officer and Engineering Department Master Chief shrugged it off and said there was no need for it to "affect ship's schedule".

I was in charge of the engineering library. I had a hunch, so I went and read a manual that leadership had probably never heard of. The propeller that drives the ship is enormous. It's held in place with a giant nut, but in between the nut and the propeller is a hydraulic tire, a toroidal balloon filled with hydraulic fluid. Clearly it had ruptured. The manual said the ship was supposed to immediately sail to the nearest port, and the ship was not allowed to go back out to sea until the tire was replaced.

I showed it to the Engineer. Several officers called me in to explain it to them. And then, nothing. Ship's Schedule was not affected, and we continued on the next several-week trip. Before we got to the next port, we had to limit the ship's top speed to avoid major damage to the entire propulsion plant. We weren't able to conduct the mission we had planned because the ship was too loud. And the multiple times I asked what the hell was going on, management literally just talked over me. When we got to the next port, we had to stay there while the propeller was removed and remachined.

Management doesn't give a shit as long as it doesn't affect their next promotion.
Don't even get me started on the nuclear safety problems.
And I say that as a retired officer.
I'm guessing there's a real possibility of it ending his career, at least as a member of the military.
Generally about every month or two, a Navy commanding officer gets canned for "loss of confidence in his/her ability to command." They aren't bulletproof, quite the opposite. And leaving out cases of alcohol misuse and/or sexual misconduct, other common causes are things within the IG's purview.
Individual A reports a unique or rare problem. Everyone knows it is reported by person A.
Nothing is done.
Person A reports the problem "anonymously" to some third party, which raises a stink about the problem.
Now everyone knows that person A reported the problem to the third party.
This is why I (almost) never blow the whistle. It's an automatic career-ending move, and any protections are make-believe at best.
I'm not pretending this is some magic ticket to puppy-rainbow-fairy land where retaliation never occurs, but ultimately, how much do you care about your shipmates? I once had a CPO among my direct reports who was committing major misconduct and threatening my shop with retaliation if they reported it. I could have helped crush the bastard if someone had come forward to me, but no one ever did until I'd turned over the division to someone else, after which it blew up. Sure, he eventually got found out, but still. He was a great con artist and he pulled the wool over my eyes, but all I'd have needed was one person cluing me in to that snake.
Speaking from the senior officer level, we're not all some cabal trying to sweep shit under the rug. And the IGs, as much as they're feared, aren't out to nail people to the wall who haven't legitimately done bad things. I'm sorry you've had the experience you've had, but that doesn't mean that everyone above you was some big blue wall willing to protect folks who've done wrong.
It is all too common that such investigations don't even start because just one connecting piece of evidence is missing.
Leave a paper trail, people!
The competent don't group together, they don't need to. They can take care of themselves.
The incompetent do group together, and they use their power as a group against the competent individuals.
Basically the plot of Atlas Shrugged.
That book?
Maybe the one person who survives the first trip to Mars can practice it.
When you work on ideas instead of personalities you get to do that.
Nobody here tried to disprove my comment. Just a few people started complaining about a dead woman whose book I mentioned in passing.
They got together and argued, incompetently. Demonstrating the effect I was attempting to illustrate.
Politics is seeping where it doesn't belong.
I am very worried.
Less funny in real life. Sometimes the jizzless thing falls off with impeccably bad timing. Right when things go boom. People get injured (no deaths yet). Limp home early. Allies let down. Shipping routes elongate by a sad multiple. And it even affects you directly as you pay extra for that Dragon silicon toy you ordered from China.
The Navy's careerist, bureaucratic incompetence is staggering. No better than Putin's generals who looted the military budget and crippled his army so they couldn't even beat a military a fraction of their size.
Or with their Member of Congress, who can also go to Big Navy and ask "WTF is going on with my constituent?"
I want to be pro-nuclear energy, but I just don't think I can trust the majority of human institutions to handle nuclear plants.
What do you think about the idea of replacing all global power production with nuclear, given that it would require many hundreds of thousands of loosely-supervised people running nuclear plants?
There's also the issue of the uranium. Breeder reactors can help increase efficiency, but they bump up all the complexities/risks greatly. Relatively affordable uranium is a limited resource. We have vast quantities of it in the ocean, but it's not really feasible to extract. It's at something like 3.3 parts per billion by mass. So you'd need to filter a billion kg of ocean water to get 3.3kg of uranium. Outside of cost/complexity, you also run into ecological issues at that scale.
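For anyone who wants to check those numbers, here is the arithmetic as a quick sketch. The 3.3 ppb figure is the one cited above; the per-reactor uranium demand is my own rough ballpark assumption, not a sourced number:

```python
# Sanity check of the seawater-uranium arithmetic above.
# 3.3 ppb by mass is the figure cited in the comment; the ~200 t of
# natural uranium per large reactor per year is an assumed ballpark.

ppb = 3.3e-9                          # uranium mass fraction in seawater
seawater_per_kg_u = 1 / ppb           # kg of seawater per kg of uranium
print(f"{seawater_per_kg_u:.2e} kg seawater per kg U")   # ~3.03e+08

# i.e. filtering a billion kg of seawater yields:
print(f"{1e9 * ppb:.1f} kg of uranium")                  # 3.3 kg

# With the assumed ~200 tonnes of natural uranium per reactor-year:
tonnes_seawater = 200_000 * seawater_per_kg_u / 1000
print(f"~{tonnes_seawater:.1e} tonnes of seawater per reactor-year")  # ~6e+10
```

Tens of billions of tonnes of seawater per reactor-year is what "not really feasible to extract" looks like in numbers.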
And of course that's ignoring the fact that I also feel relatively confident that a Chernobyl scale accident every year is in no way likely, even if the entire world was 100% on nuclear
>I also feel relatively confident that a Chernobyl scale accident every year is in no way likely, even if the entire world was 100% on nuclear
I don't. Einstein's quote rings alarms in my head here. Imagine all the inane incompetence you've seen with today's energy systems: in your house, at a mechanic's, or simply the flickering lights at a restaurant. Now imagine that these same people manage small fusion/fission bombs powering such devices.
We need to value labor a lot more to trust that sort of maintenance. And the US alone isn't too good at that. Let alone most of Asia and EMEA.
Where are you getting this from?
In any case, if we look at the actual data, nuclear has been extremely safe compared to burning fossil fuels. Add up all the nuclear disasters that have ever happened and, adjusted by MWh generated, nuclear is a few orders of magnitude safer than coal.
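A rough sketch of that comparison, using commonly cited deaths-per-TWh estimates (e.g. the kind Our World in Data publishes; treat the exact values here as assumptions for illustration):

```python
# Deaths per TWh of electricity generated, rough commonly cited values
# (accidents plus air pollution); the exact numbers are assumptions.
deaths_per_twh = {
    "coal":    24.6,
    "oil":     18.4,
    "gas":      2.8,
    "nuclear":  0.03,  # includes Chernobyl and Fukushima
}

ratio = deaths_per_twh["coal"] / deaths_per_twh["nuclear"]
print(f"coal: ~{ratio:.0f}x more deaths per TWh than nuclear")  # ~820x
```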
> Now imagine that these same people manage small fusion/fission bombs powering such devices.
Sure, they'll have to be trained to the same standards as current nuclear engineers. Not trivial, but obviously not an unsolvable problem.
> Let alone most of Asia and EMEA.
Sorry, but you're just saying random things at this point.
Certainly, they still breathe the same air, don’t they?
> Nuclear meltdown does.
I'm pretty sure that nuclear meltdowns are much, much easier to avoid. Even in Chernobyl, almost all the casualties (short-term and long-term) were among the people directly handling and trying to contain the disaster. If you're rich, you're unlikely to be a fireman.
If you're the ship's captain... why not help secure a nice 'consulting' 'job' at EB after retiring from the Navy by helping EB make millions, and count on your officers not to say a peep to fleet command that the mess was preventable?
Stuff like pilots taking off with no working nav, "I'll follow the guy in front of me".
But, maybe someone can make a case that it's fundamentally the same thing?
In both cases, there were people who cared primarily about the technical truth, and those people were overruled by people who cared primarily about their own lifestyle (social status, reputation, career, opportunities, loyalties, personal obligations, etc.). In Allan McDonald's book "Truth, Lies, and O-Rings" he outlines how Morton Thiokol was having a contract renewal held over its head while NASA Marshall tried to maneuver the Solid Rocket Booster production contract to a second source, which would have seriously affected MT's bottom line and profit margins. There's a strong implication that Morton Thiokol was not able to adhere to proper technical rationale and push back on their customer (NASA), because if they had, they would have given NASA too much ammunition to argue for a second source for the SRB contracts. (In short: "you guys delayed launches over issues in your hardware, so we're only going to buy 30 SRB flight sets from you over the next 5 years instead of 60 as we initially promised.")
I have worked as a NASA contractor on similar issues, although much less directly impacting the crews than the SRBs. You are not free to pursue the smartest, most technically accurate, quickest method for fixing problems; if you introduce delays that your NASA contacts and managers don't like, they will likely ding your contract and redirect some of your company's work to your direct competitors, who you're often working with on your projects.
In Chernobyl, they scheduled a safety test to satisfy schedules imposed by central command. The plant engineers either weren't informed or couldn't push back because to go against management meant consequences for your career and family, administered by the Soviet authorities or the KGB.
Both scenarios had engineers who were not empowered to disclose or escalate issues to the highest level because of implied threats against them by non-technical authorities.
not in year, incidentally
Saying "no" is easy and safe in a world where there are absolutely no external pressures to get stuff done. Unfortunately, that world doesn't exist, and the decision makers in these kinds of situations face far more pressure to say "yes" than they do to say "no".
For example, see the article:
> The NASA official simply said that Thiokol had some concerns but approved the launch. He neglected to say that the approval came only after Thiokol executives, under intense pressure from NASA officials, overruled the engineers.
Not in my experience. Saying no to something major when others don’t see a problem can easily be career-ending.
Conceptually, the easiest answer to the risk of asserting that you are certain is simply not to assert that you are certain.
They aren't saying it's easy to face your bosses with anything they don't want to hear.
You can only fail to get this by not reading the thing you are responding to, or by deliberate obtuseness, or perhaps by being 12 years old.
Easily career-ending? That's a bit dramatic, don't you think? Someone who continuously says no to things will surely not thrive and will probably eventually leave the organization, one way or another; that much is probably right.
He was dragged by the head of state in the press and televised announcements, became untouchable overnight - lost his career, his wife died a few days later while at work at her government job in an “accident”. This isn’t in some tinpot dictatorship, rather a liberal western democracy.
So - no. Career-ending is an understatement. You piss the wrong people off, they will absolutely fuck you up.
That's either a very tall tale or the state is anything but liberal.
Abdulrahman Anwar al-Awlaki (also spelled al-Aulaqi, Arabic: عبدالرحمن العولقي; August 26, 1995 – October 14, 2011) was a 16-year-old United States citizen who was killed by a U.S. drone strike in Yemen.
The U.S. drone strike that killed Abdulrahman Anwar al-Awlaki was conducted under a policy approved by U.S. President Barack Obama
Human rights groups questioned why Abdulrahman al-Awlaki was killed by the U.S. in a country with which the United States was not at war. Jameel Jaffer, deputy legal director of the American Civil Liberties Union, stated "If the government is going to be firing Predator missiles at American citizens, surely the American public has a right to know who's being targeted, and why."
https://en.m.wikipedia.org/wiki/Killing_of_Abdulrahman_al-Aw...
Missed highlighting that part. The boy also wasn't the target of the strike anyway. Was the wife from the other user's story living with an al-Qaeda leader as well?
You are a terrorist if you don't want a foreign power to install a government* over you and you fight to prevent that?
And then further, if your dad does that you should die?
*who, it has to be noted, were literally pedophiles
>When pressed by a reporter to defend the targeted killing policy that resulted in Abdulrahman al-Awlaki's death, former White House press secretary Robert Gibbs deflected blame to the victim's father, saying, "I would suggest that you should have a far more responsible father if they are truly concerned about the well-being of their children. I don't think becoming an al-Qaeda jihadist terrorist is the best way to go about doing your business".
https://www.lemonde.fr/police-justice/article/2017/01/04/fra...
It’s the U.K. It happened under Cameron. It related to the judiciary. That’s as much as I’ll comfortably reveal.
I will also say that it was a factor in me deciding to sell my business, leave the country, and live in the woods, as what I learned from him and his experience fundamentally changed my perception of the system in which we live.
I personally am very glad to know the things he revealed.
Within NatSec, saying No to embarrassing the government is implied. Ceaselessly.
Equally implied: The brutality of the consequences for not saying no.
There's a big difference between "complaints" because something is not optimal, and warnings that something is a critical risk. The Thiokol engineers' warnings about the O-rings were in the latter category.
And NASA knew that. The summer before the Challenger blew up, NASA had reclassified the O-rings as a Criticality 1 flight risk, where they had previously been Criticality 1R. The "1" meant that if the thing happens the shuttle would be lost--as it was. The "R" meant that there was a redundant component that would do the job if the first one failed--in this case there were two O-rings, primary and secondary. But in (IIRC) June 1985, NASA was told by Thiokol that the primary O-ring was not sealing so there was effectively no redundancy, and NASA acknowledged that by reclassifying the risk. But by the rules NASA itself had imposed, a Criticality 1 (rather than 1R) flight risk was supposed to mean the Shuttle was grounded until the issue was fixed. To avoid that, NASA waived the risk right after reclassifying it.
> at some point, one needs to say "yes" and take risks, otherwise nothing would be done
Taking calculated risks when the potential payoff justifies it is one thing. But taking foolish risks, when even your own decision making framework says you're not supposed to, is quite another. NASA's decision to launch the Challenger was the latter.
Even in the case of the Challenger, no article says WHO the executive was that finally approved the launch. Nobody was jailed for gross negligence. Even Richard Feynman felt that the investigative commission was biased from the start.
So, since there is no "price to pay" for making these bad calls, they are continuously made.
> Even in the case of the Challenger, no article says WHO the executive was that finally approved the launch.
The people who made the final decision were Jerald Mason (SVP), Robert Lund, Joe Kilminster, and Calvin Wiggins (all VPs). See page 94 of the Rogers Commission report [1]: "a final management review was conducted by Mason, Lund, Kilminster, and Wiggins".
Page 108 has their full names as part of a timeline of events at NASA and Morton Thiokol.
1. https://sma.nasa.gov/SignificantIncidents/assets/rogers_comm...
Jailing people means you'll have a hard time finding people willing to make hard decisions, and when you do, you may find they're not the right people for the job.
Punishing people for making mistakes means very few will be willing to take responsibility.
It will also mean that people will desperately cover up mistakes rather than being open about it, meaning the mistakes do not get corrected. We see this in play where manufacturers won't fix problems because fixing a problem is an admission of liability for the consequences of those problems, and punishment.
Even the best, most conscientious people make mistakes. Jailing them is not going to be helpful, it will just make things worse.
That’s what responsibility is: taking lumps for making mistakes.
If I make a mistake on the road and end up killing someone, I can absolutely be held liable for manslaughter.
I don’t know if jail time is the right answer, but there absolutely needs to be some accountability.
During WW2, a B-29 crash-landed in the Soviet Union. The B-29's technology was light-years ahead of Soviet engineering. Stalin demanded that an exact replica of the B-29 be built. And that's what the engineers did. They were so terrified of Stalin that they carefully duplicated the battle damage on the original.
Be careful what you wish for when advocating criminal punishment.
That said, even then the Tu-4 wasn't a carbon copy. Because the US used imperial units for everything, the Soviets simply couldn't make it a carbon copy; they could not, e.g., source plating and wire of the exact right size. So they replaced those with the nearest metric equivalents available, erring on the side of making things thicker to ensure structural integrity, which also made the plane a little heavier than the original. Even bigger changes were made: for example, Tupolev insisted on using existing Soviet engines (!), weapons, and radios in lieu of copying the American ones. It should be noted that Stalin really did want a carbon copy originally, and Tupolev had to fight his way through each one of those decisions.
Same with the insulation damage to the tiles: it kept being ignored until Columbia barely survived. And then they fixed the part they blamed for that incident, but the tiles kept coming back damaged.
And look at what else was going wrong that day--the boosters would most likely have been lost at sea if the launch had worked.
Why do you think you want it? You don't want it.
They had every engineer involved with the booster saying launching in the cold was a bad idea, yet they started by trying to look at all the ways it could have gone wrong rather than even looking into what the engineers were screaming about.
We also have them claiming a calibration error with the pyrometer (the ancestor of the modern thermometer you point at something) even though that made other numbers not make sense.
There was a recent Netflix documentary where they interviewed him. He was the NASA manager that made the final call.
On video, he flatly stated that he would make the same decision again and had no regrets: https://www.syfy.com/syfy-wire/netflix-challenger-final-flig...
I have never seen anyone who is more obviously a psychopath than this guy.
You know that theory that people like that gravitate towards management positions? Yeah... it's this guy. Literally him. Happy to send people into the meat grinder for "progress", even though no scientific progress of any import was actually planned for the Challenger mission. It was mostly a publicity stunt!
The safety posture of that whole program, for a US human space program, seemed bad. That they chose to use solid rocket motors shows that they were willing to compromise on human safety from the get-go. There are reasons there hasn't ever been even one other human-rated craft to use solid rocket motors.
That's about to stop being true. Atlas V + Starliner has flown two people and has strap-on boosters; I think it only gets the rating once it returns from the test flight, though.
The shuttle didn't have a propulsive launch abort system, and could only abort during a percentage of its launch. The performance quoted for starliner's abort motor is "one mile up, and one mile out" based on what the presenter said during the last launch. You're plenty safe as long as you don't intersect the SRB's plume.
Not that I think it's a good thing, but...
It's mind-boggling that SLS still exists at all. At least $1B-$2B in costs whether you launch or not. A launch cadence measured in years. $2B-$4B if you actually launch it. And it doesn't even lift more than Starship, which is already launching almost quarterly. This is before we even talk about reusability, or the fact that a reusable Starship + Super Heavy launch would only use about $2M of propellant.
Every kind of meaningful success involves negotiating risk instead of seizing up in the presence of it.
The shuttle probably could have failed in 1,000 different ways and eventually, it would have. But they still went to space with it.
Some risk is acceptable. If I were to go to the moon, let’s say, I would accept a 50% risk of death. I would be happy to do it. Other people would accept a risk of investment and work hour loss. It’s not so black or white that you wouldn’t go if there’s any risk.
That's different than the engineers calculating the risk of failure at some previously-defined-as-acceptable level and giving the go-ahead.
It's possible you're just suicidal, but I'm reading this more as false internet bravado. A 50% risk of death on a mission to space is totally unacceptable. It's not like anyone will die if you don't go now; you can afford to take the time to eliminate all known risks of this magnitude.
There are many people who are ideologically-driven and accept odds of death at 50% or higher — revolutionary fighters, political martyrs, religious martyrs, explorers and adventurers throughout history (including space), environmental activists, freedom fighters, healthcare workers in epidemics of serious disease...
If that's actually true, you should see a therapist.
Given we have a track record of going to the moon with a much lower death rate than 50%, that's a proven higher risk than is necessary. That's not risking your life for a cause, because there's no cause that benefits from you taking this disproportionate risk. It's the heroism equivalent of playing Russian Roulette a little more than 3 times, and achieves about as much.
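(For reference, the Russian Roulette arithmetic behind "a little more than 3 times", assuming a standard six-chamber revolver re-spun each pull:)

```python
import math

# Pulls of a six-shooter needed for the cumulative death risk to hit 50%,
# assuming the cylinder is re-spun each time (independent 1/6 chances).
p_survive = 5 / 6
n = math.log(0.5) / math.log(p_survive)
print(f"{n:.2f} pulls")                            # ~3.80

for pulls in (3, 4):
    print(pulls, round(1 - p_survive**pulls, 3))   # 3 -> 0.421, 4 -> 0.518
```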
> There are many people who are ideologically-driven and accept odds of death at 50% or higher — revolutionary fighters, political martyrs, religious martyrs, explorers and adventurers throughout history (including space), environmental activists, freedom fighters, healthcare workers in epidemics of serious disease...
And for every one of those there's 100 keyboard cowboys on the internet who have never been within a mile of danger and have no idea how they'll react to it.
I would say I'm more ideologically driven than most, and there are a handful of causes I'd like to think I'd die for. But I'm also self-aware enough to know that it's impossible to know how I'll react until I'm actually in those situations.
And I'll reiterate: you aren't risking your life for a cause, because there's no cause that benefits from you taking a 50% mortality risk on a trip to the moon.
1. Go where others have not gone, with a 50% risk of death.
2. Wait 5 days for temperatures to rise, and go where others have not gone, with a 0.5% risk of death.
Choosing 1 isn't "different views, that is all", it's pretty objectively the wrong choice. It's not dying for a cause, it's not brave, it's not idealistic. It's pointlessly suicidal. So yes, I'm saying if you think 1 is the right choice you should see a therapist.
Notably, NASA requires all astronauts to undergo psychological evaluation, even if they aren't claiming they'll take insane unnecessary risks. So it's not like I'm the only one who thinks talking to someone before you potentially kill yourself is a good idea.
No offense, but this sounds like the sayings of someone who has never faced a 50% chance of death.
It’s a little different 3 to 4 months out. It’s way different the night before and morning. Stepping “in the arena” with odds like those, I’d say the vast, vast majority will back out and/or break down sobbing if forced.
There’s a small percent who will go forward but admit the fact that they were completely afraid- and rightly so.
Then you have that tiny percentage that are completely calm and you’d swear had a tiny smile creeping in…
I’ve never been an astronaut.
But I did spend three years in and out of Bosnia with a special operations task force.
Honestly? I have a 1% rule. The things that might have a 20-30% chance of death are clearly stupid and no one wants to do them. Things with a one-in-a-million chance probably aren't gonna catch ya. But I figure that if something does, it's gonna be an activity that I do often but has a 1% chance of going horribly wrong and that I'm ignoring.
Well, this sounds like a simple ad hominem. I appreciate your insight overall, though.
Many ideologically-driven people, like war field medics, explorers, adventurers, revolutionaries, and political martyrs take on very high risk endeavors.
I would also like to explore unknown parts of the Moon despite the risks, even if they were 50%. And I would wholeheartedly try to do it and put myself in the race, if not for a disqualifying condition.
There is also the matter of controllable and uncontrollable risks of death. The philosophy around dealing with them can be quite different. From my experience with battlefield medicine (albeit limited to a few years), I accepted the risks because the cause was worth it, the culture I was surrounded by was to accept these risks, and I could steer them by taking precautions and executing all we were taught. No one among the people I trained with thought they couldn't. And yes, many people ultimately dropped out for it, as did I.
Strapping oneself to a rocket is a very uncontrollable risk. The outcome, from an astronaut's perspective, is more random. I think that offers a certain kind of peace. We are all going to die at random times for random reasons, I think most people make peace with that, especially as they go into old age. That is a more comfortable type of risk for me.
Individuals have different views on mortality. Some are more afraid than others, some are afraid in one set of circumstances but not others. Some think that doing worthwhile things in their lives outweighs the risk of death every time. Your view is valid, but so is others'.
Something like 10 million people will accept those odds. Let's say 1 million are healthy enough to actually go to space and operate the machinery. Then let's say 99% will back out during the process. That's still 10,000 people to choose from, more than enough for NASA's needs.
The space program pilots saw it. And no, I would not have flown on those rockets. After all, NASA would "man rate" a new rocket design with only one successful launch.
So the risk of death could be estimated as 2/135 (fatal flights / total flights) or as 13/817 (total fatalities / total crew). These are around 1.5%, much lower than a 50% chance of death.
This is not to underplay their bravery. This is to state that the level of bravery to face a 1.5% chance of death is extremely high.
[0] https://en.wikipedia.org/wiki/List_of_spaceflight-related_ac... [1] https://en.wikipedia.org/wiki/List_of_Space_Shuttle_missions
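The arithmetic behind those two estimates, for anyone who wants to reproduce it (the flight and crew counts are taken as quoted above from [0] and [1]):

```python
# Two ways to estimate Shuttle risk of death, using the counts above.
fatal_flights, total_flights = 2, 135   # Challenger, Columbia / all missions
fatalities, total_crew = 13, 817        # figures as quoted in the comment

print(f"per flight:      {fatal_flights / total_flights:.2%}")  # 1.48%
print(f"per crew member: {fatalities / total_crew:.2%}")        # 1.59%
```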
The blastoff from the moon had never been tried before.
But you weren't in the shuttle, so it is irrelevant.
Do they? Even if risks are not mitigated and say risk for catastrophe can't be pushed below ie 15%? This ain't some app startup world where failure will lose a bit of money and time, and everybody moves on.
I get the political forces behind it; nobody at NASA was/is probably happy with those, and most politicians are basically clueless clowns (or worse) chasing popularity polls, often wielding massive decision-making power over matters they barely understand at surface level.
But you can't cheat reality and facts, any more than you can in a casino.
So yes, I agree that at some point you need to launch the thing.
I'd like to give you the benefit of the doubt and assume that's not what you meant; if that's the case, please clarify.
If the cost was genocide, or predictable and avoidable astronaut deaths, the risk didn't pay off; no risk analysis can justify it. This isn't "nuance" and there is no ambiguity here; it's literally killing people for personal gain.
Can you provide a quote of where I said this is "an example to be followed"? (This is a rhetorical question: I know you can't, because I said nothing remotely akin to that.)
> I'd like to cut you the benefit of the doubt and assume that's not what you meant; if that's the case, please clarify.
Sure, to clarify: I meant precisely what I said. I did not mean any of the completely different nonsense you decided to suggest I was actually saying.
If you see "colonization benefited the people doing the colonizing" and interpret it as "colonization is an example to be followed", that's entirely something wrong with your reading comprehension.
You're not "cutting me some slack" by putting words in my mouth and then saying "but maaybe you didn't mean that", and it's incredibly dishonest and shitty of you to pretend you are.
People can read the context of what you said, there's no need to quote it.
In fact, I would advise you to read the context of what you said; if you don't understand why I interpreted your comment the way I did, maybe you should read the posts chain you responded to and that will help you understand.
> Sure, to clarify: I meant precisely what I said. I did not mean any of the completely different nonsense you decided to suggest I was actually saying.
Well, what you said, you said in a context. If you weren't following the conversation, you didn't have to respond, and you can't blame other people for trying to understand your comments as part of the conversation instead of in isolation.
Even if you said what you said oblivious to context, then I have to say, if you meant exactly what you said, then my response is that a risk/reward analysis which only considers economic factors and ignores human factors is reprehensible.
There is not a situation which exists in reality where we should be talking about economic success when human lives are at stake, without considering those human lives. If you want to claim "I wasn't talking about human life", then my response is simply that you should have been talking about human life, because the actions you're discussing killed people, and that's the most important factor in understanding those events. You don't get to say "They took a risk and it paid off!" when the "risk" was wiping out entire populations--that's not a footnote or a minor detail, that's the headline.
The story of the Challenger disaster isn't "they took a risk ignoring engineers and lost reputation with the NASA client"--it's "they risked astronauts' lives to win reputation with the NASA client and ended up killing people". The story of colonizing North America isn't "they took a risk on exploring unknown territories and found massive new sources of resources"--it's "they sacrificed the lives of sailors and soldiers to explore unknown territories, and then wiped out the inhabitants and took their resources".
I'm a modern person, I have modern morality? Guilty as charged, I guess.
We're supposed to cut them some slack because they were just behaving as people of their time? Nah, I don't think so: there are plenty of examples of people at that time who were highly critical of colonialism and the treatment of indigenous people. If they can follow their moral compass so could Columbus and Cortez. "Everyone else was doing it" is not an excuse adults get to use: people are responsible for their own actions. As for their beliefs: they were wrong.
There are other points you could be making but I really hope you aren't making any of the other ones I can think of.
What examples were there of anti-colonialism in those times? What influence would they have had over the monarchies and the church of their day? What influence did they exert?
I would contend that the moral compass of Columbus and Cortez was fundamentally different than yours or mine. They were products of a world vastly different than ours. You and I have modern morality; they did not. Since we cannot change the actions of the past, we can only hold them up as examples of how people were, and how they differ from (or are similar to) what we are now.
My complaint is that, to my eyes, you are criticizing them as if we moderns have some power over their actions. How can we expect them to have behaved as we would? We cannot change them or what they did. I'm not sure that amounts to "cutting them some slack." They did what they did; we can only observe the consequences and hope to do better.
I agree, their beliefs were wrong. Nonetheless, they believed what their culture taught them to believe. Yes, people of any era are responsible for their own actions, and if they act wrongly according to their culture, they should be punished for it. But if their culture sees no harm in what they are doing, they'll be rewarded. We certainly can't punish or reward them from 500 years in the future. We can only hope that what we believe, and how we act, is better.
We moderns have power over our own actions, and those actions are informed by the past.
In this thread we're talking about risk/reward analyses, and for some reason you and other people here seem oddly insistent that we not discuss the ethical implications of the actions in question.
And all-too-often, that's what happens today: companies look at the risk/reward in financial terms and ignore any ethical concerns. I would characterize the corporate approach to ethics as "complete disregard". The business ethics classes I took in college were, frankly, reprehensible; most of the material was geared toward rebranding various corporate misdeeds as miscalculated risk/reward tradeoffs, similar to what is being done in this thread. This is a huge problem, and it's pervasive in this thread, in HN as a whole, and in corporate culture.
Your complaint is rather hypocritical: given we have no power over their actions, why defend them? Your complaint applies as much to your own position as it does to mine. What problem are you addressing?
Hmm, I don't think that's my actual intent; only that we discuss them as they apply to modern morality, not as if we can influence them to be different than what they are.
If I defend them (which I don't think I do), I do so to help explain their attitudes and actions, not to excuse them. We need to understand where they are coming from to see the differences between them and us.
The reasons that Columbus tortured, killed, and enslaved indigenous people are the same reasons for Abu Ghraib: racism, lack of oversight, and greed. The exact details have changed, but the underlying causes are alive and thriving.
Thankfully, I think humans as a whole understand these things better and I think things are improving, but if we fail to keep that understanding alive and build upon it, regress is possible. Certainly the startup culture being fostered here (HN) which looks only at profit and de-emphasizes ethics enables this sort of forgetfulness. It's not that anyone intends to cause harm, it's that they can rationalize causing harm if it's profitable. And since money makes the same people powerful, this attitude is an extremely damaging force in society. That's why I am so insistent that we not treat ethics as a side-conversation.
If the risks are high and there are a lot of warning signs, there needs to be strong punishment for pushing ahead anyway and ignoring the risk.
It is much too often that people in powerful positions are very cavalier with the lives or livelihoods of the many people they are supposed to be responsible for, and we let them get away with being reckless far too often.
> So yes, I agree that at some point you need to launch the thing.
This comment sounds an awful lot like you think the genocide of indigenous peoples is justified by the fact that the winners built empires, but I'd like to assume you intended to say something better. If you did intend to say something better, please clarify.
Or: at some point, one decides to launch the thing.
You are reducing the complaints of an engineer to something inevitable and unimportant, as if it happened before every launch, and before every launch someone decided to go ahead, because that was what was needed.
There were 8 joints. Only one failed, and only in one place: the spot being supercooled by boiloff from the LOX tank. And the leak self-sealed when it happened (there's aluminum in the fuel; hot exhaust touching cold metal deposited some of it), but the seal wasn't robust enough and eventually shook itself apart.
As it is, NASA is keeping Starliner in orbit to learn as much as possible about what's going on with the helium leaks, which are in the service module, which won't be coming back to Earth for examination.
Do they though? If the Challenger launch had been pushed back what major effects would there have been?
I do get your general point but in this specific example it seems the urgency to launch wasn’t particularly warranted.
The point is it's not just the Challenger launch. It's every launch.
An administrator would’ve missed a promotion.
What was the public sentiment of the Shuttle at the time? What was Congress sentiment? Was there organizational fear in NASA that the program would be cancelled if launches were not timely?
I'm wondering how the two astronauts on the ISS feel about that while Boeing decides if/when it is safe to return them to Earth.
https://www.cnn.com/2024/06/18/science/boeing-starliner-astr...
Astronauts (and anyone intelligent who intentionally puts themselves in a life-threatening situation) have a more nuanced understanding of risk than can be represented by a single % risk of death number. "I'm going to space with the best technology humanity has to offer keeping me safe" is a very different risk proposition from "I'm going to space in a ship with known high-risk safety issues".
Nobody can afford the best technology humanity has to offer. As one adds more 9's to the odds of success, the cost increases exponentially. There is no end to it.
The problem is when people believe that other people should pay unbounded costs for their safety.
There is not a systemic problem with people paying too much for safety in the US. In every case where a law doesn't apply, the funders are the ones with the final say in whether safety measures get funded, and as such all the incentives are for too little money spent on safety. The few cases where laws obligate employers to spend money on safety, are laws written in blood because employers prioritized profits over workers' lives.
In short, your concern is completely misplaced. I mean, can you point out a single example in history where a company went bankrupt because it spent too much money on keeping its workers safe? This isn't a problem that exists.
If you don't know why companies are going bankrupt, then you don't know that they're going bankrupt due to safety spending. So that's basically admitting your opinion isn't based in any evidence, no?
I cannot think of a more boring thing to debate. But I'm sure you'll be eager to tell me that in fact I can think of more boring things to debate, since it's so important to you that superlatives be backed up with hard evidence.
"The best humanity has to offer" seems like a slippery concept. If something goes wrong in retrospect, you can always find a reason that it wasn't the "best". How would you determine if a thing X is the best? How do you know the best is a very different thing from a "high risk" scenario?
"The best humanity has to offer" just means that people put in a good faith effort to obtain the best that they were capable of obtaining given the resources they had. It's a fuzzy concept because there aren't necessarily objective measures of good, but I think we can agree that, for example, Boeing isn't creating the best products humanity has to offer at the moment, because they have a recent history of obvious problems being ignored.
> How do you know the best is a very different thing from a "high risk" scenario?
Going to space is inherently a high risk scenario.
As for whether what you have is the best you can have: you hire subject experts and listen to them. In the case of Challenger, the subject experts said that the launch should be delayed for warmer temperatures--the best humanity had to offer in that case was delaying the launch for warmer temperatures.
It's definitely built in. The Apollo LM's skin was 0.15mm-thick aluminum, meaning almost any tiny object could've killed them.
The Space Shuttle flew with SRBs that were solid-fueled and unstoppable once lit.
Columbia had 2 ejection seats, which were eventually taken out and not installed on any other shuttle.
Huge risk is inherently the deal with space travel, at least from its inception until now.
Read the comments (especially from NASA engineers). It's pretty interesting that sometimes it takes courageous engineers to break the spell that poor managers can have on an organization.
Nixon even had an 'if they died' speech prepared, so someone had to put the odds of success at less than 100%.
For example, you could say "we'll tolerate a 30% chance of loss of life on this launch" but then an engineer comes up and says "an issue we found puts the risk of loss of life at 65%". That crosses the limit and procedure means no launch. What should not happen is "well, we're going anyway" which is what happened with Challenger.
We don’t see software engineers behave ethically in the same way.
Software is filled with so much risk taking and there’s few if any public pushback where engineers are saying the software we’ve created is harmful.
Here’s a few examples:
- Dark patterns in retail
- Cybersecurity flaws in sensitive software (e.g. Microsoft)
- Social media and mental health
- Social media and child exploitation / sex trafficking
- Social media and political murder (e.g. riots, assassinations)
This stuff is happening and it’s just shrugs all-around in the tech industry.
I have a ton of respect for those whistleblowers in AI who seem to be the small exception to this rule.
True, but that applies to cases where you take the risk yourself. If the Challenger crew had known the risk and said "fuck it, it's worth it", that would have been different from a bureaucrat chasing a promotion.
It's a joke
It's fun and easy to provide visibility into whoever called out an issue early when it does go on to cause a big failure. It gives a nice smug feeling to whoever called it out internally, the reporters who report it, and the readers in the general public who read the resulting story.
The actual important thing that we hardly ever get much visibility into is - how many potential failures were called out by how many people how many times. How many of those things went on to cause a big, or even small, failure, and how many were nothingburgers in the end. Without that, it's hard to say whether leaders were appropriately downplaying "chicken little" warnings to satisfy a market or political need, and got caught by one actually being a big deal, or whether they really did recklessly ignore a called-out legitimate risk. It's easy to say you should take everything seriously and over-analyze everything, but at some point you have to make a move, or you lose. You don't get nearly as much second-guessing when you spend too much time analyzing phantom risks and end up losing to your competitors.
I'm not sure that's important at all. Every issue raised needs to be evaluated independently. If there is strong evidence that a critical part of a space shuttle is going to fail there should be zero discussion about how many times in the past other people thought other things might go wrong when in the end nothing did. What matters is the likelihood that this current thing will cause a disaster this time based on the current evidence, not on historical statistics
The point where you "have to make a move" should only come after you can be reasonably sure that you aren't needlessly sending people to their deaths.
Phillips, Boeing, ...
Boisjoly quit Thiokol after the booster incident. McDonald stayed, and was harassed terribly by management. He took Thiokol to court at least once (possibly twice) on wrongful discrimination / termination / whistleblower claims, and won.
(TBH I'm reading this book right now - probably 2/3 the way through or so - and it's kind of weird to see something like this randomly pop up on HN today.)
McDonald’s loyalty was not beholden to his bosses, or what society or the country wanted at that moment in time. He knew a certain truth, based on facts he was aware of, and stuck by them.
This is so refreshing in today's world, where almost everyone seems to be a slave to some kind of groupthink, at least in public.
In real life we can't stand these people. They are always being difficult. They make mountains out of every molehill. They can never be reasonable even when everyone else on the team disagrees with them.
Please take a moment to reflect on how you treat inconvenient people in real life.
https://m.youtube.com/watch?v=Ljzj9Msli5o&pp=ygUZbm9ybWFsaXp...
At some point you become immune.
It's a lot harder to notice that there are 4 red lights today rather than the usual 2-3 than it is to notice 1 when there are normally exactly 0.
1. Employees not having a say in which issues to work on. This pretty much leads to the death of a project in the medium term due to near-total disregard of maintenance issues and alerts.
2. Big-team ownership of a project. When everyone is in charge, no one is. This is why I advocate for a team size of exactly two for each corporate project.
3. Employees being unreasonably pressured for time. Perhaps the right framing for employees to think about it is: "If it were their own business or product, how would they do it?" This framing, combined with the backlog, should automatically help avoid spending more time than is necessary on an issue.
If every decision an employee made on features/issues/quality/time was accompanied by how much their pay was affected, would the outcomes really be better?
The team could decide to fix all bugs before taking on a new feature, or decide that the 2-month allotment for a feature should really be three months to do it "right" without having to work nights/weekends. But would the team really decide to do that if their paycheck was reduced by 10%, or delayed for the extra month until those new features were delivered?
If all factors were included in the employee decision process, including the real world effect of revenue/profit on individual compensation from those decisions, it is not clear to me that employees would make any "better" decisions.
I would think that employees could be even more "short sighted" than senior management, as senior management likely has more at stake in terms of company reputation/equity/career than an employee who can change jobs easier, and an employee might choose not to "get those alerts to zero" if it meant they would have more immediate cash in their pocket.
And how would disagreements between team members be worked out if some were willing to forgo compensation to "do it right", and others wanted to cut even more corners?
Truly having ownership means you also carry financial risk.
Non-technical management's skill level is almost always overrated. They're almost never qualified for it. Ultimately it still is management's decision, and always will be. If however management believes that employees are incapable of serving users, then it's management's fault for assigning mismatched employees.
> how much their pay was affected
Bringing pay into this discussion is a nonsensical distraction. If an employer misses two consecutive paychecks by even 1%, that's enough reason to stop showing up for work, and potentially to sue for severance+damages, and also claim unemployment wages. There is no room for any variation here.
> Truly having ownership
It should be obvious that ownership here refers to the ownership of the technical direction, not literal ownership in the way I own a backpack that I bring to work. If true financial ownership existed, the employee would be receiving substantial equity with a real tradable market value, with the risk of losing some of this equity if they were to lose their job.
> how would disagreements between team members be worked out
As noted, there would be just two employees per project, and this ought to minimize disagreements. If disagreements still exist, this is where management can assist with direction. There should always remain room for conducting diverse experiments without having to worry about which outcomes get discarded and which get used.
---
In summary, if the suggested approach is not working, it's probably because there is significant unavoidable technical debt or the employees are mismatched to the task.
It's not either-or, the ownership is shared. As responsibility goes, the buck ultimately stops with management, but when the people in the trenches can make more of their own decisions, they'll take more pride in their work and invest accordingly in quality. Of course some managers become entirely superfluous when a team self-manages to this extent, and will fight tooth and nail to defend their fiefdom. Can't blame them, it's perfectly rational to try to keep one's job.
As for tying quality to pay in such an immediate way, I guess it depends on who's measuring what and why. Something about metrics becoming meaningless when made into a target; I believe it's called Goodhart's Law. I have big doubts as to whether it could work effectively in any large corpo shop; they're just not built for bottom-up organization.
The difference between an engineer and a manager's perspective usually comes down to their job description. An engineer is hired to get the engineering right; the reason the company pays them is for their ability to marry reality to organizational goals. The reason the company hires a manager is to set those organizational goals and ensure that everybody is marching toward them. This split is explicit for a reason: it ensures that when disagreements arise, they are explicitly negotiated. Most people are bad at making complex tradeoffs, and when they have to do so, their execution velocity suffers. Indeed, the job description for someone who is hired to make complex tradeoffs is called "executive", and they purposefully have to do no real work so that their decision-making functions only in terms of cost estimates that management bubbles up, not the personal pain that will result from those decisions.
Dysfunction arises from a few major sources:
1. There's a power imbalance between management and engineering. An engineer usually only has one project; if it fails, it often means their job, even if the outcome reality dictates is that it should fail. That gives them a strong incentive to send good news up the chain even if the project is going to fail. Good management gets around this by never penalizing bad news or good-faith project failure, but good management is actually really counterintuitive, because your natural reaction is to react to negative news with negative emotions.
2. Information is lost with every explicit communication up the chain. The information an engineer provides to management is a summary of the actual state of reality; if they passed along everything, it'd require that management become an engineer. Likewise recursively along the management chain. It's not always possible to predict which information is critical to an executive's decision, and so sometimes this gets lost as the management chain plays telephone.
3. Executives and policy-makers, by definition, are the least reality-informed people in the system, but they have the final say on all the decisions. They naturally tend to overweight the things that they are informed on, like "Will we lose the contract?" or "Will we miss earnings this quarter?"
All that said, the fact that most companies have a corporate hierarchy and they largely outcompete employee-owned or founder-owned cooperatives in the marketplace tends to suggest that even with the pitfalls, this is a more efficient system. The velocity penalty from having to both make the complex decisions and execute on them outweighs all the information loss. I experienced this with my startup: the failure mode was that I'd emotionally second-guess my executive decisions, which meant that I executed slowly on them, which meant that I didn't get enough iterations or enough feedback from the market to find product/market fit. This is also why startups that do succeed tend to be ones where the idea is obvious (to the founder at least, but not necessarily to the general public). They don't need to spend much time on complex positioning decisions, and can spend that time executing, and then eventually grow the company within the niche they know well.
This conclusion seems nonsensical. The assumption that what's popular in the market is popular because it's effective has only limited basis in reality. Hierarchical structures appear because power is naturally consolidating and most people have an extreme unwillingness to release power even when presented with evidence that it would improve their quality of life. It is true that employee-owned companies are less effective at extracting wealth from the economy, but in my experience working for both traditional and employee-owned companies, the reason is that employees care more deeply about the cause. They tend to be much more efficient at providing value to the customer and pay their employees better. The only people who lose out are the executives themselves, which is why employee-owned companies only exist when run by leaders with a passion for creating value over collecting money. And that's just a rare breed.
> Hierarchical structures appear because power is naturally consolidating and most people have an extreme unwillingness to release power even when presented with evidence that it would improve their quality of life.
Yes, and that is a fact of human nature. Moreover, many people are happy to work in a power structure if it means that they get more money to have more power over their own life than they otherwise would. The employees are all consenting actors here too: they have the option of quitting and going to an employee-owned cooperative, but most do not, because they make a lot more money in the corporate giant. (If they did all go to the employee-owned cooperative, it would drive down wages even further, since there is a finite amount of dollars coming into their market but that would be split across more employees.)
Remember the yardstick here. Capitalism optimizes for quantity of dollars transacted. The only quality that counts is the baseline quality needed to make the transaction happen. It's probably true that people who care about the cause deliver better service - but most customers don't care enough about the service or the cause for this to translate into more dollars.
As an employee and customer, you're also free to set your own value system. And most people are happier in work that is mission- & values-aligned; my wife has certainly made that tradeoff, and at various times in my life, I have too. But there's a financial penalty for it, because lots of people want to work in places that are mission-aligned but there's only a limited amount of dollars flowing into that work, so competition for those positions drives down wages.
This is an important point as it reinforces the hierarchical structure. In an economy composed of these hierarchies, a customer is often themselves buying in service of another hierarchy and will not themselves be the end user. This reduces the demand for mission-focused work in the economy, instead reinforcing the predominance of profit-focused hierarchies.
One thing stood out to me:
You note that executives are the least reality-informed and are insulated from having their decisions affect personal pain. While somewhat obvious, it also seems counterintuitive in light of the usual pay structure of these hierarchies and the usual rationale for that structure. That is, they are nearly always the highest paid actors and usually have the most to gain from company success; the reasoning often being that the pay compensates for the stress of, criticality of, or experience required for their roles. Judgments aside and ignoring the role of power (which is not at all insignificant, as already mentioned by a sibling commenter), how would you account for this?
For executive pay, the most crucial factor is the desire to align interests between shareholders and top executive management. The whole point of having someone else manage your company is so that you don't have to think about it; this only works when the CEO, on their own initiative, will take actions that benefit you. The natural inclination of most people (and certainly most people with enough EQ to lead others) is to be loyal to the people you work with; these are the folks you see day in and day out, and your power base besides. So boards need to pay enough to make the CEO loyal to their stock package rather than the people they work with, so that when it comes time to make tough decisions like layoffs or reorgs or exec departures, they prioritize the shareholders over the people they work with.
This is also why exec packages are weighted so heavily toward stock. Most CEOs don't actually make a huge salary; median cash compensation for a CEO is about $250K [3], less than a line manager at a FANG. Median total comp is $2M (and it goes up rapidly for bigger companies), so CEOs make ~90%+ of their comp in stock, again to align incentives with shareholders.
And it's why exec searches are so difficult, and why not just anyone can fill the role (which again serves to keep compensation high). The board is looking for someone whose natural personality, values, and worldview exemplify what the company needs right now, so that they just naturally do what the board (and shareholders) want. After all, the whole point is that the board does not want to manage the CEO; that is why you have a CEO.
There are some secondary considerations as well, like:
1.) It's good for executives to be financially independent, because you don't want fear of being unable to put food on the table to cloud their judgment. Same reason that founder cash-outs exist. If the right move for a CEO is to eliminate their position and put themselves out of a job, they should do it - but they usually control information flow to the board, so it's not always clear that a board will be able to fire them if that's the case. This is not as important for a line worker since if the right move is to eliminate their position and put themselves out of a job, there's an executive somewhere to lay them off.
2.) There's often a risk-compensation premium in an exec's demands: you oftentimes get thrown out of a job because of things entirely beyond your control, it can take a long time to find an equivalent exec position (very few execs get hired, after all), and if you're in a big company your reputation might be shot after a few quarters of poor business performance. Same reason why execs are often offered garden leave to find their next position after being removed from their exec role (among other reasons, like preventing theft of trade secrets and avoiding public spats between parties). So if you're smart and aren't already financially independent, you'll negotiate a package that makes you financially independent once your stocks vest.
3.) Execs very often get their demands met, because of the earlier point about exec searches being very difficult and boards looking for the unicorn who naturally does what the organization needs. Once you find a suitable candidate, you don't want to fail to get them because you didn't offer enough, so boards tend to err on the side of paying too much rather than too little.
Another thing to note is that execs may seem overpaid relative to labor, but they are not overpaid relative to owners. A top-notch hired CEO like Andy Grove got about 1-1.5% of Intel as his compensation; meanwhile, Bob Noyce and Gordon Moore got double-digit percentages, for doing a lot less work. Sundar Pichai gets $226M/year, but relative to Alphabet's market cap, this is only 0.01%. Meanwhile, Larry Page and Sergey Brin each own about 10%. PG&E's CEO makes about $17M/year, but this is only 0.03% of the company's market cap.
There's a whole other essay to write about why owners might prefer to pay a CEO more to cut workers' wages vs. just pay the workers more, but it can basically be summed up as "there's one CEO and tens of thousands of workers, so any money you pay the CEO is dwarfed by any delta in compensation changes to the average worker. Get the CEO to cut wages and he will have saved many multiples of his comp package."
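To make that last point concrete, here's a back-of-envelope sketch (all figures are hypothetical, purely for illustration):

    #include <stdio.h>

    int main(void) {
        /* Hypothetical figures for illustration only */
        const int    workers     = 10000;    /* headcount                */
        const double wage_cut    = 3000.0;   /* savings per worker/year  */
        const double ceo_package = 10e6;     /* total CEO comp per year  */

        double saved = workers * wage_cut;   /* $30M/yr back to owners   */
        printf("saved $%.0fM vs CEO comp $%.0fM -> %.1fx the package\n",
               saved / 1e6, ceo_package / 1e6, saved / ceo_package);
        return 0;
    }

Even a modest per-worker cut dwarfs the package, which is the asymmetry that essay would turn on.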
[1] https://a16z.com/ones-and-twos/
[2] https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...
[3] https://chiefexecutive.net/wp-content/uploads/2014/08/CEO_Co...
The CEO reports to the board. But his immediate and second-tier reports are also judged by the employees. The thought is that this will give them pause before they embark on their next round of my-way-or-the-highway decision making. The most egregious directors, who push out line employees in favor of their cronies, will be fired under this evaluation.
You say this, but as someone who's run a large platform organization, that hasn't been my experience. Sure, some employees, maybe you, care about things like bringing alerts back to zero, but a large number are indifferent and a small number are outright dismissive.
This is informed not just by individual personality but also by culture.
Not too long ago I pointed out a bug in the code of someone I was reviewing, and instead of fixing it they said, "Oh okay, I'll look out for bugs like that when I write code in the future," then proceeded to merge and deploy their unchanged code. And in that case I'm their manager, not a peer or someone from another team; they have all the incentive in the world to stop and fix the problem. It was purely a cultural thing: in their mind their code worked 'good enough', so why not deploy it and just take the feedback as something that could be done better next time.
The real punchline was this: the trader confused the field for entering share quantity with the one for notional (dollar) quantity, but because some European markets were closed, the system had weird fallback logic that set the price per share to $1, so the confirmation back to the trader was... the correct number of dollars he expected.
So awful system design leads to numerous useless alerts, false confirmations, and ultimately huge errors.
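A minimal sketch of how a fallback like that can silently confirm a bad order (field names and numbers are hypothetical; the real system was no doubt more complex):

    #include <stdio.h>

    /* Hypothetical order-entry logic. The trader meant to enter a dollar
       (notional) amount but typed it into the shares field. */
    static double price_per_share(int market_open) {
        if (market_open)
            return 42.50;   /* real quote */
        return 1.0;         /* weird fallback: market closed => $1/share */
    }

    int main(void) {
        double intended_notional = 250000.0;  /* what the trader wanted  */
        double shares_entered    = 250000.0;  /* typed into wrong field  */
        int    market_open       = 0;         /* European market closed  */

        double confirmed = shares_entered * price_per_share(market_open);

        /* With the $1 fallback, the confirmation echoes back exactly the
           dollar figure the trader expected, masking the error. */
        printf("confirmed notional: $%.2f (expected $%.2f)\n",
               confirmed, intended_notional);
        return 0;
    }

The one sanity check that could have caught the fat-finger instead reassured him, because the fallback price made the wrong units agree.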
That requires that you have good employees, which can be as rare as good management.
Once things are relatively clean, it's easy to see if new code/changes trip a warning. Often unexpected warnings are a sign of subtle bugs or at least use of undefined behaviors. Sorting those out when they come up is a heck of a lot easier than tracing a bug report back to the same warning.
Doesn't win me fans, but I sleep well.
https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#inde...
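A toy example of the kind of subtle bug a clean warning baseline surfaces immediately; built with the usual flags (gcc -Wall -Werror), this refuses to compile:

    #include <stdio.h>

    int main(void) {
        int errors = 0;
        /* Classic subtle bug: assignment instead of comparison. -Wall
           flags it ("suggest parentheses around assignment used as
           truth value") and -Werror turns the warning into a hard stop. */
        if (errors = 1)
            printf("errors detected: %d\n", errors);
        return 0;
    }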
Since I do Swift, these days, in Xcode, I use project settings, instead.
I also like to treat warnings as errors.
Forces me to be circumspect.
Do we know how many times people noticed a problem, it launched anyway and everything was fine?
For me, one of the more interesting side-bar discussions is the one around the decision to test the boosters horizontally despite that not being an operational configuration. This resulted in flexing of the joints that was not at all similar to the flight configuration and hindered identification of the weaknesses of the original "field joint" design.
https://www.youtube.com/watch?v=n-wqAbVqZyg
---
1. In case anyone doesn't know, they use the actual recovered Shuttle casings on SLS, but use an extra "middle" section to make it 5 sections in length instead of the Shuttle's 4 sections. In the future they'll move to "BOLE" boosters which won't use previously flown Shuttle parts.
That is correct. I believe they added:
* An extra seal
* A "J-Leg" carved into the insulation[1] that acts as a sort of pre-seal
> I guess/hope the opportunity was seized to make a design that would be less sensitive to orientation.
I guess we'll see how things shake out.
---
1. https://www.nasaspaceflight.com/2020/12/artemis-1-schedule-u...
My understanding is that they are only hot fired horizontally.
Presumably there are many tests done at the component level, although it's questionable whether it makes sense to call those tests horizontal or vertical at that point.
Neither Ride nor Kutyna could risk exposing the information themselves, but no one could question or impeach Feynman.
[0] https://www.youtube.com/watch?v=raMmRKGkGD4
[1] https://lithub.com/how-legendary-physicist-richard-feynman-h...
It reminds me a bit of Jeffrey Sachs, who chaired the Lancet covid enquiry, saying he was told the furin cleavage site insertion experiments had already been done before a grant application was put in to do them. Also presumably based on some source who didn't want to be exposed.
Sadly, human society has a blind spot when it comes to inventions with short-term benefits but long-term detriments.
I would love to see more programmers refusing to work on AI.
Refusing to work on something is not newsworthy. I refuse to work on (or use) AI, ads and defence projects, and I'm far from being the only one.
Though, let him who is without sin cast the first stone: having worked in the gambling sector myself, and being ashamed of it now, I'd be standing on a high horse. So I prefer to focus on the projects themselves rather than on the people and what they choose to do for a living.
One person, no. A hundred, who knows. Ten thousand programmers united together not to work on something? Now we're getting somewhere. A hundred thousand? Newsworthy.
Unfortunately it looks like that might also be refusing to eat right now. We'll see how much longer my principles can hold out. Being gaslit into an unjustified termination has me in a cynical kind of mood anyway. Doing a little damage might be cathartic.
> Doing a little damage might be cathartic.
Please avoid the regret. Do something kind instead. Take the high road. Take care of yourself.
People have wondered how so many people ever participated in any historical atrocity. This same mechanism is used for all of them.
I think you have already listed one big reason that isn't a high-minded principle. You want to make money. There may be others.
It's always wonderful when you can make a lot of money doing things you love to do. It stinks when you have to choose between what you are exceptionally good at doing and what your principles allow.
If only somebody could figure out how the talents of all the people in your situation could be used to restore housing affordability. Would you take a 70% paycut and move to Nebraska if it allowed you to keep all your other principles?
As you say, kindness isn't hiring. I'd love to see an HN discussion of all the good causes that need founders. It would be wonderful to have some well known efforts where the underemployed could devote some energy while they licked their wounds. It might even be useful to have "Goodworks Volunteer" fill that gap in employment history on your resume.
How do we get a monthly "What good causes need volunteers?" post on HN?
You're right, it doesn't. It feels more like an attempt to minimize. The rest was you spitballing some unrelated idea.
There’s no benefit to your ideological goals in kneecapping yourself.
There’s nothing morally wrong with using or building AI, or gambling.
> There’s nothing morally wrong with ... building... gambling.
Say you're building a gambling system and building that system well. What does that mean? More people use it? Those people access it more? Access it faster? Gamble more? Gamble faster?
It creates and feeds addiction.
Environment, religion, war, medicine; everything has a personal line associated with it.
Let’s not confuse the issue. Just because you find something distasteful doesn’t mean it’s bad or morally problematic.
We let adults make their own choices.
2) If you were devising more efficient sugar delivery systems for those acquaintances as a means to take every last cent they had, knowing they'd be unable to resist, you're complicit in robbing and killing them.
Hint: most of my consulting rate is not about writing fizzbuzz. Some clients pay me without even having to write a single line of code.
If people valued ad viewing (e.g. for product decisions), we’d have popular websites dedicated to ad viewing. What we have instead is an industry dedicated to the idea of forcefully displaying ads to users in the least convenient places possible, and we still all go to reddit to decide what to buy.
There was a site dedicated to ad viewing once (adcritic.com maybe?) and it was great! People just viewed, voted, and commented on ads. Even though it was about the entertainment/artistic value of advertising and not about making product decisions.
Although the situation is likely to change somewhat in the near future, advertising has been one of the few ways that many artists have been able to make a comfortable living. Lying to and manipulating people in order to take more of their money or influence their opinions isn't exactly honorable work, but it has resulted in a lot of art that would not have happened otherwise.
Sadly the website was plagued by legal complaints from extremely shortsighted companies who should have been delighted to see their ads reach more people, and it was eventually forced to shut down after it got too expensive to run (streaming video in those days was rare, low quality, and costly), although I have to wonder how much of that came from poor choices (like paying for insanely expensive Super Bowl ads). The website was bought up and came back requiring a subscription, at which point I stopped paying any attention to it.
I'd consider word-of-mouth a type of advertising as well.
When it's totally organic, the person doing the promotion doesn't stand to gain anything. It's less about trying to get you to buy something and usually just people sharing what they enjoy/has worked for them, or what they think you'd enjoy/would work for you. It's the intent behind the promotion, and who is intended to benefit from it, that makes the difference between friendly/helpful promotion and adversarial/harmful promotion.
Word of mouth can be a form of advertising that is directly funded by a manufacturer or a distributor too, though. Social media influencers are one example, but companies will also pay people to pretend to casually/organically talk up their products/services to strangers at bars/nightclubs, conferences, events, etc., just to take advantage of the increased level of trust we put in word-of-mouth promotion, exactly because of the assumption that the intent is to be helpful rather than to sell.
There is a lot of support in favor. Consider:
- Ads are typically NOT consumed enthusiastically or even sought out (which would be the case if they were strongly mutually beneficial). There are such cases but they are a very small minority.
- If product introduction were the primary purpose, then repeatedly bombarding people with well-known brands would not make sense. But that is exactly what is being done (and paid for!) the most. Coca Cola does not pay for you to learn that they produce soft drinks. They pay for ads to shift your spending/consumption habits.
- Ads are an inherently flawed and biased way to learn about products, because there is no incentive whatsoever to inform you of flaws, or even to represent price/quality tradeoffs honestly.
For example, many years ago I worked on military AI for my country. I eventually decided I couldn't square that with my ethics and left. But I consider advertising to be (often non-consensual) mind control designed to keep consumers in a state of perpetual desire and I'd sooner go back to building military AI than work for an advertising company, no matter how many brilliant engineers work there.
I would agree with you if ads were just that. Here's our product, here's what it does, here's what it costs. Unfortunately ads sell the sizzle, not the steak. That has been the advertising mantra for probably 100 years.
And that's not saying that AI is going to be great or even good or even overly positive, it's just streets ahead of the alternatives I mentioned.
AI has the potential to go in many directions, at least some of which could be societally 'good'.
Advertising is, has always been, and likely always will be, societally 'bad'.
This differentiation, if nothing else.
(Yes, my opinion on advertising is militantly one sided. I'm unlikely to be convinced otherwise, but happy for, and will read, contrary commentary).
It turns evil in the presence of corruption. Taking bribes in exchange for power. Government should never make rules for money, but for the good of the people. And advertising should never offer exposure for sale - exposure should only result from merit.
Build an advertising system with integrity - in which truthful and useful ads are not just a minimum requirement but an honest aspiration and the only way to the top of the heap. Build an advertising system focused, not on exploiting the viewer, but on serving them - connecting them with goods and services and ideas and people and experiences that are wanted and that promote their health and thriving.
I won't work on advertising as it's currently understood... I agree it's evil. But I'd work on that, and I think it would be a great good.
Advertising is similar, of course, and the only thing that has kept the internet working as a communications medium in spite of advertising is that it was generally labeled, constrained, enclosed, spam-filtered, etc.
The AI of today is being applied to help advertising escape those shackles, and in doing so, harm the ability to communicate.
A lot of engineers in the US who are both right out of school and are on visas need to find and keep work within a couple months of graduation and can’t be picky with their job or risk getting deported.
We have a fair number of indentured programmers.
Personally I don't work on advertising/tracking, anything highly polluting, weapons technology, high-interest loans, scams and scam-adjacent tech, and so on.
But there are enough engineers without such concerns to keep the snooping firms, the missile firms, and the payday loan firms in business.
Now, there are often limits to that flexibility and lines some simply will not cross, but survival and self-preservation tend to take precedence and push those limits. E.g., I can't imagine ever resorting to cannibalism, but Flight 571, with the passengers stranded in the Andes, makes a good case for me bending that line. I'd be a lot more willing to work for some scam or in high-interest loans, for example, before resorting to cannibalism to feed myself, and I think most people would.
If we assured basic survival at a reasonable level, you might find far fewer engineers willing to work in any of these spaces. It boils down to what alternatives they have and just how firm they are on some ethical line in the sand. We'd pretty much improve the world all around, I'd say. Our economic system doesn't want that, though; it wants to be able to apply this level of pressure on people, and so do the highly successful who leverage their wealth as power. As such I don't see how that will ever change: you'll always have someone doing terrible things, depending on who is the most desperate.
I think we'd be better off making things for each other and being present and local rather than trying to hyperstimulate ourselves into oblivion.
I'm just some dude though. It's not making it to the headlines.
Doesn't have to be on headlines. Even just hearing that gives me a bit more energy to fight actively against the post-useful developments of modern society. Every little bit helps.
>I would love to see more programmers refusing to work on AI.
That is just ridiculous. Modern neural networks are obviously an extremely useful tool.
I have a family. I work for a company that does stuff for the government.
I'd _rather_ be building and working on my cycling training app all day every day, but that doesn't make me any money, and probably never will.
All the majority of us can hope for is to build something that helps people and society, and hope that does enough good to counteract the morally grey in this world.
Nothing is ever black and white.
This is not effective.
Having a regulated profession that is held to some standards, like accountants, would actually work. Without unions and without a professional body, individual action won't achieve anything.
But the software developer whose code handles the personal information of 10 million people should know that you don't store passwords in plain text, which developers and business leaders at Virgin Media did not know; if you clicked 'forgot password' they would send you a letter with your password In The Mail.
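For contrast, the baseline any such developer should know: store only a salted, deliberately slow hash, never the password itself. A minimal sketch using libsodium (any equivalent of bcrypt/scrypt/Argon2 makes the same point):

    #include <sodium.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (sodium_init() < 0)
            return 1;

        const char *password = "correct horse battery staple";
        char hash[crypto_pwhash_STRBYTES];

        /* Store only this salted, slow hash -- never the password. */
        if (crypto_pwhash_str(hash, password, strlen(password),
                              crypto_pwhash_OPSLIMIT_INTERACTIVE,
                              crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0)
            return 1;  /* out of memory */

        /* At login, verify the submitted password against the hash;
           there is no way (and no need) to mail the original back. */
        if (crypto_pwhash_str_verify(hash, password, strlen(password)) == 0)
            puts("password ok");
        return 0;
    }

Nothing here is exotic; 'forgot password' then means a reset link, because the plaintext simply doesn't exist anywhere to mail out.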
How much of him being a hero is a coincidence? Did he refuse to sign the previous launches? Did NASA have reasons to believe that the launch could be successful? How much of a role does probability play here? I mean if someone literally tells you something isn't safe, especially the person who made it, you can't tell him it will work. There is some kind of bias here.
His decision would have been questioned after the fact, he would defer to information from levels below, and this would recurse until responsibility had dissipated beyond any personal attribution. The same pattern happens in every org, every day (for decisions of mostly lesser effect).
The key point—at least from my read—was the follow-up actions: to highlight where information was intentionally ignored, prevent that dispersion of responsibility, and ensure it didn't happen again.
Unfortunately, while that specific problem did not happen again, the general cultural changes that were supposed to happen had been lost 15 years later. The loss of Columbia in 2003 was due to the same kind of poor decision making and problem solving process that was involved in the loss of Challenger.
So NASA probably didn’t look closely into the engineering, in particular when launch is tomorrow.
Yes, they did. NASA had been told by Thiokol the previous summer about the O-ring issue and that it could cause the loss of the Shuttle--and ignored the recommendation to ground the Shuttle until the issue was fixed. The night before the launch there was a conference call where the Thiokol engineers recommended not launching. Detailed engineering information was presented on that call--and it was information that had already been presented to NASA previously. NASA knew the engineering information and recommendation. They chose to ignore it.
The form he talked about was one that, if not signed, would mean that the launch would not happen. I can't remember if it was an internal form or not, but it doesn't really matter in that context.
Since NASA needed that form signed, he was under intense pressure from both NASA and his own company to actually sign it. Someone else from the company, not on site, signed it instead.
Basically, the "powers that be" wanted the launch and overruled the concerns of the engineers. They forced the launch against better judgement.
(Think of the, "Oh, that nerd is always complaining, I'm going to ignore them because they aren't important," attitude.)
None. He knew the right thing to do and did it despite extreme pressure.
> Did he refuse to sign the previous launches?
I don't know about him personally, but Thiokol, at the behest of McDonald and other engineers, had sent a formal letter to NASA the previous summer warning about the O-ring issue and stating explicitly that an O-ring failure could lead to loss of vehicle and loss of life.
> Did NASA have reasons to believe that the launch could be successful?
Not valid ones, no. The launch took place because managers, at both NASA and Thiokol, ignored valid engineering recommendations. But more than that, NASA had already been ignoring, since the previous summer, valid engineering recommendations to ground the Shuttle until the O-ring issue was understood and fixed.
> I mean if someone literally tells you something isn't safe, especially the person who made it, you can't tell him it will work.
You literally can.
Sounds kinda familiar?
Dude got a lunch beer without a second thought. (My man!)
He then gave a talk that afternoon about interrupting a closed session of the Challenger commission to gainsay a Thiokol VP. The VP in question testified to Congress that he wasn't aware of any launch risks. McDonald stood up, went to the aisle, and said something to the effect of "Mr. Yeager, that is not true - this man was informed of the risks multiple times before the launch. I was the one that told him." (He was addressing Chuck Yeager, btw. Yeah, that Chuck Yeager.)
No mean feat to have the stones to interrupt a congressional hearing stacked with America's aviation and space heavyweights.
My understanding is that it was the NASA manager, Larry Mulloy, who had given the go for launch for the SRBs.
As much as his actions were admirable, the most shocking thing about that story was how the politicians rallied to protect him after his demotion, forcing his company to keep and actually promote him. That's why I get both sad and angry when I hear the new mantra of "Government can't do anything, the markets have to regulate that problem."
Distribution of power is definitely important though, whether public or private. The concern about government abuse comes from the fact that, by its nature, government power structures are more often centralised and without competitors by definition. There are monitors, but they are often part of the same system.
That's been the conservative line for 35+ years. How is that new?
It sounds like the most noteworthy part of his legacy is attempting to do the right thing, but with the wrong people.
I think this is meaningful to mention, because saying to do "the right things, at the right time, with the right people" is easy -- but it's harder to figure out what that really means, and how to achieve that state when you have incomplete control.
> but harder is figuring out what that really means
I think it is quite clear, except the part about "right people"; if the people around you are not right, I would guess it is even more important to do the right thing. Obviously this comes at a (potentially great) cost, which is why it is easier said than done and why his actions are so admirable.
For startup founders, you can try to hire "the right people". (And share the equity appropriately.)
For job-seekers, when you're interviewing with them, you can ask yourself whether they're "the right people". (And don't get distracted by a Leetcode hazing, in what's supposed to be collegial information-sharing and -gathering by both parties.)
(One corporation though seems to withdraw from that language due to the attitude of the project and its representatives.)
Could you tell us the precise language / corporation / project, if you're comfortable with that, of course?
Something that I find really frustrating is that it seems that there's an international "caste" of honest engineers who are ready, and have been ready for centuries if not millennia, to pull the metaphorical trigger on advancing human society to the next level. International rail systems, replacing all electrical generation with nuclear, creating safe and well-inspected commercial airplanes, etc.
Blocking that "caste" from uniting with each other and coordinating these projects are the Old Guard; the "local area warlords", although these days they may have different titles than they would have had a thousand years ago. These people do not speak a language of technical accuracy; rather, their primary guiding principle is personal loyalty, as was common in old honor societies. They introduce graft, violence, corruption, dishonesty, and personal asset capture into these projects and keep them from coming to fruition. They would not sacrifice their lifestyles in order to introduce technical excellence into the system they're charged with managing, but instead think more about their workload, their salary, their personal obligations to their other (often dishonest) friends, and their career tracks.
It wouldn't even occur to me to worry more about a promotion than the technical merit of a machine or system I was engaged with. I would never lie about something I or a colleague of mine said or did. For those reasons I will never be particularly competitive with the people who do become VPs and executive managers.
How many different people around the world, and especially on HackerNews, are in my exact situation? With the right funding and leadership we could all quit our stupid fucking jobs building adtech or joining customer databases together or generating glorified Excel spreadsheets, and instead be the International Railway Corps, or the International Nuclear Corps. And yet, since we can't generate the cashflow necessary to satisfy the Local Area Warlords that own all the tooling facilities and the markets and the land, it will never be.
Sure, but they need to understand the risks, and be open about the choices they are making. Ideally at the time, but certainly covering it up after it goes wrong is not acceptable.
I just keep waiting for that magical invisible hand to swoop in and fix this cluster f_ck... What could possibly be holding it up?
And then all of their government contracts should have been revoked.
> The NASA official simply said that Thiokol had some concerns but approved the launch. He neglected to say that the approval came only after Thiokol executives, under intense pressure from NASA officials, overruled the engineers.
My hero, but also Don Quixote. I'm a huge believer in Personal Integrity and Ethics, but I am painfully aware that this makes me a fairly hated minority (basically, people believe that I'm a stuck-up prig), especially in this crowd.
I was fortunate to find an employer that also believed in these values. They had many other faults, but deficient institutional Integrity was not one of them.
This doesn’t match my experience at all. In my experience, the average person I’ve worked with also believes in personal integrity and is guided by a sense of ethics. One company I worked for started doing something clearly unethical, albeit legal, and the resulting backlash and exodus of engineers (including me) was a nice confirmation that most people I work with won’t tolerate unethical companies.
I have worked with people who take the idea of ethics to such an unreasonable extreme that they develop an ability to find fault with nearly everything. They come up with ways to rationalize their personal preferences as being the only ethical option, and they start finding ways to claim things they don’t like violate their personal integrity. One example that comes to mind is the security person who wanted our logins to expire so frequently that we had to log in multiple times per day. He insisted that anything less was below his personal standards for security and it would violate his personal integrity to allow it. Of course everybody loathed him, but not because they lacked personal integrity or ethics.
If you find yourself being a “hated minority” or people thinking you’re a “stuck-up prig” for having basic ethics, you’re keeping some strange company. I’d get out of there as soon as possible.
Actually, that's this community. I do understand. Money is the only metric that matters, here, as it's really an entrepreneur forum. Everyone wants to be rich, and they aren't particularly tolerant of anything that might interfere with that.
But I'm not going anywhere. It's actually fun, here. I learn new stuff, all the time.
Says who? Did I agree to that when I subscribed?
> Everyone wants to be rich,
Everyone? Like me too? Tell me more about that.
In an earlier comment you said that people believe that you are "a stuck-up prig". Are you sure it is due to your moral stance, and not because you are judgemental and abrasive about it?
Perhaps if you were less set in your mind about how you think everyone is, you wouldn't come across as "a stuck-up prig". Maybe we would even find common ground between us.
This place is surprisingly mixed in that regard given its origin; a significant number of comments I see about Apple, about OpenAI, about Paul Graham, are essentially anti-capitalist.
The vibe I get seems predominately hacker-vibe rather than entrepreneur-vibe.
That said, I'm also well aware of the "orange site bad" meme, so this vibe I get may be biased by which links I find interesting enough to look at the discussions of.
The demoralizing part is folks that are getting screwed by The Big Dogs and totally reflect the behavior, even though TBD think of them as "subhuman."
I guess that it is a matter of definition.
I treat it as if it were a community, and that I am a member of that community, with rights and Responsibilities, thereof.
I know that lots of folks like to treat Internet (and, in some cases, IRL) communities as public toilets, but I'm not one of them. I feel that it is a privilege to hang out here, and don't want to piss in the punch bowl, so I'm rather careful about my interactions here.
I do find it a bit distressing to see folks behaving like trolls here. A lot of pretty heavy-duty folks participate on HN, but I guess the casual nature of the interactions encourages folks to lose touch with that.
I think that it is really cool, that I could post a comment, and have an OG respond. I suspect that won't happen, too often, if I'm screeching and flinging poo.
You can identify that there may be a trend within a community without declaring that everyone in the community thinks the exact same way. And you could also be wrong about that trend because the majority is silent on the issue and you bump up against the vocal minority.
Perhaps you can elaborate on what a community is, and how HN differs from one.
It does require some common focus, and common agreement that the community is important.
I do believe that we have those, here. The "common focus" may not be immediately apparent, but I think everyone here shares a desire to be involved in technology; which can mean a few things, but I'll lay odds that we could find a definition that everyone could agree on.
It is possible. I guarantee it.
I don’t think most people expect you to quit on the spot and walk straight into unemployment.
At a previous job I saw unethical choices made by my boss, but the company as a whole wasn't doing anything wrong. One of my coworkers was asked to do something unethical and he refused, but he wasn't punished and wasn't forced to choose between his ethics and the job.
For instance, I joined a company that advertised itself as being fairly ethical (they even had a "no selling to military" type policy). However, after joining it was apparent that this wasn't the case. They really pushed transparent salaries, but then paid me way more than anyone else. There was a lot of sexism as well: despite one of my colleagues being just as skilled as I am, this colleague was given all the crap work because leadership didn't think they were as capable as I was. There was a lot of other stuff as well, but that's the big summary. I left after nine months.
The other company was similar, but it wasn't nearly as obvious at first. Over time it became very apparent that the founders cared more about boosting their own perception in the industry than they did the actual startup, and they also allowed the women in the company to be treated poorly. This company doesn't exist anymore.
I should mention that these were all startups I worked at, and I was always fairly highly positioned in the company. This meant I generally reported directly to the founders themselves. If it was something like a middle management issue I'd have tried to escalate it up to resolve it before just leaving, but if that doesn't work I'm financially stable enough to just leave.
In startups like that, company culture and the founders' behavior are nearly one and the same.
That's sad you had to deal with that kind of stuff. Even in the bad jobs I've had, the bad bosses treated the employees equally poorly.
Speaking as a "security person", I passionately despise people like this because they make my life so much more difficult by poisoning the well. There are times in security where you need to drop the hammer, but it's precisely because of these situations that you need to build up the overall good will with your team of working with them. When you tell your team "this needs to be done immediately, and it's blocking", you need to have built up enough trust that they realize you're not throwing yet another TPS report at them, this time it's actually serious, and they do it immediately, as opposed to fighting/escalating.
And yes, like the original poster, most of them think they're the main character in a suspense-thriller where they're The Only Thing Saving Humanity From Itself, when really they're the stuck-up side relief character in someone else's romcom, at best.
That's an interesting read of what I posted.
Glad to have been of service!
Individual aspirations are not enough, if your org doesn't shape itself in a way to prevent bad outcomes, bad outcomes will happen.
Here’s to prigs!
I think Tesla is somewhat reckless with self-driving, but we all need to agree that human drivers aren't much better, and they don't generate any controversy.
At the current state of the art for self-driving, this simply is not true. Humans are much better, on average. That's why the vast majority of cars are still driven by humans.
The technology will keep improving, and at some point one would expect that it will be more reliable than humans. But it's significantly less reliable now.
PS: I'm not claiming that every single transport need can be solved by trains, but they do dramatically reduce the cost in human life. Yes, they have to be part of a mix of other solutions, such as denser housing. Yes, you can have bad actors that don't maintain their rail and underpay/understaff their engineers which leads to derailments, etc. I say this because the utopia of not having to drive, not caring about sleepiness, ill health, or intoxication, not having to finance or repair a vehicle or buy insurance, not renting parking spots, all that is available today without having to invent new lidar sensors or machine vision. You can just live in London or Tokyo.
Not for everyone, we didn't. Self-driving cars have the potential to serve people who don't want to restrict themselves to going places trains can take them.
> You can just live in London or Tokyo.
Not everyone either can or wants to live in such places. If I prefer to live in a less dense area and have a car, the risk is mine to take. And if at some point a self-driving car can drive me more reliably than I can drive myself, I will gladly let it do so.
I traveled there regularly, for over 20 years.
Their train system is the Eighth Wonder.
A lot of the reason is cultural. Trains are a standard part of life. Most shows have significant scenes on commuter trains, as do ads. Probably wouldn’t apply to nations like the US.
If self-driving cars at their current level of reliability were as common as human drivers, they would be killing much more than a million people a year.
When I am satisfied that a self-driving car is more reliable than I am, I will have no problem letting it take me places instead of driving myself. But not until then.
Anyway, subways are awesome.
The right way asks for community buy-in, follows safety procedures, is transparent and forthcoming about failures, and is honest about capabilities and limitations.
The wrong way says “I can do what I want, I’m not asking permission, if you don’t like it sue me” The wrong way throws the safety playbook out the window and puts untrained operators in charge of untested deadly machines. The wrong way doesn’t ask for community input, obfuscates and dissembles when challenged, is capricious, vindictive, and ultimately (this is the most crucial part) not effective compared to the right way of doing things.
Given a choice between the safe thing to do and the thing that will please Musk, Tesla will always choose the latter.
This is what happens in the real world when you're a stuck up prig, not the Hollywood movie ending you've constructed in your head.
Same here, it's not paying well, but it feels refreshing to know that babies won't get thrown into mixers if you stop thinking for 10 minutes.
This is like when you tell an interviewer your great flaw is being too much of a perfectionist.
I have no idea why the tech industry is such a moral cesspool.
Ok, on the one hand, getting to play with cool robots, and eg using an actual forklift for debugging? Absolutely priceless, wouldn't trade it for the world.
But the ethical side of things? There's definitely ethics, don't get me wrong. Especially on the hardware side - necessary for safety after all. But the way software is sold and treated is ... different.
SpaceX's method is not "fuck around and find out". It's design, find out, iterate. From what I can tell from the outside, it seems very reasonable.
From another angle, showing how some of them had to run away from the toxic fumes: https://www.youtube.com/watch?v=EQ1j85VgALA
https://rumble.com/v4wxpje-challenger-astronauts-alive-deman...
The fact remains that these people the guy found look extremely similar, but correctly aged, and have the same names. If it's not indicative of some bizarre conspiracy, it's still an extremely weird coincidence.
I'd have hoped someone could calculate some odds based on names and looks or something and make it make sense.