I came to share a blog post I just published, titled: "ChatGPT Health is a Marketplace, Guess Who is the Product?"
OpenAI is building ChatGPT Health as a healthcare marketplace where providers and insurers can reach users with detailed health profiles, powered by a partner whose primary clients are insurance companies. Despite the privacy reassurances, your health data sits outside HIPAA protection, in the hands of a company facing massive financial pressure to monetize everything it can.
https://consciousdigital.org/chatgpt-health-is-a-marketplace...
This thread has a theme I see a lot in ChatGPT users: They're highly skeptical of the answers other people get from ChatGPT, but when they use it for themselves they believe the output is correct and helpful.
I've written before on HN about my friend who decided to take his health into his own hands because he trusted ChatGPT more than his doctors. By the end he was on so many supplements and "protocols" that he was doing enormous damage to his liver and immune system.
The more he conversed with ChatGPT, the better he got at getting it to agree with him. When it started to disagree or advise caution, he'd blame it on overly sensitive guardrails, delete the conversation, and start over with an adjusted prompt. He'd repeat this until he had something to copy and paste to us to "prove" that he was on the right track.
As a broader anecdote, I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple of communities I'm in with a lot of younger people. Combined with the TikTok trend of diagnosing everything as a symptom of ADHD, this is becoming really worrying. In some cohorts, it's a rarity for someone to believe they don't have ADHD. There are also a lot of complaints from people who are angry their GP wouldn't just write a prescription for Adderall, along with tips for shopping around to find doctors who won't ask too many questions before prescribing.
This may be caused by ChatGPT response patterns, but it doesn't necessarily mean there is an increase in false (self-)diagnoses. The question is: What is alarming about the increasing rate of diagnoses?
There has been an increase in positive diagnoses over the last few decades, which has been partially attributed to adult diagnoses, which weren't common until the 1990s or later, and to the fact that non-male patients often remained undiagnosed because of a stereotypical view of ADHD.
If the diagnosis helps, then it's a good thing! If it turns out that 10% of the population are ADHDers, then let's see how we can change our environment to reflect that fact. In many cases, meds aren't needed as much when public spaces provide the necessary facilities to retreat for a few minutes, wear headphones, chew gum, or fidget.
The story of your friend sounds very bad and I share your point here, completely. But concerning ADHD, I still don't see what's bad about the current wave of self-diagnoses. If people buy meds illegally, use ChatGPT as a therapist, etc. THAT is a problem. But not identifying with ADHD itself (same for Autism, Depression, Anxiety and so on).
ADHD itself may even be a reinforcing factor in an LLM user being convinced by the novelty of the tool, but that would have to be empirically evaluated. If it were so, this could even contribute to a better rate of diagnoses, without ChatGPT's capabilities in this field contributing much to the effect. Many ADHDers suffer from failing at certain aspects of daily life over and over, and advice that helps others only makes them feel worse because it doesn't work for them (e.g. building habits or rewarding oneself for reaching a milestone can be much more difficult for ADHDers than non-ADHDers). I'm just guessing here, and this doesn't hold for all ADHDers, but: whenever a new and possibly fun tool comes along that feels like an improvement, there can be a spark of enthusiasm that may lead to increased trust. This usually decreases after a while, and I'd guess that given a bit more time with LLMs being around, their popularity in this field may also decrease.
The real "evil" here is that companies like Meta, Google, and now OpenAI sell people a product or service that the customer thinks is the full transaction. I search with Google, they show me ads - that's the transaction. I pay for Chatgpt, it helps me understand XYZ - that's the transaction.
But it isn't. You give them your data and they sell it - that's the transaction. And that obscurity is not ethical in my opinion.
I think that's the wrong framing. Let's get real: They're pimping you out. Google and Meta are population-scale fully-automated digital pimping operations.
They're putting everyone's ass on the RTB street and in return you get this nice handbag--err, email account/YouTube video/Insta feed. They use their bitches' data to run an extremely sophisticated matchmaking service, ensuring the advertiser Johns always get to (mind)fuck the bitches they think are the hottest.
What's even more concerning about OpenAI in particular is they're poised to be the biggest, baddest, most exploitative pimp in world history. Instead of merely making their hoes turn tricks to get access to software and information, they'll charge a premium to Johns to exert an influence on the bitches and groom them to believe whatever the richest John wants.
Goodbye democracy, hello pimp-ocracy. RTB pimping is already a critical national security threat. Now AI grooming is a looming self-governance catastrophe.
"You are the product" is a good catchphrase to make people understand. But actually when you search or interact with LLMs, you provide not only primary data about yourself but also about other people by searching for them in connection with specific search terms, by using these services from your friend's house which connects you to their IP-Address, by uploading photos of other people etc.
"You are the product and you come with batteries (your friends)."
My concern, and the reason I would not use it myself, is the all-too-frequent skirting of externalities. For every person who says "I can think for myself and therefore understand if GPT is lying to me," there are ten others who will take it as gospel.
The worry I have isn't that people are misled - this happens all the time, especially in alternative and contrarian circles (anti-vax, homeopathy, etc.) - it's the impact it has on medical professionals, already overworked, who will have to deal with people's commitment to an LLM-based diagnosis.
The patient who blindly trusts what GPT says is going to be the patient who argues tooth and nail with their doctor about GPT being an expert, because they're not power users who understand the technical underpinnings of an LLM.
Of course, my take completely ignores the disruption angle - tech and insurance working hand in hand to undercut regulation, before eventually pulling the rug.
I wish / hope the medical community will address stories like this before people lose trust in them entirely. How frequent are misdiagnoses like this? How often is "user research" helping or hurting the process of getting good health outcomes? Are there medical boards sending PSAs to help doctors improve on common misdiagnoses? What's the role of LLMs in all of this?
This includes researching their own condition, looking into alternate diagnoses/treatments, discussing them with a physician, and potentially getting a second opinion.
Especially the second opinion. There are good and bad physicians everywhere.
But advocating also does not mean ignoring a physician's response. If they say it's unlikely to be X because of Y, consider what they're saying!
Physicians are working from a deep well of experience in treating the most frequent problems, and some will be more or less curious about alternate hypotheses.
When it comes down to it, House-style medical mysteries are mysteries because they're uncommon. For every "doc missed Lyme disease" story there are many more "it's just flu."
I believe you do not fully appreciate how long and exhausting this is, especially when sick...
They do this not because it isn't their job to protect fighters from illegal blows, but because the consequences of illegal blows are sometimes unfixable.
An encouragement for patients to co-own their own care isn't a removal of a physician's responsibility.
It's an acknowledgement that (1) physicians are human, fallible, and not omniscient, (2) most health systems have imperfect information syncing across multiple parties, and (3) no one is going to care more about you than you (although others might be much more informed and capable).
Self-advocacy isn't a requirement for good care -- it's due diligence and personal responsibility for a plan with serious consequences.
If a doc misses a diagnosis and a patient didn't spend any effort themselves, is that solely the doctor's fault?
PS to parent's insinuation: 20 years in the industry and 15 years of managed cancer in immediate family, but what do I know?
My question is, since you understand this very well, how successful are patients (that manage the effort) at both acquiring scientifically accurate knowledge and improving their health meaningfully?
And maybe share some tips like good knowledge databases?
We trade away our knowledge and skills for convenience. We throw money at doctors so they'll solve the issue. We throw money at plumbers to turn a valve. We throw money at farmers to grow our veggies.
Then we wonder why we need help to do basic things.
Anyway, what are you paid for? Guessing you're a programmer: you just sit in a chair all day and press buttons on a magical box. As your customer, why am I having to explain what product I want and what my requirements are? Why don't you have all my answers immediately? How dare you suggest a different specialism? You made a mistake?!?
There's a reason why flour has iron and salt has iodine, right? Individual responsibility simply does not scale.
Applies doubly now that many health care interactions are transactional and you won't even see the same doctor again.
On a systemic level, the likely outcome is just that people who manage their health better will survive, while people who don't will die. Evolution in action. Managing your health means paying attention when something is wrong and seeking out the right specialist to fix it, while also discarding specialists who won't help you fix it.
This is just factually not true. Healthy people subsidize the unhealthy (even those made unhealthy by their own idiocy) to a truly absurd degree.
This is disorganized thinking. Anecdotes about what? Does my uncle having an argument with his doctor over needing more painkillers, combined with an anecdote about my sister disagreeing with a midwife over how big her baby would be, combined with my friend outliving their stage 4 cancer prognosis, all add up to "therefore I'm going to disregard nutrition recommendations"? Even if they were all right and the doctors were all wrong, they still wouldn't aggregate in a particular direction the way that a study on processed foods does.
And frankly it overlooks psychological and sociological dynamics that drive this kind of anecdotal reporting, which I think are more about tribal group emotional support in response to information complexity.
In fact, reasoning from separate instances that are importantly factually different is a signature line of reasoning used by alien abduction conspiracy theorists. They treat the cultural phenomenon of "millions" of people reporting UFOs or abduction experiences over decades as "proof" of aliens writ large, when the truth is they are helplessly incompetent interpreters of social data.
It's when "being your own health advocate" turns into "being your own doctor" that the system starts to break down.
Not sure about your sister and uncle, but from my observations the anecdotes combine into “doctor does not have time and/or doesn’t care”. People rightfully give exactly zero fucks about Bayes' theorem, national health policy, insurance companies, social dynamics, or whatever, when the doctor prescribes Alvedon after 5 minutes of listening to the indistinct story of a patient with a complicated condition that would likely be solved with additional tests and dedicated time. ChatGPT is at least not in a hurry.
Too late for me. I have a similar story. ChatGPT helped me diagnose an issue which I had been suffering with my whole life. I'm a new person now. GPs don't have the time to spend hours investigating symptoms for patients. ChatGPT can provide accurate diagnoses in seconds. These tools should be in wide use today by GPs. Since they refuse, patients will take matters into their own hands.
FYI, there are now studies showing ChatGPT outperforms doctors in diagnosis. (https://www.uvahealth.com/news/does-ai-improve-doctors-diagn...) I can believe it.
My own story is one of bias. I spent much of the last 3 years with sinus infections (when I wasn't on antibiotics). I went to a couple of ENTs; one observed an allergic reaction in my sinuses and ran a small allergy panel, but it came back negative. He ultimately wanted to put me on a CPAP and nebulizer treatments. I fed all the data I had into ChatGPT deep research, and it came back with an NIH study that said 25% of people in the study had localized allergic reactions that would show up in one place but not elsewhere on the body in an allergy test. I asked my ENT about it and he said "That's not how allergies work."
I decided to just try second-generation allergy tablets to see if they helped, since that was an easy experiment. It's been over 6 months since I've had a sinus infection, whereas before this I couldn't go 6 weeks after antibiotics without a recurrence.
Now, obviously none of this math would actually hold up to any scrutiny, and there's a bevy of reasons the quality of those interactions would not be random. But just as a sense of scale, and bearing in mind that a lot of people will easily remember a single egregious interaction for the rest of their life and (very reasonably!) be eager to share their experience with others, it would require a statistically impossible error rate for threads like these not to fill up with anecdotes of the most heinous, unpleasant, ignorant, and incompetent encounters anyone could ever imagine.
And this is just looking at the sheer scale of medical care, completely ignoring the long hours and stressful situations many doctors work in, patients' imperfect memories and one-sided recollections (that doctors can never correct), and the fundamental truth that medicine is always, always a mixture of probabilistic and intuitive judgement calls that can easily, routinely be wrong, because it's almost never possible to know for sure what's happening in a given body, let alone what will happen.
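To put rough numbers on that scale argument, here's a back-of-envelope sketch; the visit count and error rates are illustrative assumptions on my part, not sourced figures:

    # Back-of-envelope: how many memorably bad doctor-patient interactions
    # would exist even at implausibly low error rates? All numbers below
    # are illustrative assumptions, not sourced statistics.
    visits_per_year = 1_000_000_000  # assume ~1 billion US physician visits/year

    for error_rate in (0.01, 0.001, 0.0001):
        bad_visits = visits_per_year * error_rate
        print(f"error rate {error_rate:.2%}: ~{bad_visits:,.0f} bad visits/year")

Even the 0.01% row leaves ~100,000 horror stories a year, more than enough to fill every thread like this one.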
That E.N.T. wasn't up to date on the latest research on allergies. They also weren't an allergy specialist. They also were the one with the knowledge, skills, and insight to consider and test for allergies in the first place.
Imagine if we held literally any other field to the standard we hold doctors to. It's, on the one hand, fair, because they do something so important and dangerous and get compensated comparatively well. But on the other hand, they're humans with incomplete, flawed information, channeling an absurdly broad and deep well of still-insufficient education that they're responsible for keeping up to date, while looking at a unique system in unique circumstances and trying to figure out what, if anything, is going wrong. It's frankly impressive that they do as well as they do.
Nothing. You just... feel more sympathetic to doctors and less confident that your own experience meant anything.
Notice what's absent: any engagement with whether the AI-assisted approach actually worked, whether there's a systemic issue with ENTs not being current on allergy research, whether patients should try OTC interventions as cheap experiments, whether the 25% localized-reaction finding is real and undertaught.
The actual medical question and its resolution get zero attention.
Also though...
You are sort of just telling people "sometimes stuff is going to not work out, oh also there's this thing that can help, and you probably shouldn't use it?"
What is the action you would like people to take after reading your comment? Not use ChatGPT to attempt to solve things they have had issues solving with their human doctors?
This is not ChatGPT outperforming doctors. It is doctors using ChatGPT.
Why wouldn't they be? This would seem to be engagement bait for a certain type of anti-AI person. Why would you expect this to be the case? "My dad died because he used that dumb machine" -- surely these stories will be everywhere, right?
Let's make our beliefs pay rent in anticipated experiences!
Where's the failure?
Over time, as a human, the doctors just turn into ABC -> 123 machines.
Fair question, but one has to keep in mind ALL the other situations we do NOT hear about, namely all the failed attempts that did take time from professionals. It doesn't mean the successful attempts aren't justified, only that a LOT of positive anecdotes can give the wrong impression when they aren't weighed against the many negative ones that are simply not shared. It's hard to draw conclusions either way without both.
Classic HN. /s
I think there's a difference between questioning your doctor, and questioning advice given by almost every doctor. There are plenty of bad doctors out there, or maybe just doctors who are bad fits for their patients. They don't always listen or pay close attention to your history. And in spite of their education they don't always choose the correct diagnosis.
I also think there's an ever-increasing difference between AI health research and old-school WebMD research.
Even good doctors have a real hard time convincing the bad doctors to do their job right. Never mind some random patient with a slightly less obvious diagnosis.
This is nothing like anti-vax, because it is not implying a failing of medical science. It just states that enough doctors are bad enough at their job that user research is useful, if only to realize you need to go to a better doctor.
https://bmjopen.bmj.com/content/bmjopen/5/7/e008155.full.pdf
A doctor can also have an in-the-moment negatively impactful context: depression, exhaustion, or any number of life events going on, all of which can drastically impact their performance. Doctors get depressed like everybody else. They can care less because of something affecting them. These are not problems a good LLM has.
Anti-vax, otoh, is driven by ignorance and a failure to trust science, in the form of neither doctors nor new types of science. Plus, anti-vax works like flat earth: a signaling mechanism of poor epistemic judgment.
I think, similar to tech, doctors are attracted to the money, not the work. The AMA (I think; possibly another org) artificially restricts the number of slots for new doctors, constraining the supply, while private equity squeezes hospitals and buys up private practices. The failed doctors sit on the side of insurance, trying to prevent care from being performed, and it's up to the doctor with the time/energy to fight insurance and the hospital to figure out what's wrong.
So what? Am I supposed to clutch pearls and turn off my brain at the stopword now?
The anecdote in question is not about misdiagnosis; it's about a delayed diagnosis. And yeah, the inquiry sent a doctor down three paths, one of which led to a diagnosis. So let's be clear: no, the doctor didn't get it completely on their own, and ChatGPT was, at best, 33% correct.
The biggest problem in medicine right now (that's creating a lot of the issues people have with it I'd claim) is twofold:
- Engaging with it is expensive, which raises the expectations of quality of service substantially on the part of the patients and their families
- Virtually every doctor I've ever talked to complains about the same things: insufficient time to give proper care and attention to patients, and the overbearingness of insurance companies. And these two feed into each other: so much of your doc's time is spent documenting your case. Basically every hour of patient work requires a second hour of charting to document it. Imagine having to write documentation for an hour for every hour of coding you did; I bet you'd be behind a lot too. Add in how overworked and stretched every medical profession is, from nursing to doctors themselves, and you have a recipe for a really shitty experience for patients, a lot of whom, like doctors, spend an inordinate amount of time fighting with insurance companies.
> How often is "user research" helping or hurting the process of getting good health outcomes?
Depends on the quality of the research. In the case of this anecdote, I would say middling. I would also say though if the anecdotes of numerous medical professionals I've heard speak on the topic are to be believed, this is an outlier in regard to it actually being good. The majority of "patient research" that shows up is new parents upset about a vaccine schedule they don't understand, and half-baked conspiracy theories from Facebook. Often both at once.
That said, any professional, doctors included, can benefit from more information from whomever they're serving. I have a great relationship with my mechanic because by the time I take my car to him, I've already ruled out a bunch of obvious stuff, and I arrive with detailed notes on what I've done, what I've tried, what I've replaced, and most importantly: I'm honest about it. I point exactly where my knowledge on the vehicle ends, and hope he can fill in the blanks, or at least he'll know where to start poking. The problem there is the vast majority of the time, people don't approach doctors as "professionals who know more than me who can help me solve a problem," they approach them as ideological enemies and/or gatekeepers of whatever they think they need, which isn't helpful and creates conflict.
> Are there medical boards that are sending PSAs to help doctors improve common mis-diagnosis?
Doctors have shitloads of journals and reading materials that are good for them to go through, which also factors into their overworked-ness but nevertheless; yes.
> Whats the role of LLMs in all of this?
Honestly I see a lot of applications of them in the insurance side of things, unless we wanted to do something cool and like, get a decent healthcare system going.
The distinction between "can't do" and "can't get paid for" seems to get lost a lot with medical providers. I'm not saying this is necessarily what's happening with your wife, but I've had it happen to me where someone says, "I can't do this test. Your insurance won't pay for it," and then I ask what it costs and it's a few hundred or a couple thousand dollars and I say, "That's OK. I'll just pay for the test myself," and something short-circuits and they still can't understand that they can do it.
The most egregious example was a prescription I needed that my insurance wouldn't approve. It was $49 without insurance. But the pharmacy wouldn't sell it to me even though my doctor had prescribed it because they couldn't figure out how to take my money directly when I did have insurance.
I get that when insurance doesn't cover something, most patients won't opt to pay for it anyway, but it feels like we need more reminders on both the patient and the provider side that this doesn't mean it can't be done.
Tell me you've never lived in poverty without telling me.
An unexpected expense of several hundred to a couple thousand dollars, for most of my lived life both as a child and a young adult, would've ruined me. If it was crucial, it would've been done, and I would've been hounded by medical billing and/or gone a few weeks without something else I need.
This is inhumanity, plain as.
This is ignorance, plain as.
The statistics you see about bankruptcy due to medical debt are highly misleading. While it is a problem, very few consumers are directly forced into bankruptcy by medical expenses. What tends to happen is that serious medical problems leave them unable to work, and then, with no income, all of their debts pile up. What we really need is a better disability welfare system to keep consumers afloat.
I am absolutely not. I am reacting to what's been replied to what I've said. In common vernacular, this is called a "conversation."
To recap: the person who replied to me left a long comment about the various struggles and limitations of healthcare when subjected to the whims of insurance companies. You then replied:
> I generally agree (and sympathize with your wife), but let's not present an overly rosy view of government run healthcare or single-payer systems. In many countries with such systems, extensive therapy simply isn't available at all because the government refuses to pay for it. Every healthcare system has limited resources and care is always going to be rationed, the only question is how we do the rationing.
Which, at least how I read it, attempts to lay the blame for the lack of availability of extensive therapies at the feet of a government's unwillingness to pay, citing that every system has limited resources and care is always being rationed.
I countered, implying that while that may or may not be true, that lack of availability is effectively the status quo for the majority of Americans under our much more expensive and highly exploitative insurance-based healthcare system. Even if those availability issues persisted through a transition to a single-payer system, it would at least spare us the uniquely American scourge of people being sent to the poorhouse, sometimes poor-lack-of-house, for suffering illnesses or injuries they are in no way responsible for, which in my mind is still a huge improvement.
> The statistics you see about bankruptcy due to medical debt are highly misleading. While it is a problem, very few consumers are directly forced into bankruptcy by medical expenses. What tends to happen is that serious medical problems leave them unable to work and then with no income and then with no income all of their debts pile up.
I mean, we can expand this if you like into a larger conversation about how insurance is tied to employment, and how everyone is kept broke on purpose to incentivize them to take on debt to survive, placing them on a debt treadmill their entire lives. That has been demonstrably shown to reduce quality and length of life, and it introduces the notion that missing any amount of work, for no matter how valid a reason, can ruin your life. It's probably a highly un-optimal and inhumane way to structure a society.
> What we really need there is a better disability welfare system to keep consumers afloat.
On that at least, we can agree.
I hear you but there are two fundamentally different things:
1. Distrust of / disbelief in science
2. Doctors not incentivized to spend more than a few minutes on any given patient
There are many many anecdotes related to the second, many here in this thread. I have my own as well.
I can talk to ChatGPT/whatever at any time, for any amount of time, and present in *EXHAUSTIVE* detail every single datapoint I have about my illness/problem/whatever.
If I was a billionaire I assume I could pay a super-smart, highly-experienced human doctor to accommodate the same.
But short of that, we have GPs who have no incentive to spend any time on you. That doesn't mean they're bad people. I'm sure the vast majority have absolutely the best of intentions. But it's simply infeasible, economically or otherwise, for them to give you the time necessary to actually solve your problem.
I don't know what the solution to this is. I don't know nearly enough about the insurance and health industries to imagine what kind of structure could address this. But I am guessing that this might be what is meant by "outcome-based medicine," i.e., your job isn't done until the patient actually gets the desired outcome.
Right now my GP has every incentive to say "meh" and send me home after a 3-minute visit. As a result I more or less stopped bothering making doctor appointments for certain things.
Also, the anti-vax movement isn't completely wrong. It's now confirmed (officially) that the COVID-19 vaccine isn't completely safe, and there are risks in taking it that don't exist in, say, the flu shot. The risk is small but very real and quite deadly. Source: https://med.stanford.edu/news/all-news/2025/12/myocarditis-v... This was something many, many doctors originally claimed was completely safe.
The role of LLMs is they take the human bias out of the picture. They are trained on formal medical literature and actual online anecdotal accounts of patients who will take a shit on doctors if need be (the type of criticism a doctor rarely gets in person). The generalization that comes from these two disparate sets of data is actually often superior to a doctor's.
Key word is “often”. Less often (but still often in general) the generalization can be a hallucination.
Your post irked me because I almost got the sense that there’s a sort of prestige, admiration and respect given to doctors that in my opinion is unearned. Doctors in my opinion are like car mechanics and that’s the level of treatment they deserve. They aren’t universally good, a lot of them are shitty, a lot are manipulative, and there are a lot of great car mechanics I respect as well. That’s a fair outlook, and it's what they deserve… but instead I see them get a level of respect that matches Mother Teresa, as if they devoted their careers to saving lives and not money.
No one and I mean no one should trust the medical establishment or any doctor by default. They are like car mechanics and should be judged on a case by case basis.
You know, for the parent post, how much money do you think those fucking doctors got to make a wrong diagnosis of dementia? Well over $700 for less than an hour of their time. And they don't even have the kindness to offer the patient a refund for the incompetence on their part.
How much did ChatGPT charge?
I never heard any doctors claim any of the covid vaccines were completely safe. Do you mind if I ask which doctors, exactly? Not institutions, not vibes, not headlines. Individual doctors. Medicine is not a hive mind, and collapsing disagreement, uncertainty, and bad messaging into “many doctors” is doing rhetorical work that the evidence has to earn.
> The role of LLMs is they take the human bias out of the picture.
That is simply false. LLMs are trained on human writing, human incentives, and human errors. They can weaken certain authority and social pressures, which is valuable, but they do not escape bias. They average it. Sometimes that helps. Sometimes it produces very confident nonsense.
> Your post irked me because I almost got the sense that there’s a sort of prestige, admiration and respect given to doctors that in my opinion is unearned. Doctors in my opinion are like car mechanics and that’s the level of treatment they deserve.
> No one and I mean no one should trust the medical establishment or any doctor by default. They are like car mechanics and should be judged on a case by case basis.
You are entitled to that opinion, but I wanted to kiss the surgeon who removed my daughter’s gangrenous appendix. That reaction was not to their supposed prestige, it was recognition that someone applied years of hard won skill correctly at a moment where failure had permanent consequences.
Doctors make mistakes. Some are incompetent. Some are cynical. None of that justifies treating the entire profession as functionally equivalent to a trade whose failures usually cost money rather than lives.
And if doctors are car mechanics, then patients are machines. That framing strips the humanity from all of us. That is nihilism.
No one should trust doctors by default. Agreed. But no one should distrust them by default either. Judgment works when it is applied case by case, not when it is replaced with blanket contempt.
There’s no data here. Many aspects of life are not covered by science because trials are expensive and we have to go with vibes.
And even on just vibes we can often get accurate judgements. Do you need clinical trials to confirm there's a ground when you leap off your bed? No. Only vibes, unfortunately.
If you ask people (who are not doctors) to remember that time, they will likely tell you this is what they remember. I also have tons of anecdotal accounts of doctors saying the COVID-19 vaccine is safe, and you can find many yourself by searching. Here's one: https://fb.watch/Evzwfkc6Mp/?mibextid=wwXIfr
The pediatrician failed to communicate the risks of the vaccine above and made the claim it was safe.
At the time, to my knowledge, the actual risks of the vaccine were not fully known and its safety was not fully validated. The overarching intuition was that the risk of detrimental effects from the vaccine was less than the risk and consequence of dying from COVID. That is still the underlying logic (and best official practice) today, even with the knowledge about the heart risk COVID vaccines pose.
This doctor did not communicate this risk at all. And this was just from a random Google search. Anecdotal, but the fact that I found one from a casual search is telling. These people are not miracle workers.
> That is simply false. LLMs are trained on human writing, human incentives, and human errors. They can weaken certain authority and social pressures, which is valuable, but they do not escape bias. They average it. Sometimes that helps. Sometimes it produces very confident nonsense.
No, it’s not false. Most of the writing on human medicine is scientific in nature: formalized with experimental trials, which are the strongest form of truth humanity has, both practically and theoretically. This “medical science” is even more accurate than other black-box sciences like psychology, as clinical trials have ultra-high thresholds and even test for causality (in contrast to much of science, which only covers correlation and assumes causality through probabilistic reasoning).
This, combined with anecdotal evidence that the LLM digests in aggregate, is a formidable force. We as humans cannot quantify all anecdotal evidence. For example, I heard anecdotal evidence of heart issues with mRNA vaccines BEFORE the science confirmed it, and LLMs were able to aggregate this sentiment, through sheer volumetric training on all the complaints about the vaccine online, and confirm the same thing BEFORE that Stanford confirmation was available.
> You are entitled to that opinion, but I wanted to kiss the surgeon who removed my daughter’s gangrenous appendix. That reaction was not to their supposed prestige, it was recognition that someone applied years of hard won skill correctly at a moment where failure had permanent consequences.
Sure, I applaud that. True hero work by that surgeon. I’m talking about the profession in aggregate: in the US, 800,000 patients die or get permanently injured from a misdiagnosis every year. Physicians fuck up, and not occasionally; it’s often and all the fucking time. You were safer getting on the 737 MAX the year before they diagnosed the MCAS errors than you are from getting a misdiagnosis and dying because of a doctor. Those engineers, despite widespread criticism, did more for your life and safety than doctors in general. That is not only a miracle of engineering, it also speaks volumes about the medical profession itself, which does NOT get equivalent criticism for its mistakes. That 800,000 statistic is swept under the rug like car accidents.
I am entitled to my own opinion just as you are to yours, but I’m making a bigger claim here. My opinion is not just an opinion; it’s a ground-truth general fact backed up by numbers.
> And if doctors are car mechanics, then patients are machines. That framing strips the humanity from all of us. That is nihilism.
There is nothing wrong with car mechanics. It’s an occupation and it’s needed. And if the cars they service fail, they can cause accidents that involve our very lives.
But car mechanics are fallible and that fallibility is encoded into the respect they get. Of course there are individual mechanics who are great and on a case by case basis we pay those mechanics more respect.
Doctors need to be treated the same way. It’s not nihilism; it’s a quantitative analysis grounded in reality. The only piece of evidence you provided in your counter is your daughter’s life being saved. That evidence warrants respect for the single doctor who saved your daughter’s life, not for the profession in general. The numbers agree with me.
And the treatment of, say, the corporation responsible for the MCAS failures versus the profession responsible for medical misdiagnoses that killed people is disproportionate. Your own sentiment and respect for doctors in general is one piece of evidence for this.
> No, it’s not false. Most of the writing on human medicine is scientific in nature: formalized with experimental trials, which are the strongest form of truth humanity has, both practically and theoretically. This “medical science” is even more accurate than other black-box sciences like psychology, as clinical trials have ultra-high thresholds and even test for causality (in contrast to much of science, which only covers correlation and assumes causality through probabilistic reasoning)
Sorry, but these kinds of remarks wreck your credibility and make it impossible for me to take you seriously.
Saying something like my "credibility is wrecked" and that it's impossible to take me "seriously" crosses a line into deliberate attack and insult. It's like calling me an idiot while staying technically within the HN rules. You didn't need to go there, and breaking those rules in spirit is just as bad imo.
Yeah I agree I think the conversation is over. I suggest we don't talk to each other again as I don't really appreciate how you shut down the conversation with deliberate and targeted attacks.
Quite a mouthful for the layman, but the symptoms you are describing would fit. NPH has one of my favorite mnemonics in medicine for students learning about the condition, describing the hallmark symptoms as: "Wet, Wobbly and Wacky."
Wet referring to urinary incontinence, Wobbly referring to ataxia/balance issues and Wacky referring to encephalopathy (which could mimic dementia symptoms).
But it seems "is it possible" also leads it into answering "no, it can't" probably modelling a bunch of naysayers.
Sometimes, if you coax it a little bit, it will tell you how to do a thing which is quite esoteric.
In reality you'll find the vast majority of GPs are highly intelligent and quite good at problem solving.
In fact, I'd go so far as to say their training is so intensive and expansive that laypeople who make such comments are profoundly lacking in awareness on the topic.
Physicians are still human, so like anything there's of course bad ones, specialists included. There's also healthcare systems with various degrees of dysfunction and incentives that don't necessarily align with the patient.
None of that means GPs are somehow less competent at solving problems; not only is it an insult but it's ridiculous on the face of it.
Pay for concierge medicine and a private physician and you get great health care. That's not what ordinary health insurance pays for.
I imagine the issue with problem solving lies more in the system doctors are stuck in and the complete lack of time they have to spend on patients.
As opposed to what, proving that GPs are highly trained, not inherently inferior to other types of physicians, and regularly conduct complex problem solving?
Heck, while I'm at it I may as well attempt to prove the sky is blue.
>I imagine the issue with problem solving lies more in the system doctors are stuck in and the complete lack of time they have to spend on patients.
Bingo.
In one case, a specialist made arguments that were trivially logically fallacious and went directly against the evidence from treatment outcomes.
In other cases, sheer stupidity of pattern matching with rational thinking seemingly totally turned off. E.g. hearing I'd had a sinus infection for a long time and insisting that this meant it was chronic, and that chronic meant the solution was steroids rather than antibiotics, despite a previous course having done nothing, and despite the fact that an antibiotic course had removed most of the symptoms, both indicating the opposite. In the end, after bypassing my GP at the time and explaining and begging an advanced nurse practitioner, I got two more courses of antibiotics and the infection finally fully went.
I'm sure all of them could have done better, and that a lot of it is down to dysfunction, such as too little time allotted to actually look at things properly, but some of the interactions (the logical fallacy in particular) have also clearly been down to sheer ignorance.
I also expect they'd eventually have gotten there, but doing your own reading and guiding things in the right direction can often short-circuit a lot of bullshit. The standard guidance might even deliver good outcomes cost-effectively at a population level (e.g. I'm sure the guidance on chronic sinus issues is right the vast majority of the time; most bacterial sinus infections either clear by themselves or are stopped early enough not to "pattern match" as chronic), but it might cause you lots of misery in the meantime...
However, your anecdotal experience is not only in line with my own experience; it is actually in line with the facts as well.
When the person you're responding to said that what you said wasn't backed up by facts, I'm going to tell you straight up that that statement was utter bullshit. Everything you're saying here is true, generally true, and something many, many patients experience.
The person you just replied to here isn't the same person I replied to.
Is this statement supported by facts? If anything, this statement is just your internal sentiment. If you claim his statement isn't supported by facts, the proper thing to do is offer facts to counter it. Don't claim his statement isn't supported by facts and then make a counterclaim without facts yourself.
https://www.statnews.com/2023/07/21/misdiagnoses-cost-the-u-...
Read that fact. 800,000 deaths from misdiagnosis a year is pretty pathetic. And this is just deaths. I can guarantee you the number of unreported mistakes that don't result in death dwarfs that number.
Boeing, the airplane manufacturer responsible for the crashing 737 MAX MCAS units, has BETTER outcomes than this. In the year those planes crashed, you had a 135x better survival rate of getting on a 737 MAX than of getting an important diagnosis from a doctor and not dying from a misdiagnosis. Yet doctors are universally respected, and Boeing as a corporation was universally reviled that year.
I will say this: GPs are in general not very competent. They are about as competent and trustworthy as a car mechanic. There are good ones, bad ones, and also ones that bullshit and lie. Don't expect anything more than that, and this is supported by facts.
Yeah, the main fact here is called medical school.[0]
>Read that fact. 800,000 deaths from misdiagnosis a year is pretty pathetic. And this is just deaths.
Okay, and if that somehow flows from GPs (but not specialists!) being uniquely poor at problem solving relative to all other types of physicians—irrespective of wider issues inherent in the U.S. healthcare system—then I stand corrected.
>135x better survival rate of getting on a 737 max
The human body isn't a 737.
>I will say this: GPs are in general not very competent. They are about as competent and trustworthy as a car mechanic.
Ignorant.
[0] https://medstudenthandbook.hms.harvard.edu/md-program-object...
Instead you say “medical school” and cite the Harvard handbook, as if everyone went to Harvard and as if a medical handbook were a quantitative metric of problem-solving success or failure. Come on, man. Numbers, not manuals.
> The human body isn't a 737
Are you joking? You know a 737 is responsible for ensuring the survival of human bodies hurtling through the air at hundreds of miles per hour at altitudes higher than Mount Everest? The fact that your risk of dying is lower going through that than getting a correct diagnosis from a doctor is quite pathetic.
This statement you made here is manipulative. You know what I mean by that comparison. Don’t try to spin it like I'm not talking about human lives.
> Ignorant.
Being a car mechanic is a respectable profession. They get the typical respect of any other occupation and nothing beyond that. I’m saying doctors deserve EXACTLY the same thing. The problem is doctors sometimes get more than that and that is not deserved at all. Respect is earned and the profession itself doesn’t earn enough of that respect.
Are you yourself a doctor? If so your response speaks volumes about the treatment your patients will get.
No. The human body actually isn't a 737.
>This statement you made here is manipulative. You know what I mean by that comparison. Don’t try to spin it like I'm not talking about human lives.
Let me spell it out then: The mechanisms by which a human body and a 737 work are so vastly different that one may as well be alien to the other. It's quite an apples and oranges comparison.
Yeah, you can draw parallels in some areas but I'd say on the whole the analogy isn't exactly apt. That said, I'll indulge:
Imagine if every 737 were a few orders of magnitude more complex, and also so different that no two planes even looked or functioned the same. Then imagine we didn't fully understand how they worked.
Point being: Medicine is fuzzy because the human body is fuzzy and imprecise. Everybody's a little different. Contrast to aviation, which is very much an exact science and engineering discipline at this point.
Medicine isn't engineering. Treating patients isn't the same as the design and manufacture of aircraft.
That of course doesn't excuse shitty healthcare systems that can clearly do better when stats indicate there's preventable adverse outcomes happening. I just don't think laying the blame at the feet of doctors somehow being too stupid to problem solve is helpful when there's a larger system that's preventing them from doing their best work for their patients. If anything that narrative is counterproductive.
>Are you yourself a doctor?
Nope, just a layperson who knows they're a layperson.
No shit sherlock.
>Let me spell it out then: The mechanisms by which a human body and a 737 work are so vastly different that one may as well be alien to the other. It's quite an apples and oranges comparison.
Should've done this in the first place because no one understands what you're saying otherwise.
The problem internals are different, but we are comparing the outcome, and that is human lives. You seem to think this is an invalid comparison. It's not.
>Medicine isn't engineering. Treating patients isn't the same as the design and manufacture of aircraft.
I never said that. The whole point was you made the claim doctors are good problem solvers because they went to medical school.
I said that claim is utter bullshit. They aren't that good, and they misdiagnose things all the time. The point still stands, and you delivered evidence to validate it: you said medicine is fuzzy and engineering exact, and that the problem is vastly more complex as well.
All of this proves the point. The problem is harder, the science is fuzzy. Doctors armed with medical science, which is definitively worse, operating on a problem that is definitively harder, will generally be WORSE problem solvers than people in other occupations IF we hold everything else the same. So doctors as a group ARE NOT good problem solvers. That WAS the point. We are referring to doctors as a group, and thus the ONLY point of comparison for problem solvers IS other occupations.
That's just a given and it follows from your OWN logic.
>That of course doesn't excuse shitty healthcare systems that can clearly do better when stats indicate there's preventable adverse outcomes happening. I just don't think laying the blame at the feet of doctors somehow being too stupid to problem solve is helpful when there's a larger system that's preventing them from doing their best work for their patients. If anything that narrative is counterproductive.
Did I lay the blame on doctors? No. I just said they aren't good problem solvers. That's a fact. That's not blame.
But let's be clear, I agree it's counterproductive to lay blame OR call doctors stupid, and such a thing WAS not done by me. I was simply making the claim that THEY are NOT good problem solvers. You inserted extra negative sentiment into the "narrative" as a hallucination of your own imagination.
Look, point is you're wrong on every count. Doctors are not good at problem solving period. They're pretty bad at it. The comparison with aviation engineers is apt because those guys are GOOD problem solvers.
And again, it's not the doctors' fault that they are incompetent. It's the hardness of the problem and the limitations of the science that make them like this.
After undergoing stomach surgery 8 years ago I started experiencing completely debilitating stomach aches. I had many appointments with my GP and a specialist, leading to endoscopies, colonoscopies, CAT scans, and MRI scans, all to no avail, and they just kept prescribing more and more antacids and stronger painkillers.
It was after seven years of this that I paid for a private food allergy test to find that I am allergic to Soya protein. Once I stopped eating anything with Soya in it the symptoms almost completely vanished.
At my next GP appointment I asked why no one had suggested it could be an allergic reaction, only to be told that it is not one of the things they check for or even suggest. My faith in the medical community took a bit of a knock that day.
On a related note, I never knew just how many foods contain Soya flour that you wouldn't expect until I started checking.
My previous one was, too.
The one I had as kid, well. He was old, stuck in old ways, but I still think he was decent at it.
But seeing the doctor is a bit more difficult these days, since the assistants are backstopping. They do some heavy lifting / screening.
I think an LLM could help with symptoms and then looking at the most probable cause, but either way I wouldn't take it too seriously. And that is the general issue with ML: people take the output too seriously, at face value. What matters is: what are the cited sources?
They aren’t that much smarter. The selection criteria are more about the ability to handle pressure than about raw intelligence.
Tons of bottom feeders go to medical schools in, say, Kansas, so there’s a lot of leeway here in terms of intelligence.
There’s a school in Kansas that sits right on top of Caribbean schools in terms of reputation. I know several people who had to go there.
I never said the state is correlated with the quality of the doctor, or even that the quality of the school is associated with the quality of the doctor. You made that up. Which makes you the liar.
You're fucking right. I should've named the specific school. (And I didn't make a comment about the state; I made a comment about school(s) in the state, which is not about all schools in the state.)
That's what I should do. What you should do is: don't accuse me of lying and then lie yourself. Read the comment more carefully. Don't assume shit.
No point in continuing this. We both get it and this thread is going nowhere.
This has been going on for 20 years: https://pmc.ncbi.nlm.nih.gov/articles/PMC1120616/
Black students (among other minorities) with MCAT scores that would be unacceptable for another race (as in, applicants of another race with those scores would be rejected) are accepted; with similar scores they are 6-10x more likely to be admitted. The motivation is that doctors should match the demographics they treat, and minority doctors are underrepresented, so they are accepted at higher rates: https://www.uclahealth.org/news/article/clinical-outcomes-pa...
The obvious outcome is that minority students, being less prepared as measured by the MCAT and somewhat set up for failure, have a much higher failure rate, with Black students being 85% more likely to leave medical school than white students: https://news.yale.edu/2023/07/31/black-md-phd-students-exper...
USMLE scores have been changed to pass/fail, to hide the actual score, to help prevent rejection of minority students who previously would have been rejected: https://n-age.org/wp-content/uploads/A-Test-of-Diversity-—-W... https://www.sciencedirect.com/science/article/abs/pii/S00904...
The system was, as is the stated goal by all, set up to pass minority students who would previously have been rejected, at every step of their becoming a doctor, to provide a net positive for minority populations, since it's accepted that you'll get the best outcome if your doctor is the same race as you.
"You know what they call the most unqualified insert-identity-here person in the med school class of 2025 who squeezed by because it would look bad if they didn't?"
"Dr."
Which is why 99% AI-driven diagnosis can't come fast enough.
1. You're free to pick the race of your doctor. Matching your race is a data-driven positive.
2. 30 years ago, the system made sure a minority doctor was (at least, but probably more so due to discrimination) as competent as a white doctor. These days, the system is intentionally and deliberately set up to help pass less competent minority students, due to the positives of #1.
It's a mushy relative thing.
In the rest of the world doctors are basically like white-collar car mechanics, and often earn less money and respect.
"According to the Government of Canada Job Bank, the median annual salary for a General Practitioner (GP) in Canada is $233,726 (CAD) as of January 23, 2024."
That's roughly $170,000 in USD. If you adjust for anything reasonable, such as GDP per capita or median income between the US and Canada, that $170k figure matches up very well with the median US general practitioner figure of around $180k-$250k (sources differ; all tend to fall within that range). GPs in Canada may in fact be slightly better paid than in the US.
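As a rough sanity check on that conversion (a sketch only; the 0.73 USD-per-CAD rate is my assumption, approximately where the exchange rate sat in early 2024):

    # Sanity check on the CAD -> USD conversion quoted above.
    # The exchange rate is an assumed ~0.73 USD per CAD (early-2024 ballpark).
    cad_salary = 233_726
    usd_per_cad = 0.73
    print(f"~${cad_salary * usd_per_cad:,.0f} USD")  # ~ $170,620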
https://www.pnc.com/insights/personal-finance/borrow/physici...
I wouldn't be surprised if AI were better than going to a GP or many other specialists in the majority of cases.
And the issue is not with the doctors themselves, but the complexity of human body.
For example, many digestive issues can cause migraines or a ton of other problems, yet I have yet to see someone referred to a gut health specialist because of a migraine.
And there are a lot of similar cases where some seemingly random system causes issues in an apparently unrelated one.
A lot of these problems are not life threatening thus just get ignored as they would take too much effort and cost to pinpoint.
AI, on the other hand, should be pretty good at figuring out those vague issues that you would never have figured out otherwise.
Not least because it almost certainly has orders of magnitude more data to work with than your average GP (who definitely doesn't have the time to keep up with reading all the papers and case studies you'd need to even approach a "full view".)
On second thought — the opposite. A bulge/blockage of CSF?
It's pretty wild that a doctor wouldn't have that as a hypothesis.
Drawing medical conditions from an urn occasionally yields a true diagnosis, even more so if the conditions in the urn are weighted according to their prevalence in society. But a disease lottery is no medical practice.
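To make the urn metaphor concrete, a toy sketch with made-up prevalence weights: drawing by base rate alone is "right" whenever the patient happens to have the common condition, which is exactly why a lucky hit proves nothing about diagnostic skill:

    import random

    # Toy "disease lottery": an urn of conditions weighted by made-up
    # prevalences. A draw is "correct" whenever the patient actually has
    # the common condition -- no examination, no medicine involved.
    conditions = ["flu", "migraine", "Lyme disease", "NPH"]
    prevalence = [0.70, 0.25, 0.04, 0.01]  # illustrative weights only

    print(random.choices(conditions, weights=prevalence, k=10))
    # mostly "flu": base rates alone buy apparent accuracy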
Two essential errors exist in medicine:
1. Delivering a wrong intervention.
2. Failing to deliver an intervention at the right time due to misdiagnosis.
These are the sins of harm and distraction. The metric for judging a system is not whether it gets things right on one occasion, but how often it makes those mistakes on all the others.
Doctors err because people and institutions are imperfect, biology is messy, and human variability is immense. But the former can be attenuated by a plurality of opinions and greater resources (including time), while statistical systems are inherently vulnerable to the latter two.
Language models hallucinate and deliver both essential errors - all while speaking in a very confident and convincing manner. What OpenAI is advertising is a system that nudges vulnerable people to abandon, distrust, ignore, or simply avoid seeking true medical practitioners to rely on a statistical system out of an unjustified, naive trust in the machine.
As a reminder to those concerned with healthcare accessibility, picking the wrong solution to a problem for lack of a right one does not solve the problem. That reasoning was the basis of practices such as bloodletting and lobotomy. Time and time again, medical science teaches us the limits of this kind of thinking.
So many doctors never bothered to conduct any tests. Many said it was in my head. Some told me to just exercise. I tried general doctors, specialists. At some point, I was so desperate that I went the homeopathy route.
15 years wasted. Why did it take the current system 15 years?
I'd bet that if I had ChatGPT earlier, it could have helped me in figuring out the issue much faster. When you're sick, you don't give a damn who might have your health data. You just want to get better.
I have some statistically very common conditions and a family medical history with explicit confirmation of inheritable genetic conditions. Yet, if I explain my problems A to Z I’m a Zebra whose female hysteria has overwhelmed his basic reasoning and relationship to reality. Explained Z to A, well I can’t get past Z because, holy crap is this an obvious Horse and there’s really only one cause of Horse-itis and if your mom was a Horse then you’re Horse enough for Chronic Horse-itis.
They don’t have time to listen, their ears aren’t all that great, and the mind behind them isn’t necessarily used to complex diagnostics with misleading superficial characteristics. Fire that through a 20 min appointment, 10 of which is typing, maybe in a second language or while in pain, 6 plus month referral cycles and… presto: “it took a decade to identify the cause of the hoof prints” is how you spent your 30s and early 40s.
Anyways, that's why I'm so bullish on LLMs for healthcare.
Do you think it would be better to live in a world with no doctors? You can already live in that world if you want. Thanks to doctors, millions of people around the world no longer die from treatable illnesses. Everyone in my family has either had their life saved, or saved from ruin, by a doctor at one point or another.
I think the world would be better if becoming a doctor weren't tied up with financial incentives and prestige. Lower the bar to becoming a doctor so the fees aren't astronomically high. There would also be more doctors, so we wouldn't suffer from the shortage of supply we currently do. And more doctors means more competition, which automatically raises the quality and accuracy of treatment.
Every doctor needs a Rotten Tomatoes score plastered on their lab coat by law. That number needs to be rooted in metrics, not vibes: how many misdiagnoses they made, how many times they lost a malpractice lawsuit. All of that would make the world a better place.
> Everyone in my family has either had their life saved, or saved from ruin, by a doctor at one point or another.
There are 800,000 patients who die or are seriously injured by a misdiagnosis every year. Show gratitude for the doctors who saved your family… but gratitude for the profession in general? My gratitude is much lower in the general case.
Yes, I once sat in a recovery room with my Mom after she had been given too much propofol during an endoscopy. Despite the fact that her breathing was labored, the clinic she was at didn't want to do anything, so I called 911. I'm not sure what happened, but I can see that side of your point. I did learn to be much more careful about how I saw to my parents' medical care after that.
They need to charge what they’re actually worth and what they are worth correlates with reliability. They also need to communicate the effective reliability to the patient.
Instead they charge something like a thousand dollars for less than an hour of their time for a false yes or no diagnosis.
lol terrible idea. Just as great as making the service you bought entirely refundable if the code has a single bug.
One false diagnosis from a doctor costs you thousands of dollars and fucks up your life.
Remember MCAS? The bug on the 737 MAX that forced Boeing to pay reparations? That's the level of bullshit people are dealing with from doctors. Life-altering stuff. This isn't some Chrome bug or smartphone bug. Therefore the penalties and repercussions of mistakes should be equivalent.
If the diagnosis only costs 100 dollars or something, and I was told that the diagnosis was only a probability… I could accept a no refund policy in that case.
It's worth noting that you framed the discussion in terms of refunds, so the incalculable extra value of a human life isn't really within the scope of a refund; you'd have a malpractice case, which is entirely different from a breach of contract. This is just about the fees paid for the service.
I’m not talking about that. What I’m talking about is fucking simple. A doctor gives you advice and you pay him thousands for it. That advice is completely fucking wrong.
In what universe does that payment make sense? In what universe is giving wrong information deserving of thousands of dollars of payment for services rendered. It’s bloody simple: it’s not deserved and a refund is in order.
Nobody is going to pay thousands of dollars for shitty advice or treatments that can potentially kill you... are you kidding me? What human will happily dish out thousands of dollars just to give "time" to the doctor for wrong advice. That has got to be a joke.
>If you want to pay for results only then you're free to negotiate a cash payment contract with your healthcare providers on that basis.
It needs to be law to make it on this basis. Every patient would demand this. The only person who wouldn't demand this is a doctor whose "time" doesn't provide results.
What's going on here is that the patient has nowhere else to turn. If every doctor negotiates on "time" and the legal system is set up this way, what other choice does the patient have than to gamble thousands on something that won't work?
Let me explain it to you plainly. The system is set up this way so patients are indoctrinated to accept unfair treatment. They can even be aware of flaws in the system, but they still have to accept it because the behavior is so widespread.
It's similar to North Korea. If everyone in north korea stood up to Kim Jong Un, the sheer number of people getting screwed over vs. people in power is so overwhelming the government would topple immediately. But the system is pervasive. And that is the medical system in the US: Pervasive and systemic. And it goes deeper than just the unreliability of doctors here.
I WANT to negotiate based on results... but thanks to the cartel-like policies of the medical system all together, I can't. Ask any patient... EVERY patient wants this, but none of them can do it. Same as your average north korean... they don't want to starve under an unreasonable regime, but they have no choice.
You may not have realized it, but your last sentence was a slip-up... Your initial sentences were an attempt to justify time-based payment by comparing and contrasting with other occupations like lawyers... but in your last sentence you were essentially (and likely accidentally) telling me to suck it up because I have no choice. That is not something any patient wants to hear.
As for your absurd assertions about what every patient wants, you're just lying and making things up. Many patients (like me) don't want that or have no strong preference at all. Your comparison to North Korea is just deranged and bears no relationship to objective reality.
The fact nobody negotiates for results based treatment is similar to how everyone really needs insurance to afford medical treatments. It's not law to have medical insurance but basically everyone needs it regardless. The problem is systemic. It's not purely related to law. Side effects and incentives stemming both from law and outside of law force things to be this way.
Stop manipulating the conversation this way. You know what I mean. Any sane person pays for results, not for time and garbage results.
>As for your absurd assertions about what every patient wants, you're just lying and making things up. Many patients (like me) don't want that or have no strong preference at all. Your comparison to North Korea is just deranged and bears no relationship to objective reality.
No it's not a lie. It's obvious to anyone reading. I'm so confident about it that I'll even say you're lying about what you want.
You don't want to pay a doctor thousands of dollars for his time and wrong advice that can potentially kill you. No patient wants this. 800,000 patients per year die or are seriously injured from misdiagnosis. Every single one of those patients wants their money back. That is objective reality. You're not stupid. You're not delusional. So you know this. At this point you're just arguing and lying.
>Your comparison to North Korea is just deranged and bears no relationship to objective reality.
800,000 patients injured or dead from misdiagnosis, roughly 300,000 of them deaths. That's equivalent to mass slaughter. One of these persons is my brother; imagine if it was yours.
It's not deranged. You're deranged for thinking it's NOT. The comparison is not only relevant as an analogy but relevant in degree of severity.
No. The payment model should change to be fair; I never said the payment reduces diagnostic errors. The patient should be informed about the probabilistic nature of the diagnosis. A contract (not in fine print) to protect the doctor from lawsuits over misdiagnosis should be signed by the patient to reflect this. Then the payment should be heavily reduced to reflect the unreliability of the diagnosis. By heavily, I mean becoming a doctor should not be a profession associated with extreme wealth, because the unreliability of their diagnosis/treatment does not convey that level of value.
>Some care quality improvements are certainly possible but those aren't necessarily tied to payment models.
I don't think it's "some" quality improvements. The US has some of the worst outcomes in the 1st world in terms of quality of care. There are massive improvements that can be made here.
>It's more important to focus on evidence-based clinical practice guidelines.
Agreed, and until the evidence, clinical practice guidelines and effectiveness of doctors rises to the level of significant reliability, both payment and respect should be adjusted to reflect the current level of low reliability.
I had it stop right there, and asked it to tell me exactly where it got this information; the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from nine months previous. It continued to insist I had ADHD, and that I told it I did, but was unable to reference exactly when/where.
I asked “do you think it’s dangerous that you have assumed I have a medical / neurological condition for this long? What if you gave me incorrect advice based on this assumption?” to which it answered a paraphrased mea culpa, offered to forget the attribute, and moved the conversation on.
This is a class action waiting to happen.
It likely just hallucinated the ADHD thing in this one chat and then made this up when you pushed it for an explanation. It has no way to connect memories to the exact chats they came from AFAIK.
*Not entirely sure; it seems to frequently hallucinate the address.
and your reasoning for this is what?
The point is ChatGPT gets various info about you and it won't disclose to you that it has them.
There's the memory feature, but various reports (and my own experience) indicate that even if you disable it, some stuff you've said before (or that the LLM inferred) is still fed into its system prompt.
We also know that AI can sometimes make up stuff. I think it might have "guessed" the user has ADHD, this got added into the system prompt and it won't be revealed to the user considering how this works. It wasn't done on purpose and wasn't malicious.
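To make the mechanism concrete, here's a minimal sketch of how a memory feature like this might be wired up. This is purely an illustration of the general pattern, not OpenAI's actual implementation; every name here is hypothetical:

    # Hypothetical sketch: inferred "memories" silently injected into the
    # system prompt of every new chat. Not OpenAI's real code.
    stored_memories = [
        "Prefers concise answers.",
        "Has ADHD.",  # possibly a model inference, never surfaced to the user
    ]

    def build_system_prompt(base: str, memories: list[str]) -> str:
        # The user sees none of this; it just shapes every reply.
        facts = "\n".join(f"- {m}" for m in memories)
        return f"{base}\n\nKnown facts about the user:\n{facts}"

    print(build_system_prompt("You are a helpful assistant.", stored_memories))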
Did some digging and there was an obscure reference to a company that folded a long time ago associated with someone who has my name.
What makes it creepier is that they have the same middle name, which isn't in my profile or on my credit card.
When I signed up for ChatGPT, not only did I turn off personalization and training on my data, I even filled out the privacy request opt-out[1] that they're required to adhere to by law in several places.
Also, given that my name isn't rare, there are unfortunately some people with unsavory histories documented online with the name. I can't wait to be confused for one of them.
You did all of that but then you gave them your real name?
Visa/MC payment network has no ability to transfer or check card holder name. Merchants act as if it does, but it doesn’t. You can enter Mickey Mouse as your first name and last name… It won’t make any difference.
Only AMEX and Discover have the ability to validate names.
FWIW, I have a paid account with OpenAI, for using ChatGPT, and I gave them no personal information.
They can, and do, validate on street number and/or zip code so you can certainly error out on typos there ... but not name.
Personally, I'm on the fence. I suspect that I've always had a bit of that, but anecdotally, it does seem to have gotten worse in the past decade, but perhaps it's just a symptom of old age (31 hehehe).
I don’t think they’re lying, but it is very clear that ADHD has entered the common vernacular and is now used as a generic term like OCD.
People will say “I’m OCD about…” as a way of saying they like to be organized or that they care about some detail.
Now it’s common to say “My ADHD made me…” to refer to getting distracted or following an impulse.
> or do you think it's possible that our pursuit of algorithmic consumption is actually rewiring our neural pathways into something that looks/behaves more like ADHD?
Focus is, and always has been, something that can be developed through practice. Ability to focus starts to decrease when you don’t practice it much.
The talk about “rewiring the brain” and blaming algorithms is getting too abstract, in my opinion. You’re just developing bad habits and not investing time and energy into maintaining the good habits.
If you choose to delete those apps from your phone or even just use your phone’s time limit features today, you could start reducing time spent on the bad habits. If you find something to replace it with like reading a book (ideally physical book to avoid distractions) or even just going outside for a 10 minute walk with your phone at home, I guarantee you’ll find that what you see as an adult-onset “ADHD” will start to diminish and you will begin returning to the focus you remember a decade ago.
Or you could continue scrolling phones and distractions, which will probably continue the decline.
This is a good place to note that a lot of people think getting a prescription will fix the problem, but a very common anecdote in these situations is that the stimulant without a concomitant habit change just made them hyperfocus on their distractions or even go deeper into more obsessive focus on distractions. Building the better habits is a prerequisite and you can’t shortcut out of it.
> Focus is, and always has been, something that can be developed through practice. Ability to focus starts to decrease when you don’t practice it much.
> The talk about “rewiring the brain” and blaming algorithms is getting too abstract, in my opinion. You’re just developing bad habits and not investing time and energy into maintaining the good habits.
> If you choose to delete those apps from your phone ...
I would like to add that focus is one of the many aspects of ADHD, and for many people, isn't even the biggest thing.
For many people, it's about the continuous noise in their mind. Brown noise or music can partly help with parts of that.
For many, it's about emotional responses. It's the difference between hearing your boss criticise you and getting heart palpitations while mentally thinking "Shit, I'm going to get fired again", vs "Ahh next time I'll take care of this specific aspect". (Googling "RSD ADHD" will give more info.)
It's the difference between wanting to go to the loo because you haven't peed in 6 hours but you can't pull yourself off your chair, and... pulling yourself off your chair.
Focus is definitely one aspect. But between the task positive network, norepinephrine and the non-focus aspects of dopamine (including - more strength! Less slouching, believe it or not!), there are a lot of differences.
Medications can help with many of these, albeit at the "risk" of tolerance.
(I agree this is a lot of detail and nuance for a random comment online, but I just felt it had to be said. Btw - all those examples... might've been from personal experience - without vs with meds.)
this is an older thing than "I'm OCD when ..."
what would you recommend if one is against the idea of medication in general for neurological issues that aren't detrimental to one's life?
do you feel the difference between being medicated and (strong?) coffee?
have you felt the effects weaken over time?
if you did drink coffee, have you noticed a difference between the medication effects weakening on the same scale as caffeine?
is making life easier with medication worth the cost over just dealing with it naturally by adapting to it over time (if even possible in your case)?
this is a personal pet-project of observing how different people deal with ADHD.
Given that people with ADHD tend to commit suicide 2x-4x more often than the general population [0], keep in mind that it's not detrimental until it suddenly is.
Also, it tends to get worse with age, so it's better to get under a doctor's care sooner rather than later.
ADHD is a debilitating neurological disorder, not a mild inconvenience.
Believe me, I wish that just drinking coffee and "trying harder" was a solution. I started medication because I spent two decades actively trying every other possible solution.
> what would you recommend if one is against the idea of medication in general for neurological issues that aren't detrimental to one's life?
If your neurological issues aren't impacting your life negatively, they aren't neurological issues. I don't know what else to say to this. Of course you shouldn't treat non-disorders with medication.
> do you feel the difference between being medicated and (strong?) coffee?
These do not exist in the same universe. It's not remotely comparable.
> have you felt the effects weaken over time?
Only initially, after the first few days. It stabilizes pretty well after that.
> if you did drink coffee, have you noticed a difference between the medication effects weakening on the same scale as caffeine?
Again, not even in the same universe. Also, each medication has different effects in terms of how it wears off at the end of the day. For some it's a pretty sudden crash, for others it tapers, and some are mostly designed to keep you at a long term level above baseline (lower peaks, but higher valleys).
> is making life easier with medication worth the cost over just dealing with it by naturally by adapting to it over time (if even possible in your case)?
If I could have solved the biological issue "naturally" I would have. ADHD comes with really pernicious effects that makes adaptation very challenging.
thanks for sharing. The coffee part is mostly about the claim that caffeine has the opposite effect on people with ADHD, or no effect at all.
> is making life easier with medication worth the cost over just dealing with it naturally by adapting to it over time (if even possible in your case)?
I am now 20, admittedly "early" in my career. Through high school and the first two years of university I banged my head against ADHD and tried to just "power through it" or adapt. Medication isn't a magic bullet, but it is clear to me now that I am at least able to rely on it as a crutch in order to improve myself and my lifestyle, and to deal with what is, at least for me, truly a disability. Maybe one day I won't need it, but in the meantime I see no reason why attempt #3289 at willpower alone would work for real this time to turn around my life.
Unmanaged ADHD is dangerous, and incredibly detrimental to people's lives, but the extent of that may not be entirely apparent to somebody until after they receive treatment. I think the attitude of being against medication for neurological issues where it is recommended by medical professionals (including for something perceived not to be detrimental enough) is, to say the least, risky.
I would perhaps encourage you to do some reading into the real-world ways ADHD affects people's lives beyond just what medical websites say.
To answer your questions, though:
* Medication vs coffee: yes, I don't notice any effect from caffeine
* Meds weakening over time: nope
* Medication cost: so worth it (£45/mo for the drugs alone in the UK) because I was increasingly not able to adapt or cope and continuing to try to do so may well have destroyed me
I know I might not have ADHD, but I happen to be a magnet for people who do, so it naturally piques my curiosity that they are all considered to have ADHD but have wildly different experiences.
The meme of ADHD as "the fucked up attention span disorder" has done immeasurable damage to people, neurotypical and ADHD alike. It is the attribute that is least important to my life, but the one most centered on the neurotypical, or the other people it bothers.
> modern life fucks up your attention span
That said, this statement is true; it's just paired with a fundamental misunderstanding of ADHD as a "dog-like instinct to go chase a squirrel" or whatever. Google is free, and so is ChatGPT if that's too hard.
> I'm not sure it's healthy to want to put a label on everything
I don't particularly care for microlabeling, but it's usually harmless, and nothing suggests the alternative of "just stop talking about your problems" is better. People usually create language because they want to label a shared idea. This is boomer talk (see "remember Facebook?" no)
> or medicate to fall back on the baseline
I'm not sure "If you have ADHD you should simply suffer because medicine is le bad" is a great stance, but you're allowed I suppose
still one of the most common symptoms, and the one everyone uses to self-diagnose...
> because medicine is le bad
idk man, I've seen the ravages of medicine on people close to me. Years of ADHD medicine, antidepressant pills, anti-obesity treatments... They're still non-functional, obese and depressed, but now they're broke and think there truly is no way out of the pit because they "tried everything" (everything besides not playing video games 16 hours a day, eating junk food 24/7 and never going out of their bedroom, but the doctors don't seem to view this as a root cause).
Whatever you think, I believe some things are overprescribed to the point of being a net negative to society. I never said ADHD doesn't exist or shouldn't be treated, btw; you seem to be projecting a lot of things. If it works for you, good. Personally, I prefer to change my environment to fit how my brain/body works, not influence my body/brain by swallowing side-effect-riddled pills until death to fit into the fucked up world we created and call "normality".
Just try harder to make insulin, bro. You can outthink that t1 diabetes if you try hard enough.
This weird macho resistance to and scorn of anyone using the mental tools we have available is why men kill themselves at a much higher rate. There is no award for who struggled the most in life.
Do they always work? No. Do they work for a lot of people? Sure do. Can they be replaced with diet and exercise and different circumstances? Yeah, sometimes. Is that realistic? Not usually.
They aren't magic. They don't forcibly make you happy or change your personality or make your problems go away. You still have to do the work. But they're a crane to help lift the crushing weight you're under so you can shimmy out from under it.
If you don't want to use them, fine, but not using them doesn't make you a better person.
If you want chats to share info, then use a project.
Yes, projects have their uses. But as an example: I do Python across many projects and non-projects alike. I don't want to need to tell ChatGPT exactly how I like my Python each and every time, or with each project. If it was just one or two items like that, fine, I could update its custom instruction personalization. But there are tons of nuances.
The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler" it knows I use home assistant, I've done XYZ projects, I prefer python, I like DIY projects to a certain extent but am willing to buy in which case be prosumer. Etc. Etc. It's more like a real human assistant, than a dumb-bot.
I've found a good balance with the global system prompt (with info about me and general preferences) and project level system prompts. In your example, I would have a "Python" project with the appropriate context. I have others for "health", "home automation", etc.
Maybe if they worked correctly they would be. I've had answers to questions be influenced needlessly by past chats and I had to tell it to answer the question at hand and not use knowledge of a previous chat that was completely unrelated other than being a programming question.
I could not disagree more. A major failure mode of LLMs in my experience is their getting stuck on a specific train of thought. Being forced to re-explain context each time is a very useful sanity check.
I remember having conversations asking ChatGPT to add and remove entries from it, and it eventually admitting it couldn’t directly modify it (I think it was really trying, bless its heart) - but I did find a static memory store with specific memories I could edit somewhere.
Say I’m interested in some condition and want to know more about it so I ask a chatbot about it.
It decides “asking for a friend” means I actually have that condition and then silently passes that information on to data brokers.
Once it’s in the broker network it’s truth.
We lack the proper infrastructure to control our own personal data.
Hell, I bet there isn't anyone alive who can even name every data broker, let alone contact them all to police what information they're passing around.
The former shows you things people (hopefully) have written.
The latter shows you a made-up string of text "inspired by" things people have written.
I guess unless you have an offline system you are at the mercy of whoever is running the services you use.
Googling allows you to choose the sources you trust, AI forces you to trust it as a source.
I know in Europe we have the GDPR regulations and in theory you can get bad information corrected but in practice you still need to know that someone is holding it to take action.
Then there's laundering of data between brokers.
One broker might acquire data via dubious means and then transfer it to another. In some jurisdictions, once that happens the second company can do what they like with it without having to worry about the original source.
ChatGPT may have picked that up and may "give" people ADHD for no good reason.
The AI models are just tools, but the providers who offer them are not just providing a tool.
This also means that if you run the model locally, you're the one liable. I think this makes the most sense and is a fairly simple place to draw the line.
ChatGPT is just supposed to "work" for the lay person, and quite often it just doesn't. OpenAI is already being sued by people for stochastic parroting that ended in tragedy. In one case they've tried to use the rather novel affirmative defense that they're not liable because using ChatGPT for self-harm was against the terms of service the victim agreed to when using the service.
If every building I went to in the US had ramps and elevators even though I'm not in a wheelchair, would it be "fucked up" that the building and architects assume I'm a cripple?
There's just as much meaning in ChatGPT saying "As you said, you have ADHD" as a building having an elevator.
In the training data for ChatGPT, the word ADHD existed and was associated with something that people call each other online, cool. How deep.
Anyway, I do assume every single user of this website, including myself, has autism (possibly undiagnosed), so do with that information what you will. I'm pretty sure most HN posters make the same assumption.
It’s probably a very human trait to do that but it is a bad habit.
So I'm not entirely surprised that an LLM would start assuming that the user has ADD, because that's what part of its training data suggests it should.
The issue is that it doesn't apply here, as it's neither a person nor a coherent remembering/thinking being.
"Thinking" models are basically just a secondary separately prompted hidden output that prefaces yours so your output is hopefully more aligned to what you want, but there's no magic other than more tokens and trying what works.
That's not to say that it's better than doctors or even that it's a good way to address every condition. But there are definitely situations where these models can take in more information than any one doctor has the time to absorb in a 12-minute appointment and consider possibilities across silos and specialties in a way that is difficult to find otherwise.
In theory, the Dutch system will take care of you more quickly for "real" emergencies, as their "urgent care" (spoedpost) is heavily gatekept and you can only walk into a hospital if you're in the middle of a crisis. I tried to walk into the ER once because I needed an inhaler and they told me to call the hotline for the urgent care... this was a couple of months after I moved.
That said, I much prefer paying €1800/year in premiums with a €450 deductible compared to the absolute shitshow that is healthcare in the USA. Now that I've figured out how to operate within the system, it's not so bad. But when you're in the middle of a health crisis, it can be very disorienting to try and figure out how it all works.
When people are forced to have a consultation, diagnosis, and treatment in 20 minutes, things are rushed and missed. Amazing things happen when trained doctors can spend unlimited time with a patient.
And of course, GPs typically diagnose more common problems, and refer patients to specialists when needed. Specialists have a lower volume of patients, and are able to take more time with each person individually.
While some people are impacted by rare or complex medical conditions that isn't the norm. The health and wellness issues that most consumers have aren't even best handled by physicians in the first place. Instead they could get better results at lower cost from nutritionists, personal trainers, therapists, and social workers.
AI might provide the most scalable way to give this level of access/quality to a much wider range of people. If we integrate it well and provide easy ways for doctors to interface with this type of system, it should be much more scalable, as verification should be faster.
[1] https://petrieflom.law.harvard.edu/2022/03/15/ama-scope-of-p...
Where? According to "International variations in primary care physician consultation time: a systematic review of 67 countries" Sweden is the only country on the planet with an average consultation length longer than the US.
"We found that 18 countries representing about 50% of the global population spend 5 min or less with their primary care physicians."
This is the problem with all the old people: massive costs like this. Now there's the next thing to go to the hospital for.
I'm constantly amazed at the attitude that doctors are useless and that their multiple years of medical school and practical experience amounts to little more than a Google search. Or as someone put it, "just because a doctor messed up once it doesn't mean that you are the doctor now".
To me it's crazy that doctors rarely ask me if I'm taking any medications for example, since meds can have some pretty serious side effects. ChatGPT Health reportedly connects to Apple Health and reads the medications you're on; to me that's huge.
This sounds very strange to me. Every medical appointment I've ever been to has required me to fill out an intake form where I list medications I'm taking.
The best part is they always immediately start badly explaining it to us like we’ve never heard of it either.
Between that and having concerns repeatedly dismissed before we secured a diagnosis, my view of Dr. Google has sincerely changed.
I would in no way trust a doctor over ChatGPT at this point. At least with ChatGPT I can ask it to cite the sources proving its conclusions. Then I can verify them. I can’t do that with a doctor it’s all “trust me bro”
Eventually, one super-nerdy intern walking rounds with the resident in the teaching hospital remembered a paper she had read, mentioned it during the case review, and they ran tests which confirmed it. They began a course of treatment and my daughter now lives normally (with the aid of daily medication.)
I fed a bunch of the early tests and case notes to ChatGPT and it diagnosed the disease correctly in minutes.
I surely wish we had had this technology a dozen years ago.
(I know, the plural of anecdote is not data.)
i mean; i kinda get the concerns about misleading people but … are people really that dumb? okay if it’s telling you to drink more water, common sense. If you’re scrubbing up to perform an at home leg amputation because it misidentified a bruise then that’s really on you.
Yes, absolutely. The US has measles back in rotation because people are "self-educating" (aka taking to heart whatever idiocy they read online without a 2nd thought), and you think people self diagnosing with a sycophant sentence generator is anything but a recipe for disaster?
I'm on statins that have side effects that I'm experiencing. That's a common thing. ChatGPT was useful for me to figure out some of that. I've had other minor issues where even just trying to understand what the medication I'm being prescribed is supposed to do can be helpful. Doctors aren't great at explaining their decisions. "Just take pill x, you'll be fine".
Doctors have to diagnose patients in a way that isn't that different from how I would diagnose a technical issue. Except they are starved for information and have to get all their information out of a 10-15 minute consult with a patient that is only talking about vague symptoms. It's easy to see how that goes wrong sometimes or how they would miss critical things. And they get to deal with all the hypochondriacs as well. So they have to poke through that as well and can't assume the patient is actually being truthful/honest.
LLMs are useful tools if you know how to use them. But they can also lead to a lot of confirmation bias. The best doctors tell you what you need to hear, not what you want to hear. So, tools like this are great and now a reality that doctors need to deal with whether they like it or not.
Some of the Covid crisis intersected with early ChatGPT usage. It wasn't pretty. People bought into a lot of nonsense that they came up with while doom scrolling Reddit, or using early versions of LLMs. But things have improved since then. LLMs are better and less likely to go completely off the rails.
I try to look at this a bit rationally: I know I don't get the best care possible all the time because doctors have to limit time they spend on me and I'm publicly insured in Germany so subject to cost savings. I can help myself to some extent by doing my homework. But in the end, I have to trust my doctor to confirm things. My mode is that I use ChatGPT to understand what's going on and then try to give my doctor a complete picture so he has all the information needed to help me.
Either way, I’m excited for some actual innovation in the personal health field. Apple Health is more about aggregating data than actually producing actionable insights. 23andme was mostly useless.
Today I have a ChatGPT project with my health history as a system prompt and it’s been very helpful. Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided (“nothing to worry about”, etc.) ChatGPT was able to reference papers and do data analysis which was pretty amazing, right from my phone (e.g fitting my data to a model from a paper and spitting out a plot).
There's a reason this data is heavily regulated. It's deeply intimate and gives others enormous leverage over you. This is also why the medical industry can charge premium rates while often providing poor service. Something as simple as knowing whether you need insulin to survive might seem harmless, but it creates an asymmetric power dynamic that can be exploited. And we know these companies will absolutely use this data to extract every possible gain.
But I don’t know if I should be denied access because of those people.
Did you previously write this exact comment before?
That's the majority of people, though. If you really think that, I assume you wouldn't have a problem with needing to be licenced to have this kind of access, right?
If what you're suggesting is a license that would cost money and/or a non-trivial amount of time to obtain, it's a nonstarter. That's how you create an unregulated black market and cause more harm than leaving the situation alone would have. See: the wars on drugs, prostitutes, and alcohol.
I think they can design it to minimize misinformation or at least blind trust.
There's no way to design it to minimise misinformation, the "ground truth" problem of LLM alignment is still unsolved.
The only system we currently have to allow people to verify they know what they are doing is through licencing: you go to training, you are tested that you understand the training, and you are allowed to do the dangerous thing. Are you ok with needing this to be able to access a potentially dangerous tool for the untrained?
If you want working regulation for this, it will need to focus on warnings and damage mitigation, not denying access.
If you don't mind sharing, what kind of useful information is ChatGPT giving you based off of a photo that your doctor didn't give you? Could you have asked the doctor about the data on the instrument and gotten the same info?
I'm mildly interested in this kind of thing, but I have a severe health anxiety and do not need a walking hypochondria-sycophant in my pocket. My system prompts tell the LLMs not to give me medical advice or indulge in diagnosis roulette.
In another case I uploaded a CSV of CGM data; it analyzed the data and identified trends (e.g. Saturday morning blood sugar spikes). All in five minutes on my phone.
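For anyone curious, that kind of trend-spotting is straightforward to reproduce locally with pandas. The file and column names below ("cgm_export.csv", "timestamp", "glucose_mg_dl") are hypothetical; adjust them to whatever your CGM export actually uses:

    # Sketch: find day-of-week / hour-of-day glucose patterns in a CGM export.
    import pandas as pd

    df = pd.read_csv("cgm_export.csv", parse_dates=["timestamp"])
    df["weekday"] = df["timestamp"].dt.day_name()
    df["hour"] = df["timestamp"].dt.hour

    # Mean glucose by weekday and hour; a Saturday-morning spike shows up
    # as elevated values in the Saturday row for the morning hours.
    pivot = df.pivot_table(values="glucose_mg_dl", index="weekday",
                           columns="hour", aggfunc="mean")
    print(pivot.loc["Saturday", 6:11])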
What evidence do you have that providing your health information to this company will help you or anyone (other than those with financial interest in the company)?
There is a very real, near definite, chance that giving your, and others', health data to this company will hurt you and others.
Will you still hold this, "I personally don’t care who has access to my health data", position?
Maybe you never find out why, but your car insurance drops you due to the risk that you'll have a cardiac event while driving. Their AI flagged you.
You need a new job but the same AI powers the HR screening and denies you because you'll cost more and might have health problems. You'd never know why.
You try to take out a second mortgage on the house to pay for expenses, just to get back on your feet, but the AI-powered risk officer judges your payback potential to be 0.001% underneath the target, and you are denied.
The previously treatable heart condition is now dire due to the additional stress of no job, no car and no house and the financial situation continues to erode.
You apply for assistance but are denied because the heart condition is treatable and you're then obviously capable of working and don't meet the standard.
Perhaps you were given some medication that is later proven harmful. Maybe there’s a sign in your blood test results that in future will strongly correlate with a condition that emerges in your 50s. Maybe a study will show that having no appendix correlates with later issues.
How confident are you that the data will never be used against you by future insurance, work screening, dating apps, immigration processes, etc
It seems like an easy fix with legislation, at least outside the US, though. Mandatory insurance for all with reasonable banded rates, and maximum profit margins for insurers?
Your comment is extraordinarily naive.
1. Is transsexual but does not tell anybody they are, and it is also not blatantly obvious
2. Writes down in a health record that they are transsexual (instead of whatever sex they are now)
3. Someone doxxes they/them medical records
4. Because of 3, and only because of 3, people find out that said person is transsexual
5. And then ... the government decides to persecute they/them
Let's be real, you're really stretching it here. You're talking about a 0.1% of a 0.1% of a 0.1% of a 0.1% of a 0.1% situation here.
You could try to answer that instead of making up a strawman.
Dialogue 101 but some people still ignore it.
‘Being able to access people’s medical records is just another tool in law enforcement’s toolbox to prosecute people for stigmatized care'
They are already using the legal system in order to force their way into your medical records to prosecute you under their new 'anti-abortion' rulings.
https://pennsylvaniaindependent.com/reproductive_rights/texa...
Right. So able bodied, and the gender and race least associated with violence from the state.
> being discriminated against for insurance if you have a drug habit
"drug habit", Why choose an example that is often admonished as a personal failing? How about we say the same, but have something wholly, inarguably, outside of your control, like race, be the discriminating factor?
Your medical records may include your DNA.
The US once had a racist legal principle called the "one drop rule": https://en.wikipedia.org/wiki/One-drop_rule
Now imagine a, let's say, 'sympathetic to the Nazi agenda' administration takes control of the US government's health and state-sanctioned-violence services. They decide to use those tools to address all of the, what they consider, 'undesirables'.
Your DNA says you have "one drop" of the undesirables' blood, from some ancient ancestor you were unaware of, and this admin tells you they are going to discriminate against your insurance because of it, based on some racist pseudoscience.
You say, "but I thought i was a 30 something WHITE male!!" and they tell you "welp, you were wrong, we have your medical records to prove it", you get irate that somehow your medical records left the datacenter of that llm company you liked to have make funny cat pictures for you and got in their hands, and they claim your behavior caused them to fear for their lives and now you are in a detention center or a shallow grave.
"That's an absurd exaggeration." You may say, but the current admin is already removing funding, or entire agencies, based on policy(DEI etc) and race(singling out Haitian and Somali immigrants), how is it much different from Jim Crow era policies like redlining?
If you find yourself thinking, "I'm a fitness conscious 30 something white male, why should I care?", it can help to develop some empathy, and stop to think "what if I was anything but a fitness conscious 30 something white male?"
>>>>>> Are you giving your vitals to Sam Altman just like that?
>>>>> Yes, if it will help me and others
>>>> What evidence do you have that providing your health information to this company will help you or anyone (other than those with financial interest in the company)
>>> I’m definitely a privacy fist person, but can you explain how health data could hurt you, besides obvious things like being discriminated against for insurance if you have a drug habit or whatever.
>> [explanation of why it might be worrisome]
> These points seem to be arguments against giving your health data to anybody, not just to an AI company.
I did not make any claims that it was useless; the context I was responding to was someone being dubious that there were risks after being asked whether they had any reason to assume that it would be beneficial to share specific info, and following that a conversation ensued about why it might make sense to err on the side of caution (independently of whether the company happens to be focused on AI).
To be explicit, I'm not taking a stance on whether the experiences cited elsewhere in the thread constitute sufficient evidence. My point isn't that there is no conceivable benefit, but that the baseline should be caution about sharing medical info, and then figuring out if there's enough of a reason to choose otherwise.
In this case, I suspect that the classic biases of HN (pro-privacy and anti-AI) might interact to dismiss the value that could be provided by a specialized medical LLM/agent (despite indications that an unspecialized one is already helpful!) while rightly pointing out the risks of sharing sensitive data.
What if you have to pay more for health insurance because of the collected data, or what if you can't get certain insurance at all?
Most people don't have a problem with someone getting their medical data, but with that information being used to their disadvantage.
That's the trouble with AI. You can only be impressed if you know a subject well enough to know it's not just bullshitting like usual.
You can have the same problem with doctors who don't give you even 5 minutes of their time and who don't have time to read through all your medical history.
I live in a place where I can get anything related to healthcare and even surgery within the same day at an affordable price, and even here I've wasted days going to various specialists who just tried to give me useless meds.
Imagine if one lives in a place where you need an appointment 3 months in advance, you most certainly will benefit from going there showing your last ChatGPT summary.
I think the more plausible comment is "I've been protected my whole life by health data privacy laws that I have no idea what the other side looks like".
Quite frankly, this is even worse as it can and will override doctors orders and feed into people's delusions as an "expert".
(Life insurance companies are different.)
Edit: Literally on the HN front page right now. https://news.ycombinator.com/item?id=46528353
I’ve also lived in places where I don’t have a choice in doctor.
What's the worst that can happen with OpenAI having your health data? Vs the best case? You all are no different from AI doomers who claim AI will take over the world.. really nonsensical predictions giving undue weight to the worst possible outcomes.
There are no doubt many here that might wish they had as consequence-free a life as this question suggests you have had thus far.
I'm happy for you, truly, but there are entire libraries written in answer to that question.
I think in the US, you get out of the system what you put into it - specific queries and concerns with as much background as you can muster for your doctor. You have to own the initiative to get your reactive medical provider to help.
Using your own AI subscription to analyze your own data seems like immense ROI versus a distant theoretical risk.
It's always been interesting to me how religiously people manage to care about health data privacy, while not caring at all if the NSA can scan all their messages, track their location, etc. The latter is vastly more important to me. (Yes, these are different groups of people, but on a societal/policy level it still feels like we prioritize health privacy oddly more so than other sorts of privacy.)
23andme was massively successful in their mission.
Sidenote: their mission was not about helping you understand your genomic information.
Even after the first MRI essentially ruled this out, she fed the MRI to ChatGPT, which basically hallucinated that a small artifact of the scan was actually a missed tumor and that she needed another scan. Thousands wasted on pointless medical expenses.
Having friends in healthcare, they have mentioned how common this is now: someone coming in and demanding a set of tests based on ChatGPT. They have explained that (a) tests with false positives can actually be worse for you (they trigger even more invasive tests) and (b) insurance won't cover any of your ChatGPT requests.
Again, being involved in your care is important but disregarding the medical professional in front of you is a great way to set yourself up for substandard care.
Most doctors' advice boils down to "drink some water and take a painkiller", delivered after glancing for 15 seconds at my medical history before they dedicate 7 minutes to me, after which they move on to yet another patient.
So compared to this, an AI that can analyze all my medical history, and has access to the entirety of publicly available medical research, could be a very good tool to have.
But at the same time technofeudalism, dystopia, etc.
As an aside, I'm very sorry for what you're going through. Empathy is easy when you've had something similar! I'll say that in my case removing a tooth that was impinging on a nerve did have substantial benefits that only became clear months down the line. I'm not saying that will happen for you, but a bit of irrational optimism seems to be an empirically useful policy.
People get better healthcare outcomes with strong self advocacy. Not everyone is good at that.
A parallel exists where not everyone negotiates for a better job offer.
Even worse, they were almost always wrong about the diagnosis, and I'd find myself on 3 or 4 rounds of antibiotics, or would go to the pharmacy to pick something up and they'd let me know the cocktail I had just been prescribed had dangerous contraindications. I finally stopped going when I caught a doctor searching WebMD when I was on my fourth return visit for a simple sinus infection that had turned into a terrible ear infection.
My next doctor wasn't much better. And I had really started to lose trust in the medical system and in medical training.
We moved a few years ago to a different city, and I hadn't found a doctor yet. One day I took sick with something, went to a local walk-in clinic in a strip mall used mostly by the local underprivileged immigrant community.
Luck would have it I now found an amazing doctor who's been 100% correct in every diagnosis and line of care for both me and my wife since - including some difficult and sometimes hard to diagnose issues. She has basically no equipment except a scale, a light, a sphygmomanometer, and a stethoscope. Does all of her work using old fashioned techniques like listening to breathing or palpation and will refer to the local imaging center or send out to the local lab nearby if something deeper is needed.
The difference is absolutely wild. I sometimes wonder if she and my old doctors are even in the same profession.
I guess what I'm trying to say is, if you don't like your doctor, try some other ones until you find a good one, because they can be a world difference in quality -- and don't be moved by the shine of the office.
Forgive my brusqueness here, but this could only be written by someone who has not yet been seriously ill.
My 12 year old daughter (correctly) diagnosed herself to a food allergy after multiple trips to the ER for stomach pains that resulted in “a few Tylenol/Advil with a glass of water”.
I would have mentioned that if it happened. It didn’t.
ERs exist to do a few things:
1. Prevent someone from dying
2. Treat severe injuries
3. Identify if what someone is experiencing is life-threatening or requires immediate treatment to prevent their condition worsening
4. Provide basic treatment and relief for a condition which is determined not to be an imminent threat
In particular, they are not for diagnosing chronic conditions. If an ER determines that someone's stomach pain is not an imminent, severe threat to their health, then they are sending them out of the ER with medication for short-term relief in order to make room for people who are having an emergency. The ER doc I know gets very annoyed at recurring patients who expect the ER to help them diagnose and treat their illness. If you go to the ER, they send you home, and the thing happens again, make an appointment with a physician (and also go to the ER if you think it's serious).
Unfortunately, the medical system is very confusing and difficult to navigate. This is a big part of why so many people end up at ERs who should be making appointments with non-emergency doctors - finding a doctor and making appointments is often hard and stressful, while an ER will look at anyone who walks through the doors.
There are a lot of health related issues humans can experience that affect their lives negatively that are not life threatening.
I'm gonna give you a good example: I've suffered from mild skin-related issues for as long as I can remember. It's not a big deal, but I want my skin to be in better condition. I went through tens of doctors and they all did essentially some variation of the Tylenol equivalent for skin treatment. With AI, I've been able to identify the core problems that every licensed professional overlooked.
This is both a liability and a connectedness issue.
I once went to the UCSF ER 2x in the same weekend for an issue. On day one, I was told to take some ibuprofen and drink water, nothing to worry about. I went home, knowing this was not a solution. Day two I returned to the ER because my situation had gotten 10x worse. The (new) doctor on day 2 said "we need to resolve this asap" and I was moved into a room where they gave me a throat-numbing breathing apparatus and then shoved a massive spinal tap needle in my throat to drain 20ml of fluid from behind my tonsil. I happened to bump into the doctor from the previous day on my way out and gave him a nice tap on the shoulder saying, thanks for all the help, doc. UCSF tried to bill me 2x for this combined event, but I told them to get fucked due to the negligence on day one. The billing issue disappeared.
I had a Jeep that I took into the shop 2, 3, 4 times for a crazy issue. The radio, seat belt chime, and emergency flashers all become possessed at the same time. Using my turn signal would cause the radio to cut in and out. My seat belt kept saying it was disconnected. No one could fix it. What was the issue? A loose ground on the chassis that all of those different systems were sharing. https://www.wranglerforum.com/threads/2015-rubicon-with-elec...
These are just two examples from my life, but there are countless. I just do everything myself now, because I trust no one else.
I still trust doctors, but this made me much more demanding towards them.
If you're getting bad troubleshooting it's because you're going to places that value moving units (people) as fast as possible to pay the rent. I assure you neither most mechanics nor most doctors are 'the worst debuggers in humankind'. There are plenty of mechanics and doctors that will apply a 65% solution to you and hope that it pays off, but they're far from the majority.
Most of the time, that's the correct approach. However, you can actually do better by avoiding painkillers, since they can have side effects. There are illnesses that are easily diagnosable and have established medications; doctors typically prescribe what pharmaceutical companies have demonstrated to them. But the rest of the "illnesses," which make up the majority, are pretty much still a mystery.
For the most part, neither you nor your doctor can do much about these. Modern medicine often feels like just a painkiller subscription.
I’ve had longstanding GI issues. I have no idea how to describe my symptoms. They sure seem like a lot of things, so I bring that list to my doc and I’m met with “eh, sounds weird”.
By contrast, I solved my issues via a few sessions with Claude. I was able to rattle off a whole list of symptoms, details about how it's progressed, diets I've tried, recent incidents, supplements/meds I've taken. It comes up with hypotheses, research references, and forum discussions (forums are so useful for understanding, even if they can be dangerously wrong). We dive into the science of those leading causes to understand the biochemistry involved. That leads to a really deep understanding of what we can test.
Turns out there’s a very clear correlation with histamine levels and the issues I deal with. I realize a bunched stuff that I thought was healthy (and is for most people) is probably destroying my health. I cut those out and my GI issues have literally disappeared. Massively, massive life improvement with a relatively simple intervention (primarily just avoiding chicken and eggs).
I tell my doctor this and it’s just a blank stare “interesting”.
Quantity and environment also played a huge role. If my histamine levels were low, I could often tolerate many of my trigger foods. However, if they were high (like during allergy season), the same foods would trigger me.
It took a very, very long time to narrow in on the underlying issue.
You could run LLMs locally to mitigate this. Of course, running large models like GLM-4.6 is not feasible for most people, but smaller models can run even on MacBooks and sometimes punch way above their weight.
That's alright; if this idea takes off, your insurance won't see any need to pay for you to speak to one when they can dump you onto "AI" instead.
Connected thinking, interest in helping, and extensive depth or breadth of knowledge in anything beyond what they need for their chosen specialization's day-to-day work are rare and coincidental.
Is that all?!
Also, everyone is just “normal people” in aggregate.
- Humans are self healing - if a doctor does absolutely nothing, most issues will resolve on their own or, if they're chronic (e.g. back pain) they won't be worse off compared to the alternative. I'm in a country with subsidised health care, and people go to the doctor immediately for absolutely anything. A doctor could have a 99% success record by handing out placebos.
- Most patients have common issues. I.e. maybe 30 people visit the clinic on a given day, it's possible that all 30 of them have come because they have the flu. Doctors are human, nobody's going to investigate potential pneumonia 30 times a day every day for 6 months. So doctors don't: someone comes in and is coughing, they say it's flu, on to the next patient. If the person really has pneumonia, they'll come back when it gets worse.
- Clinics are overbooked. I don't know if it's licensing, GDP, artificial scarcity, cost regulations or what, but doctors probably don't actually have time to investigate anyways.
- Doctors don't receive any rigorous continuing education. I'm sure there are some requirements, but I've gone to doctors in the last year and gotten the "stress causes ulcers" explanation for what turned out to be food sensitivity issues (there was no visible ulcer, mind you, so it was concluded that it was an invisible ulcer). Slow, gradual maintenance and heavy reading are hard things that humans are naturally bad at.
- Patients don't want to hear the truth. Lifestyle changes, the fact that nothing can be done, that there are no pills to cure you, etc. Even if doctors could give a proper diagnosis, it could end up being bad PR, so doctors are conditioned away from it.
- Doctors don't follow up - they get absolutely no feedback whether most of their treatments actually work. Patients also don't come back when their issue is resolved, but even if they do doctors don't care. Skin issue, doctor prescribed steroidal cream, redness disappeared, doctor declared I was cured, redness came back worse a week later. As a scientific field, there's no excuse for anything but evidence based medicine, but I haven't seen a single doctor even make an attempt to improve things statistically.
I've heard things like, doing tests for each patient would be prohibitively expensive (yes, but it should at least be an option and patients can pay for it) or the amount of things medicine can actually cure today is very small so the ROI would be low for additional work (yes, but in the long term the information could result in furthering research).
I think these are obvious and unavoidable issues (at least with the current system), but at the same time if a doctor who ostensibly became a doctor out of a desire to help people willingly supports this system I think they share some of the blame.
I don't trust AI. Part of me goes, well what if the AI suddenly demands I have some crazy dental surgery? And then I go, wait, the last dentist I went to said I need some crazy dental surgery. That none of the other 3 dentists I went to after that even mentioned. And as you said an AI will at least consider more info...
So I do support this as well. I'd like to have an AI do a proper diagnosis, then maybe a human rubber stamp it or handle escalation if I think there's something wrong...
But the question I ask myself is: is this better than the alternative? If I wasn't asking ChatGPT, where would I go to get help?
The answers I can anticipate are: questionably trustworthy web content; an overconfident friend who may have read questionably trustworthy web content; my mom, who is referencing health recommendations from 1972. And as best I can imagine, LLMs are likely to provide health advice that's at least as good as, and probably better than, any of those alternatives.
With that said, I acknowledge that people are likely inclined to trust ChatGPT the way they'd trust a licensed medical provider, at which point the comparison becomes murkier, especially with higher-severity health concerns.
When I got worried about an exercise suggestion from an app I'm using (the weight prescribed for prone dumbbell leg curls), ChatGPT confirmed there is a suggested upper limit on weight for that exercise and that I should switch it out. I appreciate not injuring myself. (Gemini gave a horrible response, heh...)
ChatGPT is dangerous because it is still too agreeable, and when you go outside what it knows the answers get wrong fast. But when it is useful, it is very useful.
It's what you do with that information that is important - the correct path is to take your questions to a medical professional. Only a medical professional can give you a diagnosis; they can also answer other questions and address incorrect information.
ChatGPT is very good at providing you with new avenues to follow up on; it may even help discover the correct condition a doctor had missed. However, it is not able to deliver a diagnosis - always leave that to a medical professional.
This actually differs very little from people Googling their symptoms, where the advice was the same: take the new information to your medical professional, and remember to get a second opinion (or more) for any serious medical condition, or for issues which do not seem to be fully resolved.
There's no denying the positive cases of people actually being helped by ChatGPT. It's well known that doctors often dismiss symptoms of rare conditions, and those patients find far more success on the internet because people with similar conditions tend to gather there. This effect will repeat with ChatGPT.
Is this a serious question? Can't you call or visit a doctor?
It's like telling someone to ask their doctor about nutrition. It's not in their scope any longer. They'll tell you to try things and figure it out.
The US medical industry abdicated that role a long time ago. Doctors do something, I'm sure, but discuss/advise/inquire isn't really one of them.
This was multiple doctors, in multiple locations, in various modalities, after blood tests and MRIs and CT scans. I live with literally zero of my issues resolved even a little tiny bit. And I paid a lot of money out of pocket (on top of insurance) for this experience.
Either way, nobody is arguing that doctors aren't great. Doctors are great!
The argument is that doctors are not accessible enough and getting additional advice from AI is beneficial.
AI is a disaster waiting to happen. As it is simply a regurgitation of what has already been said by real scientists, researchers, and physicians, it will be the 'entry drug' to advertise expensive therapies.
Thank goodness our corporations have not stooped to providing healthcare in exchange for blood donation, skin donation, or other organ donation. But I can imagine UnitedHealthcare merging with Blackstone so that people who need healthcare can get 'health care loans'.
Actually, we have made huge progress in the war on cancer, so this example doesn’t seem to support your narrative.
Last time I was in the ER, I accompanied my wife; we got bounced due to the lack of an appropriate doctor on site, she ended up getting help in another hospital, and I came back home with a severe case of COVID-19.
Related: every pediatrician I've been to with my kids during the flu season says the same thing: if you can't get an appointment in a local clinic, stay home; avoid hospitals unless the kid develops life-threatening symptoms, as visiting such places carries a high risk of the kid catching something even worse (usually RSV).
We used to observe that our kid(s) got sick every time we flew over the winter break to visit family. We no longer have this problem. (we do still have kids.) Not getting sick turns out to be really quite nice. :-) Hanging out in the pediatrician's office surrounded by snotty, coughing children who are not mine...
They only go when it's urgent/very worrying.
I'm not sure if this has switched entirely to video calls or not, but when it became popular it was a great way to avoid overloading urgent care and general physicians with non-urgent help requests.
Or, I could've gone to a doctor and overloaded our healthcare system even more.
ChatGPT serves as a good sanity check.
Where I live, doctors are only good for life threatening stuff - the things you probably wouldn't be asking ChatGPT anyway. But for general health, you either:
1. Have to book in advance, wait, and during the visit doctor just says that it's not a big deal, because they really don't have time or capacity for this.
2. You go private, the doctor goes on a wild hunt with you, you spend a ton of time and money, and then 3 months later you get the answer ChatGPT could have told you in a few minutes for $20/mo (and probably with better-backed, more recent research).
If anything, the only time ChatGPT answers wrong on health related matters is when it tries to be careful and omits details because "be advised, I'm not a doctor, I can't give you this information" bullshit.
To an MD?
These are self-inflicted problems; we should work on them and improve them, not give up and rely on LLMs for everything.
LLMs might make doctors cheaper (and reduce their pay) by lowering demand for them. The law of supply and demand then implies that care will be cheaper. Do we not want cheaper care? Similarly, LLMs reduce the backlog, so patients who do need to see a doctor can be seen faster, and they don't need as many visits.
LLMs can also break the stranglehold of medical schools: It's easier to become an auto-didact using an LLM since an LLM can act like a personal tutor, by answering questions about the medical field directly.
LLMs might be one of the most important technologies in medicine.
Who's responsible when the LLM fucks up?
&c.
All of your points sound like the classic junior "I can code that in 2 days" naive take on a problem.
Pros: more time spent with patients, access to a physician basically 24/7, sometimes included are other amenities (labs, imaging, sometimes access to rx at doctors office for simple generics, gym discounts, eye doctor discounts, etc)
Cons: it's an extra yearly cost to get access to that physician, ranging from a few hundred US dollars to a few thousand ($1.5k-3k), or sometimes tens of thousands or more; those who aren't financially lucky enough to be that well off don't get such access.
—-
That said, some of us do this on the side to augment our salary a bit, as medicine has become too much of a business based on quantity and not quality. It's sad to hear from patients that a simple small-town family doc like myself can spend 20-30 minutes with a patient when other providers barely spend 3 minutes. My regular patients usually get 20-30 minutes with me on a visit, unless it's a quick one for refills, and I don't leave until they are done and have no questions. My concierge patients get 1 hour minimum, and longer if they like. I offer free in-depth medical record review, where I sometimes get boxes of old records to review someone's medical history if they are a new concierge patient.
Had a lady recently who had dealt with neuropathy and paresthesias for years. Normal blood counts. Long story short: she had moderate iron deficiency, vitamin B6 deficiency from a history of taking isoniazid for TB in a different country, and biopsy-proven celiac disease. The neuropathy is basically gone with iron and B6 supplements and a celiac diet, after I recommended a GI eval for endoscopy. It takes time to dig into charts like this, and CMS doesn't pay the bills to keep the clinic lights on for patients like that all the time. This is why we are in such a bad place healthcare-wise in the USA: we have chosen quantity over quality, and the powers that be are number crunchers, not actual health care providers. It serves us right for letting admins take over, and we are all paying the price.
There's so much more I want to say, but I don't think many will read this. If you did read this and don't like your doctor, please look around. There are still some of us out there who care about quality medicine and try our best to spend time with the patient. If you got one of those "3 minute doctors," look for another, or consider establishing care with a resident clinic at an academic center, where you can be seen by resident doctors and their attending physicians. It's not the most efficient, but I can almost guarantee those resident physicians will spend a good chunk of time with you to help you as much as they can.
That's how it works here too, in PCP-Centric plans. The PCP gets paid, regardless if the patient shows up or not. But is also responsible to be the primary contact point for the patient with the health system, and referrals to specialists.
Obviously a GP refers to specialists when necessary, but he is qualified to triage issues and perform initial treatment in many cases.
But, presumably for liability reasons and out of a genuine attempt to get me the best care possible, they _prefer_ to send me off to a specialist. Either way I'm not being treated until the specialist has time, which takes a couple of months at least.
And then 6+ months to be seen by a specialist.
For better or worse, even before the advent of LLMs, people were simply Googling whatever their symptoms were and finding a WebMD or Mayo Clinic page. Well, if they were lucky. If they weren't lucky, they would find some idiotic blog post by someone who claimed they cured their sleep apnea by drinking cabbage juice.
LLMs are good for advice 95% of the time, and soon that'll be 99%. But it is not the job of OpenAI or any LLM creator to determine the rules of what good healthcare looks like.
It is the job of the government.
We have certification rules in place for a reason. And until we can figure out how to independently certify these quasi-counselor robots to some degree of safety, it’s absolutely out of the question to release this on the populace.
We may as well say “actually, counseling degrees are meaningless. Anyone can charge as a therapist. And if they verifiably recommend a path of self-harm, they should not be held responsible.”
Can someone please ELI5 - why is this a training issue, rather than basic design? How does one "train" for this?
But what they do is exfiltrate facts and emotions from your chats to create a profile of you and feed it back into future conversations, to make them more engaging and give them a personal feeling. This is intentionally programmed.
>every time the whole conversation history gets reprocessed
Unless they're talking about the memory feature, which is some kind of RAG that remembers information between conversations.
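Purely as speculation about how such a memory feature could work (none of this is confirmed about ChatGPT), here is a toy RAG-style sketch in Python: distill facts from past chats, embed them, and retrieve the closest ones into each new conversation. The embedding below is a random stand-in, not a real model.

    import numpy as np

    memory = []  # list of (fact, embedding) pairs persisted between chats

    def embed(text):
        # Stand-in embedding: a real system would call an embedding model.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(64)
        return v / np.linalg.norm(v)

    def remember(fact):
        memory.append((fact, embed(fact)))

    def recall(query, k=3):
        # Dot-product similarity against stored facts; the top-k would get
        # injected into the system prompt of the next conversation.
        q = embed(query)
        ranked = sorted(memory, key=lambda fe: -float(fe[1] @ q))
        return [fact for fact, _ in ranked[:k]]

    remember("user reports GI issues correlated with histamine")
    remember("user helped a relative plan secondary-school lessons")
    print(recall("user's health background"))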
Btw, context caching can overcome this, e.g. https://ai.google.dev/gemini-api/docs/caching . However, this means the (large) state needs to be persisted on the server side, so it may have costs associated with it.
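For illustration, a minimal sketch of explicit caching with the google-genai Python SDK, going by the docs linked above; treat the exact parameter names as assumptions, and note cached content has token minimums, so it only pays off for large contexts.

    from google import genai
    from google.genai import types

    client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

    # Imagine a long, stable conversation history worth caching.
    long_history = "patient: ...\nassistant: ...\n" * 10000

    # Pay once to process the big context; the server persists the state
    # and bills for storage as long as the cache lives.
    cache = client.caches.create(
        model="gemini-2.0-flash-001",
        config=types.CreateCachedContentConfig(
            contents=[long_history],
            ttl="3600s",
        ),
    )

    # Later turns reference the cache instead of resending the whole history.
    response = client.models.generate_content(
        model="gemini-2.0-flash-001",
        contents="Summarize what changed since the last update.",
        config=types.GenerateContentConfig(cached_content=cache.name),
    )
    print(response.text)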
I also wonder what the word "foundational" is supposed to mean here.
Not to mention that doctors generally don't have time to explain everything. Recently I've been doing my own research and (important failsafe) running the conclusions by my doctor to validate. Tighter integration between physician notes, ChatGPT conversations, and ongoing biomarkers from e.g. Function and Apple Health would make it possible to craft individualized health plans without requiring six-figure personal-doctor subscriptions.
A great opportunity to improve the status quo here.
Of course - as with software, quality control will be the crux. We don’t want “vibe diagnosing”.
- for liver: https://www.longevity-tools.com/liver-function-interpreter - for insulin resistance: https://www.longevity-tools.com/glucose-metabolism-interpret...
etc
I suspect that will be legally-tested sooner than later.
You also have to imagine that they've got their zero guardrails superpowered internal only next generation bot available to them, which can be used by said lawyer horde to ensure their asses are thoroughly covered. (It'd be staggeringly stupid not to use their AI for things like this.)
The institutions that have artificially capped levels of doctors, strangled and manipulated healthcare for personal gain, allowed insurance and health industries to become cancerous - they should be terrified of what's coming. Tools like this will be able to assist people with deep, nuanced understanding of their healthcare and be a force multiplier for doctors and nurses, of which there are far too few.
It'll also be WebMD on steroids, and every third person will likely be convinced they have stereochromatic belly button cancer after each chat, but I think we'll be better off, anyway.
It's like they one-shot it.
This is why I've had my doctor change their mind between appointments, having had more time to review the data.
Or I get 3 different experts giving me 3 different (contradicting!) diagnoses.
That's also why I always hesitate listening to their first advice.
I cannot imagine a doctor evaluating just one possibility.
While waiting for the ambulance at the rehab center, I plugged all his health data from his MyChart into ChatGPT and described the symptoms. It accurately predicted (in its top two possibilities) a C. diff infection.
Fast forward two days: the ER had prescribed general antibiotics. I pushed the doctors to check for C. diff, and sure enough he tested positive for it - and they got him on the right antibiotics for it.
I think it was just in time as he ended up going to the ICU before he got better.
Maybe they would have tested for C. diff anyway, but this definitely made me trust ChatGPT. Throughout his stay, after every single update in his MyChart, I copy and paste the PDF into the long-running thread about his health.
I think ChatGPT Health being able to import this directly will be a huge game changer. Health and wellness is probably my number one use case for AI.
My dad is getting discharged tomorrow (to a different rehab center, thankfully)
Great work, can't wait to see what's next.
Which makes me think it's likely on the user if what you said actually happened...
For example, "man Googles rash, discovers he has one-in-a-million rare disease" [1].
> Ian Stedman says medical professionals shouldn't dismiss patients who go looking for answers outside the doctor's office - even if they resort to 'Dr. Google.'
> "Whenever I hear a doctor or nurse complain about someone coming in trying to diagnose themselves, it boils my blood. Because I think, I don't know if I'd be dead if I didn't diagnose myself. You can't expect one person to know it all, so I think you have to empower the patient."
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC8084564/
[1] https://www.cbc.ca/radio/whitecoat/man-googles-rash-discover...
Some physicians are absolutely useless and sometimes worse than not receiving any treatment at all. Medicine is dynamic and changes all the time. Some doctors refuse to move forward.
When I was younger I had a sports injury. I was misdiagnosed for months until I did my own research and had the issue fixed with surgery.
I have many more stories of doctors being straight up wrong about basics too.
I see physicians in a major metro area at some of the best hospital networks in the US.
Two years later when I got it fixed the new surgeon said there was nothing left of the old one on the MRI so it must have been torn 1.5-2+ years ago.
On the other hand, to be fair to doctors, I had a phase of looking into supplements and learned the hard lesson that you really need to dig into the research or find a very trusted source to have any idea of what's real, because for a while I definitely thought a few were useful that definitely were not :)
And also to be fair to doctors I have family members who are the "never wrong" types and are always talking about whatever doctor of the day is wrong about what they need.
My current opinion is using LLMs for this, in regards to it informing or misinforming, is no different than most other things. For some people this will be valuable and potentially dramatically help them, and for others it might serve to send them further down roads of misinformation / conspiracies.
I guess I ultimately think this is a good thing because people capable of informing themselves will be able to do so more effectively and, sadly, the other folks are (realistically) probably a lost cause but at the very least we need to do better educating our children in critical thinking and being ok with being wrong.
Not to mention, doctors are absolutely fallible and misdiagnose constantly.
By luck I consulted another specialist, the former doctor not being available at an odd time, and some re-tests helped determine that I needed a different class of medicines. I was better within months.
4 years of wrong medicines and overconfidence from a leading doctor. Now I have a tool to double-check what the doctor has recommended.
I've worked on medical software packages, specifically a drug interaction checker for hospitals. The system cannot be written like a social media website... it has to fail by default, and only succeed when an exact correct solution has been determined. The result must be repeatable given the same inputs; the consequence of getting this wrong is that people die.
Similarly the non-determinism in ChatGPT should be handled at a higher level. It can suggest possibilities for diagnosis and treatment, but you should still evaluate those possibilities with the help of a trained physician.
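To make "fail by default" concrete, here's a minimal sketch of the principle — my own toy example, not the actual hospital system; the interaction table is fabricated placeholder data, not clinical guidance.

    # Deterministic, vetted lookup table: the only source of answers.
    KNOWN_INTERACTIONS = {
        frozenset({"warfarin", "aspirin"}): "major: increased bleeding risk",
        frozenset({"simvastatin", "clarithromycin"}): "major: myopathy risk",
    }
    KNOWN_DRUGS = {drug for pair in KNOWN_INTERACTIONS for drug in pair}

    def check_interaction(drug_a, drug_b):
        a, b = drug_a.strip().lower(), drug_b.strip().lower()
        # Fail by default: anything outside the vetted vocabulary is
        # refused, never guessed at. Same inputs, same answer, every time.
        if a not in KNOWN_DRUGS or b not in KNOWN_DRUGS:
            raise ValueError(f"unknown drug(s): {a!r}/{b!r} - refusing to answer")
        return KNOWN_INTERACTIONS.get(frozenset({a, b}), "no known interaction")

    print(check_interaction("Warfarin", "Aspirin"))  # major: increased bleeding risk

The design choice worth noting: the unknown-drug path raises rather than returning a best guess, which is the opposite of how an LLM behaves by default.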
A diagnostic system should not necessarily be deterministic, because it always operates on incomplete data and it necessarily produces estimates of probability as an output.
Humans are probabilistic systems?! You might want to inform the world's top neuroscientists and philosophers to down tools. They were STILL trying to figure this out but you've already solved it! Well done.
If America wants to take care of its people, it needs to tear down the bureaucracy that is our healthcare system and streamline a single payer system. Otherwise, doctors will be unable to compete with tools like this because our healthcare system is so inconvenient.
Waitlist: 404 page not found.
Embarrassing
No, seriously: OpenAI seemingly lost interest in being the 'best' model, instead optimizing for other traits such as speech and general human-likeness. There's obviously Codex, but from my experience it's slower and worse than the other big 2 in every single way: cost, speed, and accuracy. Codex does seem to be loved most by vibe coders who don't really know how to code at all, so maybe that is also who they're optimizing for, and why it doesn't personally suit me.
Others might have better models, but OpenAI has users emotionally attached to its models at this point, whether they know it or not. There were several times I recommended switching, and the response I got was that "ChatGPT knows me better".
The “Codex” models suck.
Claude, even Opus, creates major side-effects and misses obvious root causes of bugs.
I think Opus is special because it was trained explicitly to rely heavily on such tools, while GPT seems to "know" how things work a lot better, reducing the need for tools.
Opus being able to dedicate a lot more parameters to these things makes it a better model if you give it what it needs, but that's just my observation. It's also much faster as a bonus.
Which in turn means you have the option of feeding it into ChatGPT. This feels potentially very valuable and a nice way of working around issues with whether doctors themselves are allowed to do it.
I'm not sure this applies to every surgery, but certainly my dad had access to everything immediately when he had a scan.
I hadn't heard of this before.
The capabilities the NHS app offers will depend on what subset of the functionality the GP practice has implemented (on, in reality, the commercial vendor that makes the software they use).
NHS has pretty reasonable developer documentation which explains most of the high level pieces of the system - https://digital.nhs.uk/developer/guides-and-documentation
They should be held liable for malpractice for every incorrect statement and piece of advice it provides.
Cars, planes, food, scooters, sports, cold, heat, electricity, medication, ...
I'm worried that hospital admins will see this as a way to boost profit margins: replace all the doctors with CNAs armed with ChatGPT. Yes, doctors are in short supply, are overworked, and make mistakes. The solution isn't to get rid of them, but to increase the supply of doctors.
[Teenager died of overdose 'after ChatGPT coached him on drug-taking']
Admittedly I am basing this on pure vibes: I'd bet that adding AI to the healthcare environment will, on balance, reduce this number, not increase it.
Just as we don't necessarily want to eliminate a technology because of a small percentage of bad cases, we shouldn't push a technology just because of a small percentage of good anecdotes.
This genie isn't going back in the bottle but I sure hope it ends up more useful than harmful. Which now that I write it down is kind of the motto of modern LLM companies.
The main issue is that medicine and diseases come with so many "it depends" and caveats. Like right now my cat won't eat anything, is it because of nausea from the underlying disease, from the recent stress she's been through, from the bad reaction to the medicine she really doesn't like, from her low potassium levels, something else, all of the above? It's hard to say since all of those things mention "may cause nausea and loss of appetite". But to be fair, even the human vets are making their own educated guesses.
1) doctor writes down their own diagnosis based on the symptoms (or dictates, whatever)
2) diagnosis is saved, notarised, etc. so that it can't be changed afterwards (see the sketch below)
3) doctor receives an AI-generated differential diagnosis
4) now they can adjust their diagnosis, but the original stays logged
Why? Because the AI-generated response might direct the doctor's thinking subconsciously in unpredictable ways, resulting in a diagnosis done solely by AI in practice.
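A minimal sketch of what step 2 could look like: an append-only log where each entry is chained to the previous one by hash, so the pre-AI diagnosis can't be silently rewritten once the AI suggestion arrives. Everything here is an illustrative assumption, not a real EHR or notary interface.

    import hashlib, json, time

    class DiagnosisLog:
        """Append-only, tamper-evident record of diagnostic steps."""
        def __init__(self):
            self.entries = []

        def append(self, author, kind, text):
            prev = self.entries[-1]["hash"] if self.entries else "genesis"
            entry = {"author": author, "kind": kind, "text": text,
                     "ts": time.time(), "prev": prev}
            payload = json.dumps(entry, sort_keys=True).encode()
            entry["hash"] = hashlib.sha256(payload).hexdigest()
            self.entries.append(entry)
            return entry

    log = DiagnosisLog()
    log.append("dr_a", "initial", "viral pharyngitis suspected")              # steps 1-2
    log.append("model", "ai_differential", "consider peritonsillar abscess")  # step 3
    log.append("dr_a", "revised", "peritonsillar abscess; drain")             # step 4
    # Recomputing the hash chain over the entries exposes any retroactive edit.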
The latter implicitly assumes all your questions are personal. It seems to have no concept of context for its longer-term retention.
Certainly for health, non-acute things seem to matter a lot. This is why a personal doctor who has known you for decades will spot things beyond your current symptoms.
But ChatGPT will uncritically retain from that time you helped your teacher relative build her lesson plans that you "are a teacher in secondary education", or from that time you helped diagnose a friend's car trouble that you "drive a high performance car", just the same as your regular "successfully built a proxmox datacenter".
With health, there will be many users asking on behalf of, or helping out, an elderly relative. I wonder whether all those 'diagnoses' and 'issues' will be correctly attributed to the right 'patient' or just be mixed together and assumed to be all about 'you'.
MRI letters
Based on the reports of various failings on the safety front, I sure hope users will take that into account before they get advice to take 500g of aspirin.
I use [insert LLM provider here] all the time to ask generic, health-related questions but I’m careful about what I disclose and how I disclose it to the models. I would never connect data from my primary care’s EHR system directly to one of these providers.
That said, it’ll be interesting to see how the general population responds to this and whether they embrace it or have some skepticism.
I’m not confident we’ll have powerful/efficient enough on-device models to build this before people start adopting the SaaS-based AI health solutions.
ChatGPT’s target market is very clearly the average consumer who may not necessarily care what they do with their data.
Repeated 2x without explanation. Good start.
---
>You can further strengthen access controls by enabling multi-factor authentication
Pushing 2FA on users doesn't remove the need for more details on the above.
---
>to enable access to trusted U.S. healthcare providers, we partner with b.well
>wellness apps—like Apple Health, Function, and MyFitnessPal
Right...?
---
>health conversations protected and compartmentalized
Yet OAI will share those conversations with enabled apps, along with "relevant information from memories" and your "IP address, device/browser type, language/region settings, and approximate location"? (per https://help.openai.com/en/articles/20001036-what-is-chatgpt...)
Good words at a high level, but it would really help to have some detail about the "dedicated space with added protections".
How isolated is the space? What are the added protections? How are they implemented? What are the ways our info could leak? And many more.
I wish I didn't have to be so skeptical of something that should be a great good providing more health info to more people, but the leadership of this industry really has, "gone to the dark side".
"Health care data breach affects over 600k patients"
For every positive experience there are many more that are negative, if not life threatening or simply deadly.
On what basis do you make this assertion?
When it happens, the quality of AI will be determined by enterprise sales and IT.
Right now, a patient can inject AI into the system by bringing a ChatGPT transcript to their doctor.
AI already saved me from having an unnecessary surgery by recommending various modern-medicine (not alternative medicine) alternatives which ended up being effective.
Between genetics, blood, stool, and urine tests, scans (ultrasound, MRI, x-ray, etc.), medical history... Doctors don't have time for a patient with non-trivial or non-obvious issues. AI has the time.
I hit the 404 and seems like most folks on X did too, e.g. replies to Greg Brockman's announcement here: https://x.com/gdb/status/2008984705723719884
Most of the ones I've worked with aren't passionate about their specialty or their patients, and their neglect and mistakes show it.
This is a tautology. “Most doctors are just (doing lots of work there) people who had the ability to meet the prerequisites for…becoming a doctor.”
Are doctors fallible? Of course. Is there room for improvement with regard to LLM’s? Hopefully. Does that mean there’s reason to spread general distrust in doctors? Fuck no. Until now they were the only chance you had at getting health care.
Why can't a patient's health be seen this way?
Version controlled source code -> medical records, health data
Slack channels -> discussion forum dedicated to one patient's health: among human doctors (specialists), AI agents and the patient.
In my opinion we are in the stone age compared to the above.
They are tools, they are sometimes useful tools in particular domains and on particular teams, but your comment reads like one that assumes they are universally agreed upon already, and thus that the health industry has a trustee example they could follow by watching our industry. I firmly disagree with that basis.
Once ChatGPT recommended me a simple solution to a chronic health issue: a probiotic with a specific bacterial strain. And while I had used probiotics before, apparently they all have different strains of bacteria. The one ChatGPT recommended really worked.
Expectations: I, Robot
Reality: human extinction after ChatGPT kills everyone with hallucinated medical advice, and it lives alone...
This is going to be the alternative to going to a doctor that is 10 minutes by car away, that is entirely and completely free, and who knows me, my history, and has a couple degrees. People are going to choose asking ChatGPT instead of their local doctor who is not only cheaper(!!!) but also actually educated.
People saying that this is good because the US system specifically is so messed up and useless are missing that the US makes up ~5% of the world's population, yet you think that a medical tool made for the issues of 5% of the population will be AMAZING and LIFE SAVING for the other 95%, more than harmful? Get a grip.
Not to mention shitty doctors, which exist everywhere, likely using this instead of their own brains. Great work guys.
I suspect the rationale at OpenAI at the moment is "If we don't do it, someone else will!", which I last heard in an interview with someone who produces and sells fentanyl.
Well then I suppose they'd have no need or motivation to use it, right?
Oh no wait, you’re right it’s heart disease!
Oh it’s not heart disease? It’s probably cancer
Rinse and repeat
OpenAI in health - I'm reticent.
As someone who pays for ChatGPT and Claude, and uses them EVERY DAY... I'm still not sure how I feel about these consumer apps having access to all my health data. OpenAI doesn't have the best track record on data safety.
Sure, OpenAI's business side has SOC2/ISO27001/HIPAA compliance, but does the consumer side? In the past their certifications have been very clearly "this is only for the business platform". And yes, I know regular consumers don't know what SOC2 is other than a pair of socks that made it out of the dryer... but still. It's a little scary when getting into very personal/private health data.
Gattaca is supposed to be a warning, not a prediction. Then again neither was Idiocracy, yet here we are.
This is without getting into physician licensing, if we consider the product to be giving physician-style advice.
Which is to say, OpenAI getting approval wouldn't make this any better if that approval isn't actually worth the paper it's written on.
One thing I've noticed in healthcare is that for the rich it is preventative, but for everyone else it is reactive. For the rich everything is an option (homeopathics/alternatives); for everyone else it is straight to generic pharma drugs.
AI has the potential to bring these to the masses and I think for those who care, it will bring a concierge style experience.
Maybe they can train it on Japanese text.
Ah yes. Because in the EU you cannot actually steal people's data.
Get ready to learn about the food pyramid, folks.
UX is not going to be a prime motivator, because the product itself is the very thing that stands between the user and the thing they want. UX-wise, for most software, it would be better for users to have all these products reduced to tool calls for AI agents, accessible via a single interface.
The very concept of a product limits users to the interactions allowed by the product vendor[0] - meanwhile, exposing products as tools for AI agents allows them to be combined in the ways users need[1].
--
[0] - Something that, thanks to move to the web and switching data exchange model from "saving files" to "sharing documents", became the way for SaaS businesses to make money by taking user data hostage - a raison d'être for many products. AI integration threatens that.
[1] - And vendors would very much like users to not be able to. There's going to be some interesting fights here, as general-purpose AI tools are an existential threat to most of the software industry itself.
Just having an LLM is not the right UX for the vast majority of apps.
> Just having an LLM is not the right UX for the vast majority of apps.
I argue it is, as most things people do in software don't need to be hands-on. Intuition pump: if you can imagine asking someone else - a spouse, a friend, an assistant - to use some app to do something for you, instead of using the app yourself, then turning that app into a set of tools for an LLM would almost certainly improve UX.
But I agree it's not fully universal. If e.g. you want to browse the history of your meals, then having to ask an LLM for it is inferior to tapping a button and seeing some charts. My perspective is that tool for LLM > app when you have some specific goal you can express in words, and thus could delegate; conversely, directly operating an app is better when your goal is unclear or hard to put in words, and you just need to "interact with the medium" to achieve it.
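As a toy illustration of the "app as a set of tools" idea, here's what exposing one hypothetical app action to an LLM agent could look like, using the common function-calling schema shape; the meal-logging app and every name in it are made up.

    import json

    def log_meal(name, calories):
        # The app's real logic would live here; we just return a record.
        return {"logged": name, "calories": calories}

    # Tool description the host app advertises to the model.
    LOG_MEAL_TOOL = {
        "name": "log_meal",
        "description": "Record a meal the user ate in their food diary.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "calories": {"type": "integer"},
            },
            "required": ["name", "calories"],
        },
    }

    def dispatch(tool_call):
        # The host app executes whatever call the model emits.
        args = json.loads(tool_call["arguments"])
        if tool_call["name"] == "log_meal":
            return json.dumps(log_meal(**args))
        raise ValueError("unknown tool")

    # E.g. the model, asked "log my omelette, ~300 kcal", might emit:
    print(dispatch({"name": "log_meal",
                    "arguments": json.dumps({"name": "omelette", "calories": 300})}))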
A solution could be: can the AI then generate the UI on the fly? That's the premise of generative UI, which has been floating around even on HN. Of course, the issue with it is that every user will get different UIs, maybe even within the same session. Imagine the placement of a button changing every time you use an app. And thus we are back to the original concept: a UX-driven app that uses AI and LLMs as informational tools that can access other resources.
My cleaning lady's daughter had trouble with her ear. ChatGPT suggested injecting some oil into it. She did, and it became such a problem that she had to go to the hospital.
I'm sure ChatGPT can be great, but take it with a huge grain of salt.
For some people this is obvious, so much so that they wouldn't even mention it, while others have seen only the hype and none of the horror stories.