The example in the article is an in-house-developed "A.I." to help radiologists assess images. Digging a bit deeper, it seems they are mostly using older CNN-type architectures with a few million parameters.[1]
It still remains to be seen what a 1T+ parameter transformer trained specifically for radiology will do. I think anyone would agree that a locally run CNN won't hold a candle to it.
[1]https://mayo-radiology-informatics-lab.github.io/MIDeL/index...
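For a sense of scale, here is a minimal PyTorch sketch of the kind of compact CNN classifier being described. The architecture and sizes are purely illustrative, not the actual Mayo model:

    import torch.nn as nn

    # Illustrative compact CNN for single-channel (grayscale) medical images.
    # Not any real deployed model; sized to show how small these networks are.
    class SmallRadiologyCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # collapse spatial dims
            )
            self.classifier = nn.Linear(128, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = SmallRadiologyCNN()
    print(sum(p.numel() for p in model.parameters()))  # ~93k parameters

This toy lands under 100k parameters; the classic chest X-ray models of that generation (e.g. CheXNet's DenseNet-121 backbone) sit around 8M, still roughly five orders of magnitude below a 1T+ parameter transformer.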
Let's assume for now that it's true that AI can't do a certain subset of your work. Your profession won't be eliminated from the earth, that's true. But if 80% of your work can be done by AI, 80% of your work will be done by AI. There will still be humans kept around for that remaining 20%, but fewer of them will be needed.
AI does not need to eliminate an entire profession for your employment in that profession to be eliminated. Roughly speaking, if it can do half of your job, then about half as many humans are needed, give or take some communication overhead.
This article is surprisingly accurate. I fully expect to finish my career without being 'replaced' by AI.
Happy to debate/answer questions :-)
I imagine, given the training involved, the job involves more than just looking at pictures? This is what I would like to see explained.
The analogy would be the "95% of code is written by AI" stat that gets trotted out, replacing code with image evaluation. Yes, AI will write the code, but someone has to tell the AI what to write, which is the tricky part.
On the other hand, how much of your confidence in not being replaced stems from AI not being able to do the work, and how much from legal/societal issues (a human needing to be legally responsible for the diagnoses)? Honestly, the article's description of what a radiologist does ("Radiologists do far more than study images. They advise other doctors and surgeons, talk to patients, write reports and analyze medical records. After identifying a suspect cluster of tissue in an organ, they interpret what it might mean for an individual patient with a particular medical history, tapping years of experience") doesn't strike me as anything impossible for AI within a few years, now that models are multimodal and can work with increasing amounts of text (e.g. medical histories).
Like there are times already where I’ve put off or not sought medical care because of the hassle involved.
If I could just waltz into the office and get an appointment and have an issue seen to same day I would probably do it more often.
As to your question on where my confidence stems from, there are both legal reasons and 'not being able to do the work' reasons.
Legal is easy: the most powerful lobby in most states is the trial attorneys. They simply won't allow a situation where liability cannot be attached to medical practice. Somebody is getting sued.
As to what I do day to day, I don't think I'm just matching patterns. I believe what I do takes general intelligence. Therefore, when AI can do my job, it can do everyone else's job as well.
About that, I think the AMA is ultimately going to be a victim of its own success. It achieved its goal of creating a shortage of medical professionals and enriching the existing ones. I don't think any of their careers are in danger.
However, long term, I think magic (in the form of sufficiently advanced technology) is going to become cost effective at the prices that the health care market is operating at. First the medical professionals will become wholly dependent on it, then everyone will ask why we need to pay these people eye-watering sums of money to ask the computers questions when we can do that ourselves, for free.
For context, generative AI music is basically unlistenable. I've yet to come across a convincing song, let alone 30 seconds' worth of viable material. The AI tools can help musicians in their workflow, but they have no concept of human emotion or expression, and it shows. Interpreting a radiology problem is more like an art form than a jigsaw puzzle; otherwise it would've been automated long ago (like a simple blood test). Like you note, the legal system in the US prides itself on "accountability" (said tongue in cheek), and AI suffers no consequences.
Just look at how well AI worked in the UnitedHealthcare deployment involving medical care and money. Hint: the stock is still falling.
This one pops into my head every couple months:
https://youtube.com/watch?v=4gYStWmO1jQ
It's not really my genre, so my judgment is perhaps clouded. Also, I find the dumb lyrics entertaining and they were probably written by a human (though obviously an AI could be prompted to do just as well). I am a fan of unique character in vocals and I love that it pronounces "A-R-A" as "ah-ahr-ah", but the little bridge at 1:40 does nothing for me.
Which is ironic, given how much variation in output quality there is based on the judgment of the person using the LLM (work scope, domain, prompt quality, etc.)
My personal opinion is that a lot of medical professionals are simply gatekeeping at this point in time, using legal definitions to keep moving the goalposts.
However this is a theme that will keep on repeating in all domains and I do feel that gradual change is better than sudden, disruptive change.
This is a really interesting point that I haven't considered. Namely, regulatory arbitrage is going to yield enormous benefits in the medical AI space. The sheer amount of data needed to train the model requires data centralization the west has no desire to move toward. But if China does crack the nut, it seems like it will necessarily create an upheaval in the west, whether we like it or not.
The other doctors will still be there for you to sue.
Will you even be able to obtain a radiation source for your X-rays?
DIY radiation therapy would be a whole new level.
Healthcare-grade X-ray tubes are not something you can easily obtain without a license.
- If the law allows AI to replace you.
- If the hospital/company thinks [AI cost + AI-caused lawsuits] will be less expensive than [your salary + lawsuits caused by you].
I'm almost in the same situation as you are. I have 22 years left until retirement and I'm thinking I should change my career before I'm too old to do it.

Can you please edit out swipes like that from your HN posts? (Prepending "respectfully" doesn't help much.) This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
The rest of your comment is just fine of course.
And, I didn't say I would never be replaced. I said I would finish my career, which is approximately 10 more years at this point.
If AI gets to the point where it is truly replacing radiologists and programmers wholesale, it is difficult to tell anyone what to do about it today, because that's essentially on the other side of the singularity from here. Who knows what the answer will be?
(Ironically, the author of that paper, who is also a science fiction author, is responsible for morphing the singularity into "the rapture for nerds" in his own sci-fi writing. But I find the original paper's definition to have more utility in the current world.)
[1]: https://accelerating.org/articles/comingtechsingularity
It's really not a matter of "full replacement or bust".
For example, let's say I'm looking at a chest X-ray. There is a pneumonia at the left lung base and I am clever enough to notice it. 'Aha', I think, congratulating myself on making the diagnosis and figuring out why the patient is short of breath.
But, in this example, I stop looking closely at the X-ray after noticing the pneumonia, so I miss a pneumothorax at the right lung apex.
I have made a mistake radiologists call 'satisfaction of search'.
My 'search' for the patient's problem was 'satisfied' by finding the pneumonia, and because I am human and therefore fundamentally flawed, I stopped looking for a second clinically relevant diagnosis.
An AI module that detects a pneumothorax is not prone to this type of error. So it sees something I did not. But it doesn't see something that I can't see. I just didn't look.
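To make that concrete, here is a toy sketch of why a fixed panel of per-finding detectors can't be 'satisfied' the way a human searcher can. The detectors here are stand-in stubs, not real models:

    from typing import Callable, Dict, List

    Image = List[float]  # stand-in for pixel data

    def read_study(image: Image,
                   detectors: Dict[str, Callable[[Image], float]]) -> Dict[str, float]:
        # Unlike a human reader, this loop never stops after the first
        # positive finding: every detector runs on every study.
        return {name: detect(image) for name, detect in detectors.items()}

    # Stub detectors returning fixed probabilities, for illustration only.
    panel = {
        "pneumonia":    lambda img: 0.91,
        "pneumothorax": lambda img: 0.87,
    }

    print(read_study([0.0] * 16, panel))
    # {'pneumonia': 0.91, 'pneumothorax': 0.87} -- both findings surfaced

The pneumothorax detector has no idea the pneumonia detector already 'found the answer', which is exactly the point.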
https://www.npr.org/sections/health-shots/2013/02/11/1714096...
I'm skeptical of the claim that AI isn't prone to this sort of error, though. AI loves the easy answer.
Ah, now I have a name for it.
When I've chased a bug and fixed a problem that would cause the observed behavior, but haven't yet proven the behavior is corrected, I'm always careful to specify that "I fixed a problem, but I don't know if I fixed the problem". Seems similar: I found and fixed a bug that could explain the issue, but that doesn't mean there isn't another one that would, independently, cause the same observed problem.
That is, the models spot pathologies that 99.9999% of rads would spot anyway if not overworked, tired, or in a hurry. But, addressing the implication of your question, the value is actually in spotting a pathology that 99.9999% of rads would never spot. In all my years developing medical imaging startups and software, I've never seen it happen.
I don't expect to see it in my lifetime.
I agree with almost everything you've said here.
Except 'not in my lifetime', because I plan on living for a very long time, and who knows what those computer nerds will come up with eventually ;-)
> Radiologists do far more than study images. They advise other doctors and surgeons, talk to patients, write reports and analyze medical records. After identifying a suspect cluster of tissue in an organ, they interpret what it might mean for an individual patient with a particular medical history, tapping years of experience.
AI will do that more efficiently, and probably already does. "Tapping years of experience" is just data in the training set.
> A.I. can also automatically identify images showing the highest probability of an abnormal growth, essentially telling the radiologist, "Look here first." Another program scans images for blood clots in the heart or lungs, even when the medical focus may be elsewhere.
> "A.I. is everywhere in our workflow now," Dr. Baffour said.
> "Five years from now, it will be malpractice not to use A.I.," he said. "But it will be humans and A.I. working together."
Maybe you'll be able to retire happily because of inertia, but overall it looks like an elevator operator's job.
What's so special about radiology?
However, it's my opinion that my job takes general intelligence, not just pattern matching.
Therefore, when I lose my job to AI, so does everyone else.
On the other hand, a lot of jobs take general intelligence. You’re right about that too.
It's difficult to guess the specifics of your life, but: maybe you've engaged a real estate agent. Some people use no real estate agent; some have a robo agent; no AI involved. Maybe you have written a will. Some people go online and spend $500 on templates from Trust & Will, others spend $3,000 on a lawyer to fill in the templates for them, and some don't do any of that at all. Even in medicine, a pharma rep has to go and convince someone to add their thing to the guidelines, and you can look at the gap between a study and its adoption as a case where people were intelligent and there was demand, but doctors weren't doing such-and-such thing due to, essentially, a lack of sales. You don't have to be generally intelligent to know that flossing is good for you, and yet: so few people floss! That would maybe not put tons of dentists out of business. But people routinely do (or don't do) professional-services stuff for no good (or bad) reason at all.
Clearly the thing going on in the professional services economy isn’t about general intelligence - there’s already lots of stuff that is NOT happening long before AI changes the game. It’s all cultural.
If you’ve gotten this far without knowing what I am talking about… listen, who knows what’s going to happen? Clearly a lot of behavior is not for any good reason.
How do you know where the ball is going to go for culture? Personally I think it’s a kind of arrogant position: “I’m a member of the guild, and from my POV, if my profession is replaced, so is everyone else’s.” Arrogance is not an attractive culture, it’s an adversarial one! And you could say inertia, and yet: look who’s running the HHS! There are kids right now, that I know in my real life, who look like you or me, who went to fancy Ivy League school, and they are vaccine skeptical. What about inertia and general intelligence then? So I’ll just say, you know, putting yourself out here on this forum, being all like, “I will AMA, I am the voice,” and then to be so arrogant: you are your own evidence for why maybe it won’t last 10 years.
Been going to RSNA for longer than you've been a radiologist. In all that time, I've never come across an AI that I felt was fit for purpose.
I wholeheartedly agree with you.
Many many reasons for this, and I'm happy to chime in from the tech side of things and fill in any blanks outside your knowledge domain.
In order to "crack" radiology, the AI companies would need to launch an enormous data collection program involving thousands of hospitals across the world. Every time you got an MRI or X-Ray, you would sign some disclosure form that allowed your images to be anonymously submitted to the central data repository. This kind of project is very easy to describe, but very difficult to execute.
Every day I see something on a scan that I've never seen before. And, possibly, no one has ever seen before. There is tremendous variation in human anatomy and pathology.
So what do I do? I use general intelligence. I talk to the patient. I talk to the referring doctor. I compare with other studies, across modalities and time.
I reason. I synthesize. I think.
So my point is, basically, radiology takes AGI.
Even a tiny hospital with radiology services will produce many thousands of images with accompanying descriptions every year. And in many places you are allowed to anonymize these and do research on them, as neither the image nor the accompanying description is a personal identifier.
So this is yet another Hinton-ish prediction: any day now, radiologists are going the way of the dodo. This time LLMs will crack the nut that image recognition has failed to crack for 20 years.
Where LLMs have succeeded is in producing hot takes that miss the mark; they should be really good at cornering the "prematurely predicting the demise of radiologists" market.
Let's say a major healthcare leak occurred, involving millions of images and associated doctor notes, diagnostics, etc. Would this help advance the field, or is the bottleneck algorithmic?
Wonder what other forecasts of doom he is wrong about :|.
It does not have common sense.
Now think about how much of software development is typing out the code vs talking to people, getting a clear definition of the problem, debugging, etc. (I would love an LLM that could debug problems in production — but all they can do is tell me stuff I already know). Then layer on that there are far more ideas for what should be built than you have time to actually build in every organization I’ve ever worked in.
I’m not worried about my job. I’m more worried my coworkers won’t realize what a great tool this is and my company will be left in the dust.
Do you have any links to research or work being done on computer vision that leads you to this conclusion? Would love to check it out!
The most recent of which you mentioned, Transformers, is used by both LLMs and image synthesis/understanding. The parent posits that while image understanding lags behind LLMs, this may not continue. Given the current state of Transformers, I'm not sure I follow the argument?
Which gives credence to your theory that people aren't bringing much to the table.
Still not clear that the already superhuman capabilities of AI won't eventually supplant radiologists' interpretive skills as every additional bit of training data comes in.