- AI models identify rare diseases faster than many experienced clinicians
- The systems reach the correct diagnosis, or something close to it, in about 80% of difficult cases
- Models analyze symptoms and test data using structured reasoning processes
A new generation of AI tools claims to outperform experienced clinicians in diagnosing rare and complex diseases.
These reasoning models can process long chains of symptoms, test results, and clinical notes, then propose or refine the correct diagnosis more quickly than many human specialists.
Some researchers say this is a profound technological change that will reshape medicine, especially in cases where the correct diagnosis is not obvious, even after careful evaluation.
AI models tackle difficult diagnoses
“We are seeing a very profound technological change that will reshape medicine,” Arjun Manrai of Harvard University said at a news conference.
Yet serious questions remain about whether these systems can bear the full weight of real-world clinical uncertainty.
In a major study, researchers tested a cutting-edge AI reasoning model on a mix of textbook-style cases and real-world patient data from a Boston emergency department.
The model analyzed step-by-step descriptions of symptoms, test orders and results, just like clinicians do.
It listed possible diagnoses more often than human doctors did, and included the true diagnosis, or something very close to it, in about 80% of difficult cases.
For a transplant patient with subtle signs of a life-threatening infection, the model raised appropriate suspicions about a day before the clinical team.
Researchers say the technology is particularly effective at recognizing patterns of rare diseases that individual doctors may encounter only a handful of times in a career.
However, the studies rely on curated patient descriptions rather than raw, chaotic emergency room environments.
Models respond to the information provided to them, not the jumble of overlapping priorities and incomplete data seen in real-world clinics.
Why uncertainty remains a problem
Despite the capability of these AI reasoning models, critics point out that clinical reasoning is more than simple step-by-step logic over a clear textual summary.
“When we talk about clinical reasoning, it doesn’t mean the same thing as model reasoning,” says Arya Rao of Harvard Medical School, who was not involved in the study.
“These models have been optimized to do this kind of sequential thinking that we call reasoning, but it’s not at all the same as how we teach medical students to reason.”
Doctors often have to consider several uncertain possibilities at once, then update them as new data comes in.
AI models, by contrast, tend to latch onto a single strong explanation and revise their answers erratically as new facts emerge.
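The distinction the critics draw can be illustrated with a toy Bayesian update over a differential diagnosis: each new finding reweights every hypothesis at once instead of committing to the current front-runner. The diagnoses, priors, and likelihood numbers below are invented purely for illustration; they are not from the study.

```python
# Toy Bayesian update over a differential diagnosis.
# All diagnoses, priors, and likelihoods are made up for illustration.

def update(beliefs, likelihoods):
    """Multiply each prior by P(finding | diagnosis), then renormalize."""
    unnormalized = {dx: p * likelihoods[dx] for dx, p in beliefs.items()}
    total = sum(unnormalized.values())
    return {dx: p / total for dx, p in unnormalized.items()}

# Prior beliefs over three hypothetical diagnoses.
beliefs = {
    "viral infection": 0.6,
    "autoimmune flare": 0.3,
    "rare fungal infection": 0.1,
}

# Each finding is a hypothetical P(finding | diagnosis) table.
findings = [
    # Fever persists despite treatment.
    {"viral infection": 0.5, "autoimmune flare": 0.2, "rare fungal infection": 0.9},
    # Patient turns out to be immunosuppressed.
    {"viral infection": 0.1, "autoimmune flare": 0.3, "rare fungal infection": 0.8},
]

for likelihoods in findings:
    beliefs = update(beliefs, likelihoods)

# After both findings, the initially least likely diagnosis leads.
print({dx: round(p, 2) for dx, p in beliefs.items()})
```

The point of the sketch is that no hypothesis is ever discarded outright: the rare diagnosis starts at 10% yet ends up on top once the evidence favors it, which is the kind of simultaneous reweighting the critics say current models handle poorly.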
A team that tested 21 different AI systems found that even the best reasoning models struggled when considering multiple uncertain diagnoses simultaneously.
The team argued that large language models are not yet ready to make independent decisions in medical settings.
They are at best useful for a second opinion or to highlight rare conditions that clinicians might initially overlook.
Experts emphasize that human doctors remain essential for interpreting context, talking to patients and assessing risks in real time.
Technology can help prevent missed diagnoses in some settings, but it introduces new risks if used without careful oversight and appropriate safeguards.
Via Science News