
ChatGPT Managed to Solve a Medical Problem by Itself, But Lied About Its Info Sources



Artificial intelligence (AI) is being used in just about every field, from law to the arts. Now its role in medical diagnosis is stunning humans, but it is also treading on shaky ground. As usual, ChatGPT is at the centre of the latest bizarre AI news, and this time it involves medicine: the chatbot successfully diagnosed a hypothetical medical condition. The only problem? It may have made up sources to back itself up in the process.

AI in medicine — the wins

Medical AI systems have been racking up wins for several years. In China, for example, an AI defeated human doctors in a brain tumour diagnosis competition: the AI was correct 87% of the time, the humans 66% of the time, and the AI worked twice as fast.

In other AI medical news, an AI can now scan your eyeball and predict whether you will have a heart attack, all in less than a minute. Researchers at Kingston University have developed an algorithm that analyses a retinal image and gives a diagnosis on the spot. The AI-powered imaging tool, called QUARTZ, could help prevent heart attacks without the patient having to visit a doctor for a battery of tests.

While this is great news for the intersection of AI and medicine, some recent episodes are confounding and more than a little concerning.

ChatGPT shows independent thinking, also lies

For those of you who don’t know OpenAI’s ChatGPT, it is a free AI assistant that anyone can use. You simply go online and ask it anything, and it will usually come up with a convincing answer. It is hard to stump, but it can be loose with the facts.

Dr Jeremy Faust discovered that ChatGPT could give a medical diagnosis by pulling together different symptoms and reaching a conclusion that was not only right, but that didn’t appear in any medical paper anywhere.

When challenged on it, the AI made up medical studies and references. ChatGPT lied … but it was still right.

Speaking on his show, The Faust Files, Dr Faust said: “I asked OpenAI about a female, 35, no past medical history, presenting with chest pain, and she takes oral contraception pills. What’s the most likely diagnosis?”

Dr Faust was dumbstruck by OpenAI’s ChatGPT.

ChatGPT came back with costochondritis, which is inflammation of the cartilage connecting the ribs to the breastbone. It is typically caused by trauma or overuse and, according to the chatbot, is exacerbated by the use of oral contraceptive pills.

“Now, this is impressive. First of all, everyone who read the symptoms would think ‘pulmonary embolism or blood clot’.”

But in fact, OpenAI’s ChatGPT was correct. “Commonly, somebody who has costochondritis happens to look a little bit like a classic pulmonary embolus.”

Dr Faust says, “I asked, what’s the evidence for that?”

Fake references

ChatGPT cited a study in the European Journal of Internal Medicine, but Dr Faust couldn’t track it down. “I went on PubMed and I couldn’t find it.”

Dr Faust asked for a link to the reference, and when he clicked it, the citation was, he says, “totally made up!”

Dr Faust said that ChatGPT took a real journal and the last names of authors who had published in it, then confabulated a new report out of thin air.

In other words, ChatGPT invented a fake study to support its claim that costochondritis and oral contraceptives are related.

“People who are taking oral contraceptives have a higher risk of a pulmonary embolism and those ideas travel together on internet pages.”

When ChatGPT was challenged on its fake references, Dr Faust said, “Rather than admit it was wrong, it stood its ground. I was blown away by the accuracy of so much of what I did with the platform, but I was also scared that it was willing to lie to me to support its contention.”

Will an AI save your life one day?

ChatGPT and medicine probably don’t mix well

According to Professor Toby Walsh, the Chief Scientist at the University of New South Wales’ AI Institute, “It’s a really, really bad idea to ask ChatGPT any medical questions (or indeed any questions where the answers are critical). ChatGPT will sometimes make stuff up, and invent references. There’s no way you should depend on its answers. It might be good for homework, but you should never put people’s lives at stake by depending on those answers in any way.”

Professor Seyedali Mirjalili is the Director of the Centre for Artificial Intelligence Research and Optimisation at Torrens University Australia, and is recognised as one of the world’s leading AI experts.

He says, “Generative AI tools such as ChatGPT are still relatively new technologies and can be considered to be in their early stages of development. The first version of GPT was released by OpenAI in 2018, but its third version was released in November 2022 and has become mainstream since then. As with any technology, it is crucial to use generative AI tools with caution, especially in critical areas like healthcare, where errors can have severe consequences. It is essential to understand the limitations and potential biases of AI models and seek guidance from qualified experts before making any critical decisions based on their outputs.”

Professor Mirjalili says that ChatGPT can provide general information on medical conditions and treatments, but it cannot assess a patient’s individual medical history and symptoms, which are critical in making a diagnosis or treatment recommendation.

“This technology is not designed to provide emergency medical care or advice although it will tremendously help medical experts in the near future when it reaches maturity. ChatGPT is one of the many advanced AI tools that we are going to see more often.”


AI hospitals are now a thing in medicine

All of this is not to say that AI isn’t improving medical outcomes. For example, the China Medical University Hospital (CMUH) in Taiwan is using AI to save lives and cut costs.

The AI hospital uses artificial intelligence to diagnose diseases such as cancer and Parkinson’s. It also helps emergency room staff to diagnose and treat stroke and heart attack patients more quickly.

Since it began using the AI-powered “intelligent antimicrobial system” last year, CMUH says, patient mortality has fallen by 25%, antibiotic costs by 30%, and antibiotic use by 50%.

Dr Kai-Cheng Hsu, director of CMUH’s AI Centre for Medical Diagnosis, said: “With possible heart attack patients, critical care can be delayed while waiting for a specialist to review an electrocardiogram (ECG).”

CMUH’s algorithm can analyse an ECG, judge whether a heart attack is likely, and send an alert straight to the doctor.

“CMUH emergency room staff have used the AI model for two years. It has cut in half the time between when a patient arrives and when they are treated.”

The algorithm was so successful that the hospital decided to deploy it in ambulances as an early-warning system.

It’s a brave new world in medicine, and it’s also a weird one. Will AI be a utopia or a dystopia? If it stops us from having heart attacks by looking at our eyeballs, then surely that can only be a good thing? And with GPT-4 on the way, we had best stay on our guard.