NEW YORK, Dec. 25 (Xinhua) -- Over the past year, U.S. health providers have begun using artificial intelligence (AI) for repetitive clinical work in the care of millions of patients. The hope is that AI will ease doctors' workloads, speed up treatment and possibly catch mistakes, reported The Washington Post on Wednesday.
"Medicine, traditionally a conservative, evidence-based profession, is adopting AI at the hyper speed of Silicon Valley," noted the report. "These AI tools are being widely adopted in clinics even as doctors are still testing when they're a good idea, a waste of time or even dangerous."
The harm caused by generative AI, which is notorious for "hallucinating" bad information, is often difficult to see, but in medicine the danger is stark. One study found that ChatGPT gave an "inappropriate" answer to 20 percent of 382 test medical questions. A doctor using the AI to draft communications could inadvertently pass along bad advice, according to the report.
Another study found that chatbots can echo doctors' own biases, such as the racist assumption that Black people can tolerate more pain than white people. Transcription software, too, has been shown to invent things that no one ever said, it added. ■