Untested AI could lead to healthcare errors and patient harm, WHO warns 

The World Health Organization is calling for caution in the use of artificial intelligence-generated large language model tools (LLMs) such as ChatGPT, Bard, Bert and others that imitate human understanding, processing and communication.

The increasing use of LLMs for health-related purposes raises concerns for patient safety, WHO said. The precipitous adoption of untested systems could lead to errors by healthcare workers and cause harm to patients.