The risks involved in using artificial intelligence (AI) tools such as ChatGPT, Bard and BERT in healthcare must be examined carefully, the World Health Organisation (WHO) said on Tuesday.
While the WHO is enthusiastic about the appropriate use of technologies, including generative AI tools, to support health-care professionals, patients, researchers and scientists, "there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with large language model tools (LLMs)", it said.
LLMs include ChatGPT, Bard, BERT and others that imitate the understanding, processing and production of human communication.
"This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation," the global health body said in a statement.
"It is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people's health and reduce inequity," it added.
The WHO said that "precipitous adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies".
The WHO's concerns about the AI tools include that the data used to train the models may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness.
LLMs may also generate responses that appear authoritative and plausible to an end user but are completely incorrect or contain serious errors, especially where health is concerned.
Further, the WHO said that AI may not protect sensitive data (including health data), and that it can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.
"WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine health care and medicine -- whether by individuals, care providers or health system administrators and policy-makers," the statement said.
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)