The advancement of artificial intelligence (AI) could pose tangible and profound risks to human health, and health professionals must warn the world about the potential dangers, a group of academics wrote, as calls grow for work on the technology to be halted.
The academics wrote in the journal BMJ Global Health that time was running out to take action because corporations, militaries and governments were racing to develop AI tools.
AI exploded into the public consciousness last year with ChatGPT, a bot capable of generating tracts of coherent text from short prompts.
The wild popularity of the bot sparked a race between tech giants like Google and Microsoft to embed AI in everything from spreadsheets to search tools, and prompted investors to pour money into AI startups.
But the health academics pointed to a range of threats, including powerful AI surveillance systems being developed in dozens of countries, killer robots and disinformation.
For healthcare, they wrote, people with darker skin were at serious risk of harm or reduced care because the datasets used to "train" AI algorithms were often biased.
They argued that "the window of opportunity to avoid serious and potentially existential harms is closing."
The authors, led by Frederik Federspiel of the London School of Hygiene and Tropical Medicine and David McCoy of the United Nations University in Kuala Lumpur, wrote that global cooperation would be needed.
"Healthcare professionals have a key role in raising awareness and sounding the alarm on the risks and threats posed by AI," they wrote in an analysis piece.
"If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances."
The direction of AI research is prompting alarm even among those at the center of the field.
Earlier this month, computer scientist Geoffrey Hinton, often dubbed "the godfather of AI," quit his job at Google to warn of the "profound risks to society and humanity" of the technology.
In March, billionaire Elon Musk – whose Tesla carmaker deploys AI systems – and hundreds of experts called for a pause in AI development to allow time to make sure the technology was safe and properly regulated.