A group of current and former employees at AI companies, including Microsoft (MSFT.O)-backed OpenAI and Alphabet’s (GOOGL.O) Google DeepMind, expressed concerns on Tuesday about the risks posed by emerging AI technology.
In an open letter, 11 current and former OpenAI employees, along with one current and one former Google DeepMind employee, stated that financial motives within AI companies hinder effective oversight. “We do not believe bespoke structures of corporate governance are sufficient to change this,” the letter added.
The letter further warns of risks from unregulated AI, including the spread of misinformation, the entrenchment of existing inequalities, and the loss of control of autonomous AI systems, which could potentially result in “human extinction.”
Researchers have identified instances in which image generators from companies including OpenAI and Microsoft produced images containing voting-related disinformation, despite policies against such content. The letter states that AI companies have only “weak obligations” to share information with governments about their systems’ capabilities and limitations, and that these firms cannot be trusted to do so voluntarily.
This open letter is the latest in a series of warnings about the safety concerns surrounding generative AI technology, which can rapidly and inexpensively create human-like text, imagery, and audio.
The group has called on AI firms to establish processes that allow current and former employees to raise risk-related concerns, and to refrain from enforcing confidentiality agreements that prohibit criticism of the companies. Separately, on Thursday, the Sam Altman-led firm announced it had disrupted five covert influence operations that attempted to use its AI models for “deceptive activity” online.