Researchers Urge Meta, Google, OpenAI To Allow Independent Investigations Into Their Systems
More than 100 leading artificial intelligence (AI) researchers have signed an open letter urging generative AI companies to grant them access to their systems for independent safety evaluation.
The researchers argue that the companies’ stringent rules are hindering safety testing of AI tools used by millions of consumers.
What Happened: The letter, signed by prominent figures in AI research, policy, and law, including Stanford University’s Percy Liang and Pulitzer Prize-winning journalist Julia Angwin, calls out companies such as OpenAI, Meta (NASDAQ:META), Anthropic, Google (NASDAQ:GOOGL), and Midjourney. It asks these firms to provide a legal and technical safe harbor for researchers to scrutinize their products, The Washington Post reported on Tuesday.
The researchers argue that the companies’ policies, designed to prevent misuse of AI systems, are discouraging independent research. They fear being banned or sued if they attempt to safety-test AI models without the companies’ approval.
One of the letter’s signatories, Deb Raji, a Mozilla fellow known for her pioneering research into auditing AI models, highlighted the issue, saying, “Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable.”
Why It Matters: The call for transparency in AI systems comes amid a global push for AI regulation. In January, the Indian government was reported to be working on new AI rules to control bias and regulate AI companies. The proposed amendments would require platforms that use AI algorithms or language models to ensure their systems are trained free of any “bias.”
Meanwhile, the EU’s proposed AI Act could shift the future of AI development toward the US. According to a leaked draft of the AI Act reported by Euractiv, the compliance timeline is tight and the costs could be high, a burden that may fall hardest on start-ups.