Seven of the world’s leading tech firms working on artificial intelligence are meeting with the Biden administration today, where they will agree to a set of new commitments intended to manage the potential risks posed by AI.
The new voluntary agreements include commitments to external, third-party testing before an AI product is released and to the development of watermarking systems that inform the public when a piece of audio or video material was generated by AI. The Biden administration said these voluntary commitments, which essentially amount to self-policing by tech firms, mark just the first of several steps needed to properly manage AI risk.
A White House official told Gizmodo that President Biden will meet with leaders from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to discuss the new commitments, which are centered on three core principles: safety, security, and trust. On the safety side, the companies will agree to run their models through internal and external testing prior to their release. The companies will also agree to share findings on safety, and on attempts to circumvent safeguards, across industry, government, and civil society.
For security, the companies are committing to invest in additional cybersecurity and insider-threat safeguards and agree to support third parties in discovering and reporting vulnerabilities in their systems. Perhaps most interestingly, the tech firms say they will all develop technical mechanisms, like watermarks, to ensure users know when content is AI-generated. A White House official, speaking to reporters by phone, said these commitments were intended to push back against the threat of deepfakes and build trust among the public. Similarly, the AI makers have agreed to prioritize research on the risks of bias, discrimination, and privacy violations their products can pose.
Meta President of Global Affairs Nick Clegg, who is among the leaders meeting at the White House, praised the new commitments, describing them as an “important first” in ensuring responsible guardrails are created for AI development.
“AI should benefit the whole of society,” Clegg said in a statement. “For that to happen, these powerful new technologies need to be built and deployed responsibly. As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society.”
If any of that sounds familiar, it’s because several of these companies are already working toward the very goals this agreement outlines. OpenAI, for example, previously gave researchers access to early versions of GPT-4 to “red-team” the model by attempting to pressure it into making harmful statements, in order to improve it. Microsoft has also already pledged to watermark AI-generated images and videos. It’s unclear whether any of these companies could face penalties for reneging on these commitments.
The White House official speaking with Gizmodo and other reporters said the commitments were intended to bring each of these seven companies together under the same set of agreements. And while this does not officially affect the hundreds of other smaller companies working on AI systems, the White House hopes the baseline set here could encourage others in the industry to follow a similar path.
The official also revealed that the Biden administration is in the process of developing an executive order to ensure safe and secure AI but wouldn’t provide details on what exactly that order will entail. The administration is also working with Congress to support bipartisan AI legislation that it hopes will strike a balance between AI safeguards and room for innovation.
“This is the next step but certainly not the last step,” the official added.
Update, 07/21/23, 10:46 a.m. EDT: Added statement from Nick Clegg.