White House secures safety commitments from 7 AI companies

Dive Brief:

  • Seven leading AI companies have committed to building secure systems and increasing transparency regarding model behavior, the White House announced Friday.
  • Amazon, Anthropic, Google, Meta, Microsoft, OpenAI and ML startup Inflection pledged to submit their systems to testing by independent experts, the White House said. The group of companies will also invest in cybersecurity and insider threat safeguards, prioritize research on AI’s societal risks and publicly report AI systems’ capabilities and limitations.
  • The Biden-Harris Administration said it is drafting an executive order and will pursue legislation to promote responsible innovation.

Dive Insight:

As AI has evolved and taken center stage, governments around the world have taken steps to ensure responsible development and implementation of the technology while trying to avoid hindering innovation.

The seven companies committed to conducting internal and external testing of their AI systems before release to guard against cybersecurity risk. The pledge also includes investments in cybersecurity safeguards to protect proprietary and unreleased model weights.

The agreement also calls on the companies to facilitate third-party discovery and reporting of vulnerabilities in their AI systems. While some issues may persist after an AI system is released, officials said robust reporting can help organizations find and fix vulnerabilities more quickly.

The Biden-Harris Administration worked with a laundry list of countries on the voluntary commitments, including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE and the UK.

The White House convened a group of senior officials and technology industry executives in May when it initially announced a public evaluation of AI systems. At the time, Anthropic, Google, Hugging Face, Microsoft and NVIDIA agreed to the evaluation, which will take place in Las Vegas in August. 

Microsoft has also committed to several other standards to promote safe, secure and trustworthy AI systems, the company announced Friday. The company pledged to implement the National Institute of Standards and Technology’s AI Risk Management Framework as well as robust reliability and safety practices for high-risk models and applications.

Oversight for AI companies has become a regulatory focus. Senate Majority Leader Chuck Schumer, D-NY, unveiled the SAFE Innovation Framework for AI policy in June, urging Congress to act on the evolving technology. Schumer plans to host a series of AI forums in the fall.

“If we don’t program these algorithms to align with our values, they could be used to undermine our democratic foundations, especially our electoral processes,” Schumer said in June during a speech in Washington at the Center for Strategic and International Studies. “If we don’t set the norms for AI’s proper uses, others will.”

Schumer’s SAFE framework calls for security, accountability, protecting foundations and explainability. 

“I commend President Biden for his leadership in securing important commitments on the development of AI,” Schumer said in a statement shared on Twitter following the White House’s announcement Friday. “To maintain our lead, harness the potential and tackle the challenges of AI effectively requires legislation to build and expand on the actions President Biden is taking today.”
