
Biden Administration Hosts Global AI Safety Meeting in San Francisco

Image: An international AI safety meeting in San Francisco, with delegates seated around a conference table beneath the flags of participating countries, including the USA, EU, UK, Japan, Canada, and Australia, and the city skyline behind them.

Image Source: ChatGPT-4o


Government scientists and AI experts from nine countries and the European Union will convene in San Francisco shortly after the U.S. elections to discuss the safe development of AI technologies and strategies to mitigate potential risks. The two-day gathering, organized by the Biden administration, is scheduled for November 20-21.

Advancing AI Safety Measures

The upcoming event comes a year after the AI Safety Summit in the United Kingdom, where delegates committed to collaborating on AI safety. This meeting aims to build on those discussions, focusing on the most pressing AI risks, such as malicious use and synthetic content.

“This will be the first get-down-to-work meeting after the UK summit and a May follow-up in South Korea,” said U.S. Commerce Secretary Gina Raimondo. The network of safety institutes formed from these previous gatherings will play a pivotal role in the upcoming discussions.

Key Topics on the Agenda

The meeting will tackle urgent issues like the rise of AI-generated misinformation and the challenge of identifying when an AI system's capabilities warrant regulatory oversight. “We're going to think about how do we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors,” Raimondo said.

The event will also lay the groundwork for a broader AI summit in Paris in February, just weeks after the next U.S. president, whether Vice President Kamala Harris or former President Donald Trump, takes office. Harris has been instrumental in shaping the U.S. approach to AI risks.

Global Collaboration on AI Safety

Co-hosted by the U.S. Commerce Department and the State Department, the meeting will include representatives from the newly established national AI safety institutes in the U.S., UK, Australia, Canada, France, Japan, Kenya, South Korea, Singapore, and the European Union. Notably absent from the list of participants is China, though Raimondo indicated that additional countries may still join.

“I think that there are certain risks that we are aligned in wanting to avoid, like AIs applied to nuclear weapons, AIs applied to bioterrorism,” Raimondo noted, emphasizing the importance of international cooperation in preventing these threats.

U.S. and EU Lead in AI Regulation

While many governments have committed to AI safety, their regulatory approaches vary. The European Union has implemented comprehensive AI legislation, imposing strict regulations on high-risk AI applications. In contrast, President Biden's executive order on AI, signed last October, mandates that developers of the most advanced AI systems share safety test results with the government and sets standards for safe and secure AI tool deployment.

San Francisco-based OpenAI, the company behind ChatGPT, has also taken steps toward responsible AI use. It provided early access to its latest model, known as o1, to the U.S. and UK national AI safety institutes before its public release. The new model, which exhibits advanced reasoning capabilities, has been classified as posing a “medium risk” in the context of weapons of mass destruction.

Moving Beyond Voluntary AI Regulation

Since the surge in generative AI’s popularity in late 2022, the Biden administration has urged AI companies to voluntarily commit to rigorous testing of their most powerful models. “That is the right model,” Raimondo said, though she acknowledged the voluntary approach may not be sufficient moving forward: “We need Congress to take action.”

While tech companies generally support AI regulation in principle, some worry that stringent rules could stifle innovation. In California, Governor Gavin Newsom recently signed legislation to combat political deepfakes ahead of the 2024 election but has yet to decide on a more contentious bill that would regulate the development of extremely powerful AI models deemed capable of causing significant harm.