
Apple Joins White House’s AI Safety Commitment, Plans AI Integration

Image: Apple's commitment to AI safety, depicted as a handshake between an Apple representative and a White House official.


Apple has signed the White House’s voluntary commitment to develop safe, secure, and trustworthy artificial intelligence (AI), according to a press release on Friday. The company plans to integrate its generative AI offering, Apple Intelligence, into its core products, bringing generative AI to its 2 billion users.

Joining Industry Leaders

Apple joins 15 other technology companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, in committing to the White House’s guidelines for developing generative AI. The guidelines were first introduced in July 2023, and Apple's signature marks its official stance as it prepares to deeply integrate AI into iOS. At WWDC in June, Apple announced its generative AI push, starting with a partnership to embed ChatGPT in the iPhone. Signing the commitment signals Apple's willingness to work within the White House’s AI guidelines, potentially positioning itself favorably ahead of future regulatory challenges.

Evaluating the Commitment’s Impact

While Apple's voluntary commitment lacks enforcement mechanisms, the White House describes it as the “first step” toward developing safe, secure, and trustworthy AI. The second step was President Biden’s AI executive order, issued in October 2023, and several AI regulation bills are now moving through Congress and state legislatures.

Key Commitments and Responsibilities

Under the commitment, AI companies agree to:

  • Red-Team Testing: Conduct adversarial testing on AI models before public release and share the findings with the public.

  • Confidentiality of AI Model Weights: Treat unreleased AI model weights as confidential and work on them in secure environments with limited access.

  • Content Labeling Systems: Develop labeling systems, such as watermarking, to help users identify AI-generated content.

Future Regulatory Developments

The Department of Commerce is set to release a report on the benefits, risks, and implications of open-source foundation models. Open-source AI has become a contentious regulatory issue: some advocate restricting access to powerful model weights in the name of safety, a move that could reshape the AI startup and research ecosystem. The White House’s position on this question could significantly influence the broader AI industry.

Progress on AI Initiatives

The White House noted substantial progress on tasks outlined in the October executive order. Federal agencies have made over 200 AI-related hires, awarded computational resources to more than 80 research teams, and released multiple frameworks for AI development.