
OpenAI Partners with Broadcom and TSMC for First In-House AI Chip

[Image: Concept illustration of a futuristic chip design lab showing OpenAI's custom AI chip collaboration with Broadcom and TSMC. Image Source: ChatGPT-4o]


OpenAI has embarked on developing its first custom AI chip in collaboration with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC). This project aims to diversify OpenAI’s chip supply and reduce reliance on Nvidia’s GPUs, which currently dominate the market for AI processing. Sources indicate that OpenAI has opted for this in-house approach instead of establishing a costly network of chip factories, or "foundries."

Scaling Back Ambitions, Focusing on Custom Chip Design

Initially, OpenAI explored building its own foundry network to handle all stages of chip production, but due to high costs and long timelines, the company decided to narrow its focus to chip design. With Broadcom’s support, OpenAI is working on an AI inference chip, targeting a 2026 launch, with plans to use TSMC’s facilities for manufacturing.

Broadcom’s Role: Broadcom is assisting with the chip’s design and with high-speed data transfer between interconnected chips—a crucial capability for AI models, which distribute workloads across large numbers of chips processing in parallel.

AMD Chips via Azure: OpenAI will also begin using AMD’s MI300X chips through Microsoft’s Azure platform, marking a shift from the industry’s heavy reliance on Nvidia GPUs.

OpenAI’s Multi-Pronged Chip Strategy

OpenAI has assembled a team of approximately 20 chip experts, including former Google engineers who worked on Tensor Processing Units (TPUs). This chip strategy aims to support OpenAI’s continued growth in generative AI by addressing infrastructure costs and supply chain issues. With soaring demand for compute power to train and operate AI models like ChatGPT, OpenAI’s decision to develop its own chips while maintaining strategic relationships with Nvidia, Broadcom, and TSMC highlights a careful balancing act.

Broader Impact on AI Chip Market: Nvidia’s GPUs currently hold over 80% of the AI chip market, but supply shortages and rising prices have pushed companies like OpenAI, Microsoft, and Meta to explore alternatives. OpenAI’s adoption of AMD’s MI300X chips, launched in late 2023, signals growing competition in the GPU space as companies seek to diversify their suppliers.

Costs and Challenges in AI Infrastructure

Training and deploying models like ChatGPT require substantial resources, contributing to OpenAI’s projected $5 billion loss for the year against an estimated $3.7 billion in revenue. Compute expenses—including hardware, cloud services, and electricity—are among its largest costs, driving the push for more cost-effective chip solutions. Even as it seeks greater chip independence, OpenAI must maintain a working relationship with Nvidia to retain access to Nvidia’s latest Blackwell chips.

Looking Ahead

OpenAI’s collaboration with Broadcom and TSMC signifies a new chapter in its approach to AI hardware. By creating a custom chip and diversifying its suppliers, OpenAI aims to stabilize its infrastructure costs and build resilience in a rapidly growing market. If successful, the in-house chip strategy could set a precedent for other AI companies seeking optimized infrastructure in a high-demand industry.