OpenAI's 'Strawberry' Project Aims to Enhance AI Reasoning
OpenAI, the creator of ChatGPT, is developing a new approach for its AI models under a project code-named “Strawberry,” according to internal documents and a source familiar with the matter. The project aims to enhance the advanced reasoning capabilities of its models, a critical area for the Microsoft-backed startup.
Project Details and Secrecy
Internal OpenAI documents reviewed by Reuters in May reveal that teams are actively working on Strawberry. The precise date of the documents is unknown, as is the timeline for when the project might become publicly available. Even within the company, descriptions of Strawberry remain highly confidential.
The project aims to develop AI models capable of not just generating answers but also planning ahead and navigating the internet autonomously to perform “deep research,” something current AI models have yet to achieve. This approach could significantly enhance the AI’s ability to carry out complex tasks and conduct thorough research independently.
Official Statement and Industry Context
Asked about Strawberry and the details reported in this story, an OpenAI spokesperson said: “We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time.” The spokesperson did not directly address questions about Strawberry.
Previously known as Q*, the Strawberry project has shown potential: earlier demos were reportedly capable of solving difficult science and math problems beyond the reach of commercially available models. One source mentioned internal tests in which an AI model scored over 90% on the MATH dataset, a benchmark of competition-level math problems, though it’s unclear whether this was related to Strawberry.
Demonstrations and Capabilities
At a recent internal all-hands meeting, OpenAI demonstrated a research project said to exhibit new human-like reasoning skills, though it has not been confirmed whether this was Strawberry. The innovation aims to dramatically improve AI models’ reasoning abilities through a specialized processing method applied after the models have already been trained on large datasets.
AI researchers believe that enhancing reasoning is key to achieving human- or superhuman-level intelligence in AI. While current large language models excel at summarizing text and composing prose, they struggle with common-sense problems and logical reasoning, often producing incorrect information.
Reasoning and Long-Term Goals
Improving reasoning in AI models is seen as essential for enabling them to make scientific discoveries and build new software applications. OpenAI CEO Sam Altman highlighted the importance of reasoning abilities in AI progress earlier this year. Other tech giants like Google, Meta, and Microsoft are also exploring ways to enhance reasoning in AI models, though opinions on the feasibility of incorporating long-term planning vary.
Strawberry's Approach and Potential
Strawberry aims to address these challenges by employing a specialized post-training process to fine-tune AI models, making them capable of performing complex tasks over extended periods. The method bears similarities to Stanford's "Self-Taught Reasoner" (STaR), which allows AI models to iteratively improve by creating their own training data.
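OpenAI has not disclosed how Strawberry’s post-training step actually works. The sketch below illustrates only the published STaR idea it is compared to: generate step-by-step rationales, keep the ones that reach a known correct answer, and fine-tune on them. The `model` and `fine_tune` callables here are hypothetical placeholders for a real language model and training pipeline, not anything from OpenAI.

```python
"""Minimal sketch of a STaR-style self-training round.

This illustrates the published Self-Taught Reasoner idea, not OpenAI's
undisclosed Strawberry method. `model` is assumed to be any callable that
maps a prompt string to generated text, and `fine_tune` any routine that
returns a model trained on (question, rationale) pairs."""

from typing import Callable, List, Tuple

Model = Callable[[str], str]


def solve(model: Model, question: str, hint: str | None = None) -> Tuple[str, str]:
    """Ask the model for a step-by-step rationale ending in 'Answer: <answer>'."""
    prompt = f"Question: {question}\n"
    if hint is not None:
        # Rationalization pass: reveal the correct answer and ask the model
        # to produce a rationale that reaches it (used when the first try fails).
        prompt += f"(The correct answer is {hint}.)\n"
    prompt += "Think step by step, then finish with 'Answer: <answer>'."
    output = model(prompt)
    answer = output.rsplit("Answer:", 1)[-1].strip() if "Answer:" in output else ""
    return output, answer


def star_round(
    model: Model,
    dataset: List[Tuple[str, str]],              # (question, known correct answer)
    fine_tune: Callable[[Model, List[Tuple[str, str]]], Model],
) -> Model:
    """One iteration: keep only rationales that reach the correct answer,
    then fine-tune on them so the improved model can seed the next round."""
    kept: List[Tuple[str, str]] = []
    for question, gold in dataset:
        rationale, answer = solve(model, question)
        if answer != gold:
            rationale, answer = solve(model, question, hint=gold)
        if answer == gold:
            kept.append((question, rationale))
    return fine_tune(model, kept)
```

In the STaR paper, the “rationalization” pass that feeds the correct answer back as a hint is what lets the model create training data even for problems it initially gets wrong; the second call to `solve` above mirrors that step.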
One of Strawberry’s goals is to enable AI models to perform “long-horizon tasks” (LHT), which require planning and executing a series of actions over time. OpenAI plans to evaluate these models using a “deep-research” dataset, though details about the dataset remain undisclosed.
The project also includes testing AI capabilities in web browsing and conducting research autonomously with the help of a “computer-using agent” (CUA). OpenAI intends to further assess its models’ ability to handle tasks typically performed by software and machine learning engineers.
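Reuters’ description of the “computer-using agent” is high-level, so the following is only a generic sketch of the observe-plan-act loop that browsing agents typically use for long-horizon tasks; the `BrowserAgent`, `propose_action`, `is_done`, and `browser` names are hypothetical and do not reflect OpenAI’s actual design.

```python
"""Generic sketch of an observe-plan-act loop for a browser-using agent.

Nothing here reflects OpenAI's undisclosed CUA design; the browser interface
and decision callables are assumed placeholders for illustration only."""

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Step:
    observation: str   # e.g. a text rendering of the current web page
    action: str        # e.g. "click #search", "type 'AI reasoning'", "open <url>"


@dataclass
class BrowserAgent:
    propose_action: Callable[[str, List[Step]], str]  # model picks the next action
    is_done: Callable[[List[Step]], bool]             # model judges task completion
    history: List[Step] = field(default_factory=list)

    def run(self, browser, goal: str, max_steps: int = 50) -> List[Step]:
        """Pursue a long-horizon goal by repeatedly observing the page,
        choosing an action, and applying it, up to a fixed step budget."""
        for _ in range(max_steps):
            observation = browser.observe()                 # assumed browser API
            prompt = f"Goal: {goal}\nPage: {observation}"
            action = self.propose_action(prompt, self.history)
            self.history.append(Step(observation, action))
            browser.apply(action)                           # assumed browser API
            if self.is_done(self.history):
                break
        return self.history
```

The explicit action history and the step budget are the usual guards in such loops: the history lets the agent plan across many steps, while the budget keeps an autonomous run from continuing indefinitely.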