Google Launches Gemini 1.5 Pro & Flash for Developers
Image Source: ChatGPT-4o
Google has released two stable versions of its Gemini 1.5 API models—Gemini 1.5 Pro and Gemini 1.5 Flash—promising improved performance and lower costs for developers. The new models are available as part of the Gemini API and Google AI Studio, with expanded access to Google Cloud customers through Vertex AI.
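As a rough illustration of what calling the new stable models looks like, here is a minimal sketch using the google-generativeai Python SDK. The model identifiers below are assumed names for the September stable releases, and the API-key environment variable is illustrative; check Google's current model list before relying on either.

```python
# Hedged sketch: assumes the google-generativeai Python SDK and
# illustrative identifiers for the stable September releases.
import os

STABLE_MODELS = {
    "pro": "gemini-1.5-pro-002",     # assumed stable Pro identifier
    "flash": "gemini-1.5-flash-002", # assumed stable Flash identifier
}

def generate(prompt: str, tier: str = "flash") -> str:
    """Send a prompt to the chosen stable Gemini 1.5 model and return the reply text."""
    # Imported lazily so the sketch loads even without the SDK installed.
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # assumes the key is exported
    model = genai.GenerativeModel(STABLE_MODELS[tier])
    return model.generate_content(prompt).text
```

Vertex AI customers would go through the Vertex AI SDK instead; the call shape differs, but the model choice is the same.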
Enhanced Features and Performance
The updated Gemini 1.5 Pro and Flash models, launched on September 24, offer significant advancements over their predecessors, with improvements in code generation, mathematics, reasoning, and video analysis. Developers get a 50% price reduction on Gemini 1.5 Pro for prompts under 128K tokens, along with 2x higher rate limits on 1.5 Flash and 3x higher limits on 1.5 Pro.
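To see what the 50% cut means in practice, here is a back-of-the-envelope estimator. The baseline per-1K-token price is a placeholder argument, not Google's actual price sheet, and the only announced facts used are the 50% discount and the 128K-token threshold.

```python
def effective_input_cost(tokens: int, base_price_per_1k: float) -> float:
    """Estimate the input cost of a Gemini 1.5 Pro prompt.

    Applies the announced 50% discount for prompts under 128K tokens.
    base_price_per_1k is a hypothetical pre-discount USD price.
    """
    DISCOUNT_THRESHOLD = 128_000  # prompts under 128K tokens get the cut
    rate = base_price_per_1k * (0.5 if tokens < DISCOUNT_THRESHOLD else 1.0)
    return tokens / 1000 * rate
```

For example, with a hypothetical $1.00 per 1K tokens, a 100K-token prompt would cost $50 after the discount, while a 200K-token prompt stays at the full $200.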
Key Improvements and Cost Efficiency
Key enhancements include faster output, reduced latency, and updated filter settings, all aimed at making the Gemini 1.5 models more efficient and cost-effective for production use. The models post roughly a 7% gain on the MMLU-Pro benchmark and a 20% improvement on the MATH and HiddenMath benchmarks, with stronger results across a variety of tasks, including vision and Python code generation.
Expanded Capabilities for Developers
The Gemini 1.5 models are equipped to handle complex use cases, such as synthesizing information from extensive documents, answering questions about large code repositories, and processing long-form videos. Google has also introduced the Gemini-1.5-Flash-8B experimental model, which provides additional performance boosts in text and multimodal applications.
Increased Accessibility and Developer Support
To support more developers, Google is increasing the paid tier rate limits for Gemini 1.5 Flash to 2,000 RPM and for 1.5 Pro to 1,000 RPM, effective October 1. The update also includes a 64% reduction in input token costs, a 52% reduction in output token costs, and a 64% reduction in cached token costs for Gemini 1.5 Pro.
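Client code still has to stay under those per-model RPM ceilings. A minimal sliding-window throttle, using the paid-tier limits quoted above, might look like the sketch below; this is an illustrative pattern, not an official client, and the model keys are shorthand.

```python
import time
from collections import deque

# Paid-tier limits from the announcement (effective October 1); keys are shorthand.
RPM_LIMITS = {"gemini-1.5-flash": 2000, "gemini-1.5-pro": 1000}

class RpmThrottle:
    """Block until a request slot is free under the model's per-minute limit."""

    def __init__(self, model: str):
        self.limit = RPM_LIMITS[model]
        self.sent = deque()  # timestamps of requests in the last 60 seconds

    def acquire(self, now=None):
        now = time.monotonic() if now is None else now
        # Forget requests that have aged out of the 60-second window.
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        if len(self.sent) >= self.limit:
            # Wait until the oldest request in the window expires.
            wait = 60 - (now - self.sent[0])
            time.sleep(wait)
            now += wait
        self.sent.append(now)
```

Calling `throttle.acquire()` before each SDK request keeps a client within the published limit without relying on server-side 429 responses.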
Focus on Safety and Customization
Google remains committed to model safety and reliability. The latest Gemini models are equipped with improved content safety measures and a concise response style based on developer feedback. While safety filters are not applied by default, developers have the option to configure them as needed.
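Because filters are opt-in, a developer who wants them passes explicit safety settings with each request. The sketch below uses the string form the Python SDK accepts for categories and thresholds; treat the exact names and the chosen thresholds as assumptions to verify against the current Gemini API reference.

```python
# Illustrative safety configuration; category and threshold names should be
# double-checked against the Gemini API safety-settings reference.
SAFETY_SETTINGS = [
    {"category": "HARM_CATEGORY_HARASSMENT",        "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH",       "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
]
# Passed per request, e.g.:
# model.generate_content(prompt, safety_settings=SAFETY_SETTINGS)
```

Omitting a category leaves it at the API's default behavior, so listing all four makes the request's policy explicit.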
Future of Gemini Models
Google’s iterative approach with the Gemini 1.5 series continues to push the boundaries of what’s possible for AI developers. The company’s focus on reducing costs and increasing performance aims to make it easier for developers to build faster, smarter, and more affordable applications.
For more details on the latest Gemini 1.5 models and how to migrate to them, visit the Google Blog page.