
Magic Unveils LTM-2-mini, Partners with Google & NVIDIA

Image: a conceptual illustration of data streams representing 100 million tokens flowing into an AI model, with books (novels) and code symbolizing the data types it can hold, and Google Cloud and NVIDIA logos signifying the supercomputer partnership.

Image Source: ChatGPT-4o


Magic has unveiled its latest AI model, LTM-2-mini, capable of processing 100 million tokens of context—the equivalent of about 10 million lines of code or 750 novels in a single inference, and roughly 50 times more context than existing AI models handle.
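As a rough sanity check on those equivalences (the per-line and per-novel token counts below are illustrative assumptions, not figures from Magic):

```python
# Back-of-the-envelope check of the 100M-token equivalences.
# Token densities are rough assumptions: ~10 tokens per line of code,
# ~133,000 tokens per novel (~100k words at ~1.33 tokens/word).
CONTEXT_TOKENS = 100_000_000
TOKENS_PER_LINE = 10
TOKENS_PER_NOVEL = 133_000

lines_of_code = CONTEXT_TOKENS // TOKENS_PER_LINE   # 10,000,000 lines
novels = CONTEXT_TOKENS / TOKENS_PER_NOVEL          # ≈750 novels

print(f"{lines_of_code:,} lines of code, ~{novels:.0f} novels")
```

Under these assumptions the arithmetic lands close to the article's figures, which is all the comparison is meant to convey.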

Breakthrough in Contextual Processing

LTM-2-mini introduces a sequence-processing algorithm that handles long contexts roughly 1,000 times more efficiently than today's top-performing AI models. This advance opens new possibilities for AI, particularly for understanding and reasoning over vast amounts of data in real time.

Revolutionizing Software Development with Ultra-Long Context

Magic’s focus is on applying these ultra-long context models to software development. By allowing AI models to consider all code, documentation, and libraries in context—even those not publicly available—Magic aims to revolutionize code synthesis and other software development tasks. This approach could significantly improve the accuracy and functionality of AI-generated code.

Challenges with Current Long Context Evaluations

Traditional long context evaluations, such as the Needle In A Haystack test, have limitations that can skew results. These tests often rely on identifying a semantically recognizable element within a large context, which doesn’t accurately reflect real-world tasks. Magic addresses these issues with its HashHop evaluation, a method designed to eliminate semantic hints and better measure a model's ability to store and retrieve vast amounts of information.
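To see why such tests can be gamed, consider a toy needle-in-a-haystack construction (the filler text and needle below are invented for illustration): the needle is the only semantically distinctive sentence in the context, so a model can retrieve it from surface cues alone rather than by genuinely storing the whole context.

```python
import random

# Toy needle-in-a-haystack prompt (illustrative only).
# The "haystack" is repetitive filler; the "needle" is semantically
# unlike anything around it.
random.seed(0)
filler = ["The quarterly report was filed on time."] * 10_000
needle = "The secret passphrase is 'aurora-42'."
position = random.randrange(len(filler))
haystack = filler[:position] + [needle] + filler[position:]

prompt = " ".join(haystack) + "\nQuestion: What is the secret passphrase?"
# The needle sticks out: it is the only sentence mentioning a passphrase,
# so spotting it does not prove the model can recall arbitrary content.
assert sum("passphrase" in s for s in haystack) == 1
```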

Introduction of HashHop for Improved Model Evaluation

HashHop is a new evaluation method created by Magic to overcome the shortcomings of existing long context evaluations. By using random and incompressible hash pairs, HashHop challenges models to store and retrieve maximum information content without relying on semantic shortcuts. This method provides a more accurate measure of a model’s capability in handling complex tasks like variable assignments or library imports. If you would like to use HashHop, you can find it on GitHub.
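A minimal sketch of a HashHop-style task generator (an assumed structure based on Magic's description of random, incompressible hash pairs and multi-hop chains; Magic's actual implementation is the one on GitHub):

```python
import os
import hashlib

def make_hashhop_chain(hops: int):
    """Build a chain of random hash pairs: h0 -> h1 -> ... -> h_hops.
    The pairs would be shuffled into a large context, and the model
    must follow the chain from h0 with no semantic cues to lean on."""
    hashes = [hashlib.sha256(os.urandom(16)).hexdigest()[:16]
              for _ in range(hops + 1)]
    pairs = [(hashes[i], hashes[i + 1]) for i in range(hops)]
    return hashes, pairs

hashes, pairs = make_hashhop_chain(hops=3)
context = "\n".join(f"{a} = {b}" for a, b in pairs)
query = f"Complete the chain starting from {hashes[0]}."
# Ground-truth answer: the final hash in the chain.
answer = hashes[-1]
```

Because each hash is random, the only way to answer correctly is to actually store and traverse the pairs—much like resolving a chain of variable assignments or library imports in code.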

LTM-2-mini: A Step Towards AI with Near-Perfect Recall

Magic’s LTM-2-mini model has demonstrated its capabilities through real-world applications, such as creating a calculator using a custom in-context GUI framework and implementing a password strength meter for an open-source project without human intervention. These breakthroughs in context length allow AI to process and reason over dense and complex data, paving the way for AI assistants with near-perfect recall and memory.

Building Next-Gen AI Supercomputers on Google Cloud

Magic is currently training a larger version of the LTM-2 model on a newly built supercomputer and has plans to construct two more supercomputers on Google Cloud—Magic-G4 and Magic-G5. These supercomputers will be powered by NVIDIA's H100 and GB200 GPUs, with the capacity to scale up to tens of thousands of Blackwell GPUs, ensuring the computational power needed for training and deploying large-scale AI models.

To read more details and to see their performance graphs, please visit Magic’s website.

Leadership Perspectives on the Partnership

Eric Steinberger, CEO & Co-founder of Magic, said, “We are excited to partner with Google and NVIDIA to build our next-gen AI supercomputer on Google Cloud. NVIDIA’s GB200 NVL72 system will greatly improve inference and training efficiency for our models, and Google Cloud offers us the fastest timeline to scale and a rich ecosystem of cloud services.”

Leaders from Google Cloud and NVIDIA also highlighted the critical role of AI supercomputers in advancing the capabilities of large language models and the impact these innovations will have on the future of AI.

Substantial Funding to Fuel AI Innovation

Magic has secured a total of $465 million in funding, including a recent $320 million investment round led by notable investors such as Eric Schmidt, Jane Street, Sequoia, and Atlassian. This funding will support the continued development of advanced AI models and infrastructure, further solidifying Magic’s position as a leader in the AI industry.