Microsoft Unveils AI Tool to Correct Hallucinations: Experts React

Image: Digital illustration of a robotic hand correcting AI-generated text on a holographic interface, surrounded by symbols of grounding and validation. (Image Source: ChatGPT-4o)

Microsoft has announced a new service called "Correction," designed to revise factually incorrect AI-generated text, known as hallucinations. The tool, part of Microsoft’s Azure AI Content Safety API, aims to improve the accuracy of text generated by AI models such as Meta’s Llama and OpenAI’s GPT-4o.

How Correction Works: A Two-Model Approach

Correction uses two models to detect and correct errors. The first, a classifier model, identifies potentially incorrect or fabricated parts of AI-generated content. The second, a language model, aligns these sections with “grounding documents” to correct the inaccuracies. This process is designed to enhance the reliability of AI outputs in fields like medicine, where accuracy is critical.
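To make the workflow concrete, here is a minimal sketch of what calling such a service could look like, assuming a REST surface shaped like the groundedness-detection preview in Azure AI Content Safety; the endpoint path, `api-version`, and field names (including `correction` and `correctionText`) are assumptions based on that preview, not details confirmed in this article:

```python
import requests

# Placeholder resource details; substitute your own Azure resource and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

def detect_and_correct(text: str, grounding_sources: list[str]) -> dict:
    """Flag ungrounded spans in `text` and request a corrected rewrite.

    The classifier stage identifies potentially fabricated spans; setting
    `correction` asks the language-model stage to rewrite them against
    the supplied grounding documents.
    """
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:detectGroundedness",
        params={"api-version": "2024-09-15-preview"},  # assumed preview version
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={
            "domain": "Medical",                    # accuracy-critical domain
            "task": "Summarization",
            "text": text,                           # AI-generated output to verify
            "groundingSources": grounding_sources,  # documents to align against
            "correction": True,                     # request the grounded rewrite
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # expected: ungrounded spans plus a corrected text

result = detect_and_correct(
    "The patient was prescribed 500mg of the drug twice daily.",
    ["Chart notes: the prescribed dose is 250mg once daily."],
)
print(result.get("correctionText"))
```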

Google’s Response: A Similar Approach to Grounding AI

Earlier this summer, Google introduced a comparable feature in its Vertex AI platform. This tool allows users to “ground” their models by leveraging data from third-party providers, their own datasets, or even Google Search. This approach aims to enhance the accuracy of AI outputs by anchoring them to reliable sources, similar to Microsoft’s Correction tool.
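For comparison, a grounded call on Vertex AI might look like the sketch below, which attaches a Google Search retrieval tool to a Gemini model so answers are anchored to search results rather than the model’s parametric memory alone; the project ID, model name, and SDK surface are illustrative and may differ across SDK versions:

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

# Illustrative project and region; replace with your own.
vertexai.init(project="my-project", location="us-central1")

# Ground responses in Google Search results instead of model memory alone.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "What did Microsoft announce as part of Azure AI Content Safety?",
    tools=[search_tool],
)
print(response.text)  # the response carries grounding metadata citing its sources
```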

Challenges and Limitations of Correcting AI Hallucinations

Despite Microsoft’s claims, experts remain cautious. AI systems, by nature, do not understand information as humans do; they generate text based on patterns in their training data. This can lead to "hallucinations"—false or misleading content. Os Keyes, a PhD candidate studying the ethical impact of emerging tech, argues that these hallucinations are intrinsic to how AI models function, making them difficult to fully eliminate.

Microsoft's Push to Improve AI Credibility

The new tool aims to reduce user dissatisfaction and mitigate reputational risks associated with AI-generated content. However, as Mike Cook, an AI expert and research fellow at Queen Mary University, points out, the service may give users a false sense of security, leading them to trust AI outputs more than they should. The feature is free for up to 5,000 text records per month and charges 38 cents per 1,000 text records beyond that, highlighting the business angle to Microsoft’s strategy.
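To put that pricing in concrete terms, the sketch below works through the tiering described above; the free tier and per-1,000-record rate come from the article, while the function and example volume are illustrative:

```python
def monthly_correction_cost(records: int, free_tier: int = 5_000,
                            rate_per_1k: float = 0.38) -> float:
    """Estimate the monthly cost of Correction for a given usage volume."""
    billable = max(0, records - free_tier)  # first 5,000 records are free
    return billable / 1_000 * rate_per_1k

# 25,000 records a month leaves 20,000 billable: 20 x $0.38 = $7.60
print(f"${monthly_correction_cost(25_000):.2f}")
```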

The Broader Implications for AI in Business

Microsoft is under pressure to prove the value of its AI investments. The tech giant has spent nearly $19 billion on AI-related projects this year alone, yet has seen limited revenue gains. Concerns over accuracy and hallucinations, along with performance and cost issues, have led some businesses to pause deployments of Microsoft’s AI products, such as Microsoft 365 Copilot.

Balancing Innovation with Responsibility

While Microsoft’s Correction tool may address some concerns, experts like Mike Cook believe the industry needs to focus on understanding AI’s limitations before widespread deployment. Cook warns that deploying AI without fully grasping its strengths and weaknesses is like “building the landing gear and parachutes while on the way to the destination.”

Navigating the Future of AI with Caution

As AI technology continues to evolve, the challenge for companies like Microsoft is to balance innovation with responsible development. Correction is a step towards improving AI reliability, but it also underscores the complexities and risks associated with integrating AI into critical applications.