
Microsoft Patents System to Reduce AI Hallucinations with Sources

Illustration: Microsoft's proposed "response-augmenting system" (RAS), depicted as an AI chatbot surrounded by icons for knowledge sources, user feedback, accuracy checks, and AI safety.

Image Source: ChatGPT-4o

Microsoft has applied for a patent aimed at addressing AI "hallucinations"—instances where AI models generate incorrect or misleading information. The patent, titled "Interacting with a Language Model using External Knowledge and Feedback," was filed with the US Patent and Trademark Office (USPTO) last year and made public on October 31.

How Microsoft’s Proposed Fix Works

The core of Microsoft’s proposed solution is a "response-augmenting system" (RAS) designed to enhance AI responses by automatically pulling in additional information from external sources. When an AI model receives a user query, the RAS could search the web or a dataset for relevant information, helping to ensure that answers are based on reliable sources. If it detects that the AI’s response lacks supporting information, the system would classify the response as "not useful."

The RAS could also notify users if it deems an answer questionable, giving them the option to provide feedback. Importantly, this system wouldn’t require developers to fine-tune or retrain their existing AI models, making it a potentially accessible solution for reducing AI-generated falsehoods.
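To make that flow concrete, here is a minimal Python sketch of how such a response-augmenting check might behave. It is based only on the patent's reported description; every name in it (AugmentedResponse, retrieve_sources, supports, augment) is a hypothetical stand-in, not Microsoft's implementation.

```python
from dataclasses import dataclass, field


@dataclass
class AugmentedResponse:
    """Hypothetical container for a model answer plus verification metadata."""
    answer: str
    sources: list[str] = field(default_factory=list)
    useful: bool = True           # False when no supporting source is found
    needs_feedback: bool = False  # True when the user should be notified


def retrieve_sources(query: str) -> list[str]:
    # Toy stand-in: a real system would search the web or a dataset.
    corpus = [
        "The USPTO makes patent applications public after filing.",
        "Grounded answers cite supporting documents.",
    ]
    terms = set(query.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]


def supports(source: str, answer: str) -> bool:
    # Toy overlap heuristic; a real verifier would be a trained model.
    answer_terms = {w for w in answer.lower().split() if len(w) > 3}
    return bool(answer_terms) and len(answer_terms & set(source.lower().split())) >= 2


def augment(query: str, answer: str) -> AugmentedResponse:
    supporting = [s for s in retrieve_sources(query) if supports(s, answer)]
    if not supporting:
        # No external evidence found: classify the response as "not useful"
        # and flag it so the user can be notified and asked for feedback.
        return AugmentedResponse(answer, useful=False, needs_feedback=True)
    return AugmentedResponse(answer, supporting)
```

Because the check wraps the model's output rather than touching its weights, an existing model could be used unchanged, which is what makes the no-retraining claim plausible.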

Potential Applications in Microsoft’s AI Ecosystem

Though the patent is still under review, the technology it describes could become a valuable addition to Microsoft's suite of AI tools, such as Copilot. However, Microsoft clarified to PCMag that the patent is separate from its existing Azure AI Content Safety tool, which offers backend fact-checking for business AI chatbots. Content Safety assesses whether AI responses are "grounded" or "ungrounded" before presenting them to users. This doesn't fully eliminate the risk of false information, but it adds a layer of verification: the AI's claims are checked against actual data, and answers are provided only when existing sources back them up.
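As a rough illustration of that grounded-versus-ungrounded gate, consider the toy check below. It is an assumed sketch only: the real Azure AI Content Safety service is a managed API, and the word-overlap heuristic here merely stands in for its verifier.

```python
def is_grounded(claim: str, grounding_sources: list[str]) -> bool:
    # Assumed stand-in for a groundedness verifier: treat the claim as
    # grounded if most of its content words appear in some source document.
    claim_terms = {w for w in claim.lower().split() if len(w) > 3}
    if not claim_terms:
        return False
    for source in grounding_sources:
        overlap = claim_terms & set(source.lower().split())
        if len(overlap) / len(claim_terms) >= 0.7:
            return True
    return False


def answer_or_decline(answer: str, sources: list[str]) -> str:
    # Present the answer only when existing sources back it up.
    if is_grounded(answer, sources):
        return answer
    return "I can't verify that claim against the available sources."
```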

Addressing AI's Hallucination Problem

AI hallucinations have become a pressing issue in generative AI: inaccurate or even bizarre responses undermine user trust. High-profile blunders from systems like Google's AI Overviews and X's Grok AI have shown the risks, ranging from recommending glue as a pizza topping to spreading misinformation.

A Growing Need for Reliable AI

Despite the ongoing issues, tech giants like Microsoft, Google, and Meta are pressing forward with AI advancements, even exploring nuclear power options to meet the energy demands of expanding AI infrastructure. As these systems scale, so does the need for safeguards that keep their outputs reliable.

Looking Ahead

Microsoft’s proposed response-augmenting system represents a proactive approach to making AI models more reliable and transparent. If the patent is granted and the system put into practice, it could reduce the spread of misinformation in AI responses and enhance user trust in AI-powered tools.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.