
MIT's New Method Enhances AI Model Trust

Illustration: a medical professional uses a digital device to assess a machine-learning model's confidence in a medical diagnosis, with a confidence percentage and uncertainty indicators displayed alongside the image.

Since machine-learning models can sometimes make incorrect predictions, researchers often program them to indicate their level of confidence in a given decision. This capability is especially important in high-stakes settings, such as medical image analysis or job application filtering, where false predictions can have significant consequences.

MIT's New Approach

MIT researchers have developed a new method to improve uncertainty estimates in machine-learning models. This technique not only generates more accurate uncertainty estimates but also does so more efficiently. The scalability of this method means it can be applied to large deep-learning models used in critical areas like healthcare.

Benefits for End Users

This new method provides end users with better information to determine whether to trust a model's predictions. This is particularly beneficial for users without extensive machine-learning expertise.

“It is easy to see these models perform really well in scenarios where they are very good, and then assume they will be just as good in other scenarios. This makes it especially important to push this kind of work that seeks to better calibrate the uncertainty of these models to make sure they align with human notions of uncertainty,” says lead author Nathan Ng, a graduate student at the University of Toronto and visiting student at MIT.

Research Team and Presentation

Ng co-authored the paper with Roger Grosse, an assistant professor at the University of Toronto, and Marzyeh Ghassemi, an associate professor at MIT. The research will be presented at the International Conference on Machine Learning.

Challenges with Existing Methods

Traditional methods for uncertainty quantification often involve complex calculations that don't scale well to models with millions of parameters. They also rely on assumptions about the model and the training data that can limit the accuracy of the resulting estimates.

Minimum Description Length Principle (MDL)

The researchers used the minimum description length principle (MDL), which does not require assumptions that can limit other methods. MDL quantifies and calibrates uncertainty for test points the model is asked to label. The new technique, known as IF-COMP, makes MDL fast enough for use with large deep-learning models.

How MDL Works

MDL involves considering all possible labels a model could give a test point. If there are many alternative labels that fit well, the model’s confidence in its chosen label should decrease. For example, if a model labels a medical image as showing pleural effusion but is willing to update its belief when told the image shows edema, the model should be less confident in its original decision.
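As a rough illustration of that intuition, and not the researchers' actual procedure, the Python sketch below treats any label whose predicted probability falls within a hypothetical margin of the top choice as a plausible alternative; the class names, probabilities, and margin are all invented for the example.

```python
import numpy as np

def count_plausible_alternatives(probs, margin=0.15):
    """Count labels whose probability is within `margin` of the top label.

    Toy proxy for the MDL intuition: if several alternative labels fit
    nearly as well as the chosen one, confidence in that choice should
    be discounted.
    """
    probs = np.asarray(probs, dtype=float)
    top = probs.max()
    # Subtract 1 so the chosen label itself is not counted as an alternative.
    return int(np.sum(probs >= top - margin)) - 1

# Hypothetical softmax outputs for a four-class chest-X-ray problem,
# e.g. [pleural effusion, edema, pneumonia, normal].
confident = [0.92, 0.04, 0.03, 0.01]   # no close alternatives
hesitant  = [0.40, 0.35, 0.15, 0.10]   # "edema" fits almost as well

print(count_plausible_alternatives(confident))  # 0 -> keep confidence
print(count_plausible_alternatives(hesitant))   # 1 -> discount confidence
```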

Stochastic Data Complexity

MDL measures confidence using a concept called stochastic data complexity. Confident models use short codes to describe a point, while uncertain models use longer codes to capture multiple possibilities. Testing each point using MDL would require extensive computation.
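A simplified way to see the link between confidence and code length is the standard Shannon relation, under which encoding a label drawn from the model's predictive distribution costs -log2 p bits. The sketch below, reusing the invented probabilities from the previous example, shows only that relation; the full stochastic data complexity also accounts for how much the model would have to change to accommodate other labels.

```python
import numpy as np

def code_length_bits(probs, label):
    """Shannon code length, in bits, for encoding `label` under the model's
    predictive distribution; shorter codes correspond to higher confidence."""
    return -np.log2(float(np.asarray(probs)[label]))

# Same invented distributions as in the previous sketch.
print(code_length_bits([0.92, 0.04, 0.03, 0.01], 0))  # ~0.12 bits: short code
print(code_length_bits([0.40, 0.35, 0.15, 0.10], 0))  # ~1.32 bits: longer code
```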

IF-COMP Technique

IF-COMP approximates stochastic data complexity using influence functions together with a technique called temperature scaling, which improves the calibration of a model's outputs. This combination allows it to produce high-quality uncertainty estimates efficiently.
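The influence-function machinery is beyond a short example, but temperature scaling on its own is a widely used calibration step: a single scalar temperature is fitted on held-out data so that the softened probabilities better match observed accuracy. The sketch below uses synthetic logits and labels in place of real validation data and is only meant to show how that fitting step works, not how IF-COMP is implemented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(temperature, logits, labels):
    """Negative log-likelihood of held-out labels after dividing logits by T."""
    probs = softmax(logits / temperature)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels):
    """Find the single scalar T that best calibrates the model's confidences."""
    result = minimize_scalar(nll, bounds=(0.05, 10.0), args=(logits, labels),
                             method="bounded")
    return result.x

# Synthetic validation set for a three-class model with inflated logits,
# standing in for the held-out data a practitioner would actually use.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
logits = rng.normal(size=(200, 3)) * 4.0   # large logits -> overconfident probabilities
logits[np.arange(200), labels] += 2.0      # modest signal toward the true label

T = fit_temperature(logits, labels)
print(f"fitted temperature: {T:.2f}")      # T > 1 softens an overconfident model
```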

Performance and Applications

IF-COMP efficiently produces well-calibrated uncertainty quantifications, detects mislabeled data points, and identifies outliers. Testing showed it was faster and more accurate than other methods.

“It is really important to have some certainty that a model is well-calibrated, and there is a growing need to detect when a specific prediction doesn’t look quite right. Auditing tools are becoming more necessary in machine-learning problems as we use large amounts of unexamined data to make models that will be applied to human-facing problems,” says Ghassemi.

Future Directions

Because IF-COMP is model-agnostic, it can provide accurate uncertainty estimates for many types of machine-learning models, enabling deployment in a wide range of real-world settings and helping practitioners make better-informed decisions.

“People need to understand that these systems can be fallible and make errors. A model may look highly confident but can be swayed by contrary evidence,” Ng says.

The researchers plan to apply their approach to large language models and explore other use cases for the minimum description length principle.