
NIST Releases Dioptra: A Tool to Test AI Model Risk and Safety

Image: A digital interface representing NIST's Dioptra tool for testing AI model risks, showing model performance metrics and indicators of malicious attacks.


The National Institute of Standards and Technology (NIST), a U.S. Commerce Department agency, has reintroduced a tool designed to evaluate how malicious attacks, particularly those that “poison” AI model training data, can impact AI system performance. This tool, named Dioptra, is modular, open-source, and web-based.

Dioptra's Objectives and Capabilities

Dioptra, initially launched in 2022, assists companies and individuals in assessing, analyzing, and tracking AI risks. It can be used to benchmark and research models, and provides a platform for exposing them to simulated threats in a “red-teaming” environment.
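
To make the idea of a data-poisoning attack concrete, the following is a minimal, hypothetical Python sketch, not based on Dioptra's actual interface, of the kind of experiment such a testbed automates: training the same classifier on clean versus label-flipped data and comparing test accuracy. The dataset, model choice, and flip fractions are illustrative assumptions.

# Hypothetical sketch (not Dioptra's API): measure how label-flipping
# "poisoning" of training data degrades a simple classifier's accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Illustrative synthetic binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels ("poisoning") and report test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return accuracy_score(y_test, model.predict(X_test))

for frac in (0.0, 0.1, 0.3, 0.5):
    print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_with_poisoning(frac):.3f}")

Tools like Dioptra are intended to run this style of before-and-after comparison systematically, across many attack types, and to quantify how much each attack degrades performance.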

“Testing the effects of adversarial attacks on machine learning models is one of Dioptra's goals,” NIST stated in a press release. The software is open-source and freely available, intended to help government agencies and small to medium-sized businesses evaluate AI developers’ performance claims.

Enhancing AI Safety and Security

Dioptra was released alongside documents from NIST and its newly created AI Safety Institute that offer guidance on mitigating AI dangers, such as the generation of nonconsensual pornography. The release follows the U.K. AI Safety Institute's Inspect toolset, which is similarly aimed at assessing model capabilities and safety. The U.S. and U.K. are collaborating on advanced AI model testing, as announced at the AI Safety Summit at Bletchley Park last November.

Executive Order and AI Standards

Dioptra stems from President Joe Biden's executive order on AI, which directs NIST to help with AI system testing. The order also establishes standards for AI safety and security, requiring companies such as Apple to notify the federal government and share safety test results before deploying their models publicly.

Challenges in AI Benchmarking

AI benchmarking is challenging due to the complexity and proprietary nature of sophisticated AI models. A report from the Ada Lovelace Institute found that current policies allow AI vendors to selectively choose which evaluations to conduct, making it difficult to determine a model's real-world safety.

NIST acknowledges that Dioptra cannot fully de-risk models but suggests it can highlight which attacks might degrade AI system performance and quantify the impact. However, Dioptra is currently limited to models that can be downloaded and used locally, like Meta’s Llama family. Models accessible only through an API, like OpenAI’s GPT-4, are not supported at this time.