• AiNews.com
  • Posts
  • OpenAI’s Cybersecurity Grant Program Boosts AI Integration

OpenAI’s Cybersecurity Grant Program Boosts AI Integration

A high-tech lab environment with cybersecurity professionals working on computers. Digital elements like AI icons and cybersecurity symbols are overlaid, showcasing the integration of AI in cybersecurity. The atmosphere is focused and innovative, highlighting collaboration and advanced technology. Screens display data analysis and security algorithms

OpenAI’s Cybersecurity Grant Program Boosts AI Integration

OpenAI has shared updates on its Cybersecurity Grant Program, which was launched in 2023 to support advanced AI models and groundbreaking research at the intersection of cybersecurity and artificial intelligence. The program received over 600 applications, highlighting the critical need for collaboration between OpenAI and the cybersecurity community.

Selected Projects Highlighted

The program has supported a variety of innovative projects. Here are a few notable examples:

Wagner Lab at UC Berkeley

Professor David Wagner’s security research lab at UC Berkeley is developing techniques to defend against prompt-injection attacks in large language models (LLMs). This collaboration with OpenAI aims to enhance the trustworthiness of these models against cybersecurity threats.

Coguard

Albert Heinle, co-founder and CTO of Coguard, is leveraging AI to reduce software misconfigurations, a common cause of security incidents. AI helps automate the detection of these misconfigurations, offering significant improvements over outdated rules-based policies.

Mithril Security

Mithril has developed a proof-of-concept to secure inference infrastructure for LLMs. Their project includes open-source tools for deploying AI models on GPUs with secure enclaves based on Trusted Platform Modules (TPMs), ensuring data remains protected even from administrators. Their work is available on GitHub and detailed in a whitepaper.

Gabriel Bernadett-Shapiro

Individual grantee Gabriel Bernadett-Shapiro created the AI OSINT workshop and AI Security Starter Kit, providing technical training on LLMs and free tools for students, journalists, investigators, and information-security professionals. This initiative emphasizes training for international atrocity crime investigators and intelligence studies students at Johns Hopkins University.

Breuer Lab at Dartmouth

Professor Adam Breuer’s lab at Dartmouth is developing new defense techniques to prevent attacks on neural networks that reconstruct private training data. Their approach aims to avoid compromising model accuracy or efficiency.

Security Lab at Boston University (SeclaBU)

Ph.D. candidate Saad Ullah, Professor Gianluca Stringhini from SeclaBU, and Professor Ayse Coskun from Peac Lab are enhancing LLMs' ability to detect and fix code vulnerabilities. This research could enable cyber defenders to prevent exploits before they are used maliciously.

CY-PHY Security Lab at University of Santa Cruz (UCSC)

Professor Alvaro Cardenas’s group at UCSC is exploring the use of foundation models to design autonomous cyber defense agents that respond to network intrusions. They aim to compare these models with those trained using reinforcement learning to improve network security and threat information triage.

MIT Computer Science Artificial Intelligence Laboratory (MIT CSAIL)

Researchers Stephen Moskal, Erik Hemberg, and Una-May O’Reilly from MIT CSAIL are automating decision processes and actionable responses using prompt engineering in red-teaming exercises. They are also exploring LLM-Agent capabilities in Capture-the-Flag (CTF) challenges to discover vulnerabilities in a controlled environment.

ChatGPT’s Role in Cybersecurity

ChatGPT has become a popular tool among cybersecurity professionals, used for translating technical jargon, writing code for artifact analysis, creating log parsers, and summarizing incidents quickly. OpenAI has granted free access to ChatGPT Plus to many in the cybersecurity community to enhance AI adoption in cyber defense. This initiative is expanding to include ChatGPT Team and Enterprise, starting with partners at the Research and Education Network for Uganda (RENU).

Call for Proposals

OpenAI continues to invite proposals from those who share the vision of a secure, AI-driven future. Interested parties are encouraged to submit their proposals to join the effort in enhancing defensive cybersecurity technologies.