Adobe’s Project Super Sonic Uses AI to Generate Custom Sound Effects
Image Source: ChatGPT-4o
Creating engaging videos requires more than just great visuals—sound plays an essential role in enhancing the viewer’s experience. However, finding the right sound effects or creating them from scratch can be a time-consuming process. At its annual MAX conference, Adobe unveiled Project Super Sonic, an experimental prototype that demonstrates how users could one day use AI-driven tools like text-to-audio, object recognition, and even their own voice to generate background sounds and audio effects for video projects.
AI-Generated Sound Effects from Text Prompts
One of the headline features of Project Super Sonic is its ability to generate sound effects from a text prompt. Similar text-to-audio services already exist, such as those from ElevenLabs, so this capability isn't groundbreaking on its own; rather, it provides the foundation for the features that follow.
Innovative Object Recognition for Audio
Adobe has taken the text-to-audio concept a step further by integrating object recognition into the workflow. Users can click on any object in a video frame, and the system will create a prompt based on the object, generating a corresponding sound effect. This fusion of multiple AI models allows for a seamless experience, combining visual and audio elements in video creation.
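The click-to-sound workflow described above chains two models: an object recognizer turns a click into a label, and that label becomes a text prompt for the audio model. Adobe has not published an API for Project Super Sonic, so the sketch below uses stub functions with hypothetical names purely to illustrate the chaining:

```python
# Hypothetical sketch of a click-to-sound pipeline. The real models are
# not public; these stubs only illustrate how the pieces would chain.

def recognize_object(frame, x, y):
    """Stub object recognizer: map a click position to a label.
    A real system would run a vision model on the video frame."""
    return "car"  # pretend the user clicked on a car

def build_prompt(label):
    """Turn a recognized label into a text-to-audio prompt."""
    return f"sound effect of a {label}, realistic, short clip"

def generate_sound(prompt):
    """Stub text-to-audio model: return a placeholder result.
    A real model would return waveform data."""
    return {"prompt": prompt, "audio": b""}

# User clicks at pixel (320, 180) in a video frame.
label = recognize_object(frame=None, x=320, y=180)
result = generate_sound(build_prompt(label))
print(result["prompt"])  # sound effect of a car, realistic, short clip
```

The point is the fusion: the user never types a prompt; the vision model writes it for them.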
Vocal Control for Custom Audio Creation
The real “wow” moment of Project Super Sonic comes with its vocal control mode. Here, users can record themselves mimicking the sounds they want—whether by voice, clapping, or even playing an instrument—and the AI will generate the correct audio effect. This approach gives creators greater control over the energy, timing, and expressiveness of the sound, transforming the tool into a more immersive and flexible sound design platform.
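Matching the energy and timing of a user's recording implies extracting some kind of envelope from the input signal and using it to shape the generated audio. Adobe has not described its actual method; the sketch below merely shows the sort of feature involved, computing a frame-wise RMS energy envelope from raw samples in pure Python:

```python
import math

def rms_envelope(samples, frame_size=4):
    """Frame-wise RMS energy of a signal: one value per frame.
    A sound model could use peaks in this envelope to decide where
    in time to place generated events and how loud to make them."""
    envelope = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        envelope.append(math.sqrt(sum(s * s for s in frame) / len(frame)))
    return envelope

# A toy "recording": silence, a loud burst, then silence again.
recording = [0.0] * 4 + [0.8, -0.8, 0.8, -0.8] + [0.0] * 4
env = rms_envelope(recording)
print(env)  # the envelope peaks at the middle frame, where the burst is
```

An envelope like this captures exactly the qualities the article mentions the user controls by voice: when sounds happen and how much energy they carry.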
Licensed Data and Ethical AI Practices
Justin Salamon, head of Sound Design AI at Adobe, explained that Project Super Sonic started with the text-to-audio model. He emphasized that, like all of Adobe’s generative AI projects, the team only used licensed data to train the model, ensuring ethical AI practices in its development. The tool analyzes the different characteristics of a user's voice or other sounds to guide the AI's generation process, allowing for precise customization.
Future of Project Super Sonic
While Project Super Sonic was introduced as part of Adobe's "sneaks" (experimental showcases of the company's latest innovations), there is no guarantee that it will ship in Adobe's Creative Cloud applications. However, given that the same team worked on the audio portion of Firefly's Generative Extend feature, which is now part of Adobe Premiere Pro, there's a good chance that Super Sonic could make its way into future software releases.
What This Means for Video Creators
If Project Super Sonic is eventually integrated into Adobe’s product lineup, it could be a game changer for video creators and sound designers. By streamlining the audio creation process and offering a variety of AI-powered tools, Adobe would empower users to produce high-quality sound effects with minimal effort. The ability to generate custom sounds from text, objects, and vocal input would provide unprecedented flexibility and creativity in sound design. This level of control would particularly benefit independent creators and small teams, who often face resource limitations in producing professional-grade audio for their videos. Whether or not Project Super Sonic moves from demo to production, its potential impact underscores Adobe’s commitment to revolutionizing content creation through AI.