Adobe just applied generative AI to music creation, and it could change the game forever

    Music creation has long been done by a relatively small group of people, but with Adobe Research’s Project Music GenAI, you can create impressive music tracks through text prompts.

    In the past 18 months, we’ve seen generative AI applied to conversational chat, image generation, and most recently video, with OpenAI’s Sora showing seriously impressive results. Now it’s time for the audio industry to be disrupted.

    Today, this is an early-stage generative AI music generation and editing tool, but it’s easy to see a path to a future where new artists emerge and songs are created from text prompts alone. The question is, how long until an AI-generated song reaches number one on the charts?

    I know personally that selecting music for a video has always been problematic: tracks often get incorrectly flagged with copyright claims, something that could be avoided if the music is completely new, every time.

    “With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length.”

    Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies. 

    The new tools begin with a text prompt fed into a generative AI model, a method Adobe already uses in Firefly. A user inputs a text prompt, like “powerful rock,” “happy dance,” or “sad jazz,” to generate music. Once the tools generate music, fine-grained editing is integrated directly into the workflow. 

    With a simple user interface, users could transform their generated audio based on a reference melody; adjust the tempo, structure, and repeating patterns of a piece of music; choose when to increase and decrease the audio’s intensity; extend the length of a clip; re-mix a section; or generate a seamlessly repeatable loop. 

    Instead of manually cutting existing music to make intros, outros, and background audio, Project Music GenAI Control could help users to create exactly the pieces they need—solving workflow pain points end-to-end. 

    Project Music GenAI Control is being developed in collaboration with colleagues at the University of California, San Diego, including Zachary Novack, Julian McAuley, and Taylor Berg-Kirkpatrick, and colleagues at the School of Computer Science, Carnegie Mellon University, including Shih-Lun Wu, Chris Donahue, and Shinji Watanabe. 

    “One of the exciting things about these new tools is that they aren’t just about generating audio—they’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music.”

    Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies. 

    Adobe has a decade-long legacy of AI innovation, and Firefly, Adobe’s family of generative AI models, has in record time become the world’s most popular AI image generation model designed for safe commercial use.

    Firefly has been used to generate over 6 billion images to date. Adobe says it is committed to ensuring its technology is developed in line with its AI ethics principles of accountability, responsibility, and transparency.

    All content generated with Firefly automatically includes Content Credentials – “nutrition labels” for digital content that remain associated with the content wherever it is used, published, or stored. 

    Jason Cartwright
    Creator of techAU, Jason has spent a dozen-plus years covering technology in Australia and around the world. Bringing a background in multimedia and a passion for technology to the job, Cartwright delivers detailed product reviews, event coverage, and industry news on a daily basis. Disclaimer: Tesla shareholder since 20/01/2021
