News

More to see, and hear, with Firefly

Improved Adobe Firefly Video Model enhances video with sound effects, avatars, more.

Karen Moltenbrey

Adobe’s Firefly Video Model hasn’t been in general release long, but it continues on an upward trajectory with new features that improve motion fidelity and user workflows. The updated model also offers increased levels of creativity through visual Style Presets, added partner models, video sound effects, and avatar-led videos. 

It doesn’t appear that Adobe has plans to clip the wings of its Firefly Video Model, which entered beta earlier this year and general release in April. Rather, its generative AI video model is flying ever higher, with Adobe expanding the model, bolstering its features and capabilities by adding advanced creative controls, improved model performance, and new generative features. 

Style Presets

New Style Presets provide a fast, easy way to stylize video. (Source: Adobe)

This update brings better-tuned motion fidelity to the Firefly Video Model, so video generations move more naturally, with smoother transitions and more lifelike animation and environmental effects. Also added are new tools in the Firefly web app that accelerate workflows, along with new beta tools for generating sound effects and avatar-led videos. As a result of these enhancements, creatives now have more control over the style, structure, and output of their generations.

The following capabilities, now available in Firefly Video, are designed to give users more control, more speed, and more creative freedom. They include:
Composition reference for video, whereby a user uploads a reference video and Firefly matches its structure and flow in the new video generation.
Style Presets to instantly apply a visual style such as claymation, anime, 2D, or line art.
Keyframe cropping to generate video between a first and last frame, matching the format with minimal editing.

A pair of new features (both in beta) have been added to Firefly that boost a user’s storytelling endeavors: Generate Sound Effects and Text to avatar.

Generate Sound Effects makes it easy to create custom sounds from a text prompt or voice input, layering audio with precise timing to add emotion, energy, and cinematic flair to visuals. Users can then export the completed video to Adobe Express or Premiere Pro. Adobe notes that, like its other Firefly generative AI models, this feature is commercially safe to use.

Text to avatar turns scripts into avatar-led videos with customizable accents, backdrops, and visuals. Users select from a library of avatars and customize the background with a color, image, or video; they can then add a custom script and choose an accent for the avatar to use when reciting it. This feature can be used to create videos for a wide range of purposes, from FAQs with a virtual presenter to training materials.

Also new is the Enhance Prompt feature, which refines user inputs and adds language that helps Firefly better understand a user’s intent, for speedier results.

More partnerships on tap

In April, Adobe began integrating non-Adobe generative AI partner models into the Firefly ecosystem, giving users a choice of third-party models in their work and enabling them to create in different aesthetic styles. Initially, these included Google Imagen 3 and Veo 2, OpenAI GPT image generation, and Black Forest Labs Flux 1.1 Pro. Now, the ecosystem has been expanded to include Runway Gen-4 Video and Veo 3 with Audio. In the near future, Luma AI Ray 2 and Pika 2.2, currently available in Firefly Boards, will be added to Generate Video, while Topaz Labs’ Image and Video Upscalers and Moonvalley’s Marey will become available in Firefly Boards.
