News

Adobe has more plans for 3D, lots more

Why limit itself and its users to 2D?

Karen Moltenbrey

Adobe has mastered 2D content creation over the past few decades, and the company is hoping to do likewise in the 3D space, sort of. Adobe is focused on democratizing 3D content creation and is integrating some 3D capabilities and features within its popular 2D tools, including a beta version of After Effects. It is also working on a platform called Project Sunrise for e-commerce that enables marketers and other non-experts in 3D to generate various media assets from a single 3D model for use on a company’s website and in electronic marketing collateral. In related news, Adobe is continuing its work with generative AI and said it will introduce three new GenAI models in the months ahead: Firefly 3D Model, Firefly Video Model, and Firefly Audio Model.

What do we think? While Adobe is stepping into 3D, it is doing so while staying in its lane. It is not trying to compete with 3D DCC applications like Maxon’s Cinema 4D and Autodesk’s Maya and 3ds Max, which are aimed at masterful 3D users. Rather, it is bringing the many advantages of 3D to a different set of creators who are not especially proficient in the complex world of 3D. Smart move. As for its ongoing development of GenAI, the company is moving fast.

One of the big reasons Adobe has been able to innovate so quickly in developing its original Firefly model is the vast content libraries it has built over the years. Adobe Stock provided a significant advantage during this process; however, its video library is not nearly as extensive, nor is its 3D model collection. Rest assured, the Stock team is hard at work extending those collections for training these new Firefly models. In addition to content, the right algorithms are needed, as the processes differ somewhat for each of these new areas (3D, video, audio). Each also has its own set of hurdles. In terms of audio, for instance, there are challenges with scoring video. With 3D, there’s a lot of interesting overlap between the generative space and the neural CG space, which is a whole new AI-based representation of 3D content. And that’s just for starters.

No doubt, Adobe is well on its way to solving these problems, or else the company would not have revealed its intention to deliver the models at this time. Adobe tends not to discuss upcoming products or technology too far in advance, so it is likely these three new models (3D, Video, and Audio) will be placed into beta sooner rather than later.

Adobe is opening up the world of 3D

More than four decades ago, Adobe came into existence and immediately made an impact in the electronic and desktop publishing world, first with PostScript and then with PageMaker and After Effects through an acquisition of Aldus Corp. Then, little by little, application by application, Adobe revolutionized 2D imaging and design, and later, video and audio. Today, Adobe software is used by millions, and its app logos are recognized around the world.

Over the years, Adobe has acquired more than two dozen companies, and a large chunk of that technology has been integrated into the company’s expanding products and feature sets. More recently, this included the acquisitions of Workfront, Frame.io, and Figma (the latter of which is still in progress). In early 2019, Adobe acquired Allegorithmic, maker of Substance for the creation of 3D textures and materials used in games and video postproduction. Combining Allegorithmic’s Substance 3D design tools with Adobe Creative Cloud’s imaging, video, and motion graphics tools has resulted in a powerful tool set for artists working in film, television, design, and more.

3D can make an image pop. (Source: Adobe)

The area of 3D is not new territory for Adobe. It began focusing on 3D more than two decades ago, with the introduction of 3D in PDFs and the ability to create 3D images and convert 2D images into 3D with Photoshop 7.0 back in 2002. It’s important to note, though, that these are different generations of technology compared with the 3D capabilities integrated into Adobe’s flagship Creative Cloud applications today. So, while Adobe kick-started its 3D journey long before Allegorithmic’s Substance team came aboard, there’s little doubt about the impact that group is now having on the company’s continuing 3D efforts.

Illustrator has some 3D capabilities that let users extrude text and, to some extent, design basic models. Artists can also apply Substance materials inside Illustrator, a feature released about a year ago. But as Francois Cottin, Adobe’s senior director of marketing for 3D, points out, that usage is mostly for illustration purposes. “We’re not talking about 3D specialists and workflows that are part of industrial design, gaming, VFX, etc. This is solely for illustration,” he says.

Democratizing 3D

Adobe recently added 3D capabilities to the beta version of After Effects, which will be in general release soon, likely before year-end. This brings 3D to the After Effects core, where users will be able to composite 3D objects into their video sequences, says Cottin, while still retaining all the capabilities they have had natively. A workflow with the Substance tools is also supported, whereby users can model and texture assets in Substance and then bring them into After Effects. In fact, Cottin says there has been high demand among video users for the ability to composite 3D objects into their videos.

The Substance 3D Collection lets users create state-of-the-art 3D designs. (Source: Adobe)

Cottin emphasizes that, over time, 3D is entering the workflows of most content creators, though the reasons vary by discipline. Graphic designers start using 3D mostly for illustration purposes; in video, it usually begins with 3D compositing. There is a similar situation in e-commerce, where 3D renderings are being used more and more. It’s for these groups that Adobe is connecting its Substance tools to the company’s many other offerings and bringing 3D capabilities to all types of workflows.

Creatives want those capabilities for two main reasons. First, 3D opens new levels of control over images and videos, such as moving a light or an object, which, until recently, required a robust desktop machine with an expensive GPU. Second, a long-standing lack of standardization left each 3D application with its own file format, but that, too, is changing with OpenUSD and OpenPBR. As a result, Cottin believes more and more creatives will want to manipulate 3D files without having to spend two or more years mastering the software or the entire process chain.
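
The appeal of that standardization is easy to see in practice: any OpenUSD-aware tool can read a scene the same way, no matter which application produced it. Below is a minimal sketch using the open-source usd-core Python bindings (the file name is a placeholder, not something from Adobe’s tools), which simply lists the meshes found in a scene:

```python
# A minimal sketch, assuming the open-source usd-core package
# (pip install usd-core); "product.usdz" is a placeholder file name.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("product.usdz")   # open a USD/USDZ scene from any DCC tool
for prim in stage.Traverse():            # walk every object in the scene graph
    if prim.IsA(UsdGeom.Mesh):           # geometry is exposed through shared schemas
        print(prim.GetPath(), "is a mesh")
```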

In the past, 3D experts worked completely in a silo, whereas today, there is growing collaboration between experts and generalists, particularly within large organizations. The degree to which this is happening varies, though: some companies are fairly advanced in this transformation, while others are just beginning. “Adobe’s strategy is to bring 3D to everyone in different forms, depending on what tasks people need to do,” says Cottin.

Project Sunrise

Adobe is helping some of these non-expert 3D users, particularly those working in e-commerce, by introducing Adobe Labs’ Project Sunrise for generating product images from 3D models. More specifically, Sunrise is a platform that manages 3D data, files, and projects for e-commerce companies looking to embrace the medium at a larger scale. As Cottin explains, the platform takes a 3D digital twin down to the production level, yielding all kinds of media assets that can be used on a company’s website and in electronic marketing materials. In essence, it’s an optimization process that also automatically generates model variations.

With Sunrise, users can easily create hundreds, even thousands, of assets from a single model. Cottin describes Sunrise as automation tooling at scale. For instance, rather than photographing a product in a variety of colors (an expensive endeavor), highly realistic renderings can be produced instead; objects can also be placed in a variety of environments to target specific customers. Additionally, an AR preview can be produced for customers viewing the image on their smartphones, or a 3D view or short video can be generated.
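
Sunrise itself exposes no public API, but the underlying idea of scripted, at-scale variant generation can be sketched with Blender’s openly documented bpy module. Everything here is hypothetical: the material name and color set are invented for illustration, and this merely stands in for whatever Adobe’s platform actually does under the hood:

```python
# Hypothetical sketch of at-scale variant rendering, using Blender's bpy
# module (run inside Blender). The material name "BodyMaterial" and the
# color set are invented; Sunrise's real pipeline is not public.
import bpy

COLOR_VARIANTS = {
    "midnight_black": (0.02, 0.02, 0.02, 1.0),  # RGBA base colors
    "arctic_white": (0.90, 0.90, 0.90, 1.0),
    "crimson_red": (0.60, 0.05, 0.05, 1.0),
}

material = bpy.data.materials["BodyMaterial"]       # the product's paint material
bsdf = material.node_tree.nodes["Principled BSDF"]  # Blender's standard PBR shader

for name, rgba in COLOR_VARIANTS.items():
    bsdf.inputs["Base Color"].default_value = rgba  # recolor the product
    bpy.context.scene.render.filepath = f"//renders/product_{name}.png"
    bpy.ops.render.render(write_still=True)         # write one variant to disk
```

The same loop shape extends naturally to swapping environment lighting or camera angles, which is how one model fans out into hundreds of assets.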

With Project Sunrise, artists no longer have to replicate an e-commerce product using traditional methods in order to show it in different color options, making the process much easier, especially when it involves hundreds or thousands of options. (Source: Art by Ronan Mahon, courtesy of Adobe)

“This is the only way to be competitive today in e-commerce—by providing enough of these media assets that companies need. The more visual assets you provide, the more product you sell,” Cottin says, noting that no one buys something by looking at just one picture, whether it’s headphones or an MRI machine.

Sunrise, which is closely connected to Adobe’s GenStudio enterprise offerings, targets very large companies—specifically their merchandisers and marketing folks as opposed to creatives who are adept at manipulating media assets. The platform fills that middle space between the artists who create the 3D digital twins and the e-commerce teams that can generate a plethora of media assets from those digital twins.

Although Sunrise connects to the Substance tools, it is agnostic and will work with 3D objects originating from any 3D workflow. It works like this: Users create a 3D model in whichever DCC software they like and import it into Sunrise, where a marketing person works on a design (adding pictures, videos, AR views, etc.). Because Sunrise is cloud-based, users do not need extensive computational power on their local computers, making collaboration possible across disparate locations.

While some of Sunrise’s underlying technology comes from Substance, Sunrise is its own unique offering, Cottin emphasizes. The platform must be robust and well adapted to work with these large companies’ pipelines, he says, and that takes time to design and optimize.

Sunrise is currently under development and is being tried by pilot customers. Although not a commercial product yet, Sunrise is something Adobe intends to productize over the next year, Cottin adds.

The GenAI link

And that’s not all the company is working on in terms of 3D. Adobe also recently bolstered Photoshop with generative AI capabilities in the form of Generative Fill and Generative Expand, which became generally available after five months in beta. Then, in September, Adobe announced GenAI beta releases for its Premiere Pro and After Effects video products. In a “wait, wait, there’s more” moment, the company introduced additional AI-powered features across its creative applications, with updates to Illustrator, Lightroom, Stock, and more. It also announced three new Firefly models: Firefly Image 2 Model, Firefly Vector Model, and Firefly Design Model. In fact, at Adobe Max 2023, the company released more than 100 new AI features and updates across its flagship Creative Cloud applications.

At the Max conference, David Wadhwani, president of Digital Media, said Adobe is just getting started in the area of generative AI and has a lot planned for the months ahead. This includes the introduction of three new GenAI models: Firefly 3D, Firefly Video, and Firefly Audio. While Wadhwani did not pull back the curtain on these three models, he did raise it ever so slightly, providing a glimpse of some features and capabilities that are to come. 

“I like to say we’re at mile 1 of the 26-mile sprint here of what is possible with the technology and what it can do, and then fan out to new content types,” said Ely Greenfield, CTO of Digital Media, during one of the Max keynotes.

Adobe’s David Wadhwani, president of Digital Media, presented the Firefly AI models that have recently been unveiled or are in the works, including the Audio, Video, and 3D models.

Adobe, like others in the field, is working on generating full 3D models from a 2D image. That work takes time due to the complexity involved, Cottin notes, and is still at the research level; all the big players are working on it, as evidenced by the many technical papers on the subject presented at SIGGRAPH 2023. Nevertheless, he sees Adobe’s approach with Firefly 3D as similar to what the company is doing with its platform for image generation.

Still, generative AI requires machine learning, which means a very large amount of good data is required to train a model: the right data in the right quantity and quality, along with efficient learning processes. While training the original Firefly model, Adobe used high-quality still images from its massive Stock database; video and 3D Stock content are not as plentiful, and the learning processes are more complex because the model must learn not only pixels but also how light reacts with materials on surfaces, which means far more data to process. This is a general hindrance for everyone doing machine learning in 3D.

Yes, the process takes more time for 3D, but Adobe continues to make progress toward this innovation. It’s a safe bet that new 3D features will be released incrementally and in phases, as has been the case with other Firefly-related features and functions.

“Generative AI is moving very fast these days—all the models are improving. So, it’s not as if [one is done] and now it’s video and next will be 3D, or something like that,” Cottin explains. “The good news is the learning processes are relatively similar. So, the fact that we have been learning so well and so much already on pictures, vectors, and templates is really helpful to optimizing learning for 3D.”

Greenfield said the company has invested heavily across the board in this area but could not provide specifics about where Adobe is in the process or what kind of timeline it is looking at for releasing the new models. However, Adobe tends not to discuss things too far out, so it is likely that these three new models will be put into beta sooner rather than later, meaning over the course of the next few months.