The Bleeding Edge: Adobe held its annual MAX conference this week, closing the event as usual with its Sneaks presentation. Sneaks is a showcase of experimental features that Adobe developers have been working on for Creative Cloud all year. This year’s Sneaks theme was the Multiverse, alluding to the ongoing excitement around metaverse technologies.
Most of this year’s Sneaks projects have a metaverse application, whether it’s creating 3D environments from a 2D image or assembling images for a metaverse presentation. The features are in different stages of development, so they don’t always work as well as expected, and most may never see the light of day again.
However, three of the projects stood out for us this year: Project Clever Composites, Project Vector Edge, and Project All of Me. Each uses Adobe Sensei AI to make life much easier for content creators, and each looks polished enough to stand a good chance of being integrated into Creative Cloud one day.
Project Clever Composites
Clever Composites makes it easy to combine multiple images. If you’ve ever tried to put an object or person against the background of another photo, you know it’s a challenge. First you have to isolate the subject and cut it out of one image, then place it on the background, and that’s the easy part. It can sometimes take hours to adjust lighting, shadows, color saturation, contrast, and many other settings to match the background image.
Clever Composites tackles this in two ways. First, using Creative Cloud’s search to find suitable images for a composite can be cumbersome and time-consuming. With Clever Composites, selecting an area of the background image acts as a filter, returning only images that make sense for that visual context.
Once you’ve found a suitable subject for the composite, matching it to the background is as simple as dragging and dropping. The AI automatically adjusts exposure, scale, and other properties, and even creates a drop shadow.
Project Vector Edge
Another feature suited to making composites is Vector Edge. Placing logos and other visuals on three-dimensional surfaces in a photo can be complex. The perspective tool is your friend for tasks like placing an image on a TV screen, but things get tricky when you deal with edges. For example, wrapping a logo around the edge of a box requires some dexterous Photoshop magic. Vector Edge does all this for you.
The Vector Edge tool uses AI to eliminate the need to manually crop and adjust foreground images to multiple perspectives. To demonstrate, Adobe started with a background image containing multiple objects. Sensei analyzed the photo and determined the dimensionality of each object. Then, when a foreground image such as a logo was placed on a flat surface, it automatically conformed to the perspective.
In addition, it detects when the foreground image overlaps the edge of a 3D background surface and matches the perspective across all the visible planes. It can also realistically wrap content around rounded objects. It does all this in real time, so you can drag the foreground image to where it needs to go and watch it transform depending on where it lands.
Project All of Me
Finally, there was All of Me. You could call it an un-cropping tool, and you wouldn’t be wrong. While it’s easy to crop an image, it’s impossible to get the lost parts back without the original, unless Sensei is working for you.
Adobe demonstrated the technology with a photo of a woman in front of a house with a large yard. By examining the existing elements, the AI gets a sense of what the scene should look like on a larger canvas, so when you expand the outer edges of the image, All of Me fills in the missing areas with visually plausible material. Adobe’s example recreated a portion of the house, the yard, and even the woman’s legs. Interestingly enough, the generated shoes were visually appropriate for the context of her dress.
As mentioned, these projects are experimental, and many need more work. They are not available to the public at this time and may or may not ever make it out of Adobe’s labs. That said, these three seem like the best candidates to appear in future Creative Cloud releases.
If you’re interested in the other experimental features Adobe presented, we’ve included the full overview in the masthead for your convenience.