The modern smartphone uses its multiple lenses to quickly create a depth map of a scene and distinguish subject from background. It then uses that information to apply smart filters that create the artificial appearance of shallow depth of field by blurring the background. And while smartphones use multiple lenses to approximate the same sort of binocular vision as our eyes, the engineers at Adobe have built algorithms that can infer those qualities in any camera image, even files from old-fashioned cameras with just a single lens.
In practice, it means that Adobe has figured out how to help us photographers make artificial depth of field adjustments post capture to help put the center of attention on the subject. This means a photographer who shot an image at ƒ/8 can convert that file to have the look of shallow depth of field that comes with an image captured at ƒ/2.
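That ƒ/8-versus-ƒ/2 difference can be quantified with the standard depth-of-field formulas. Here's a rough sketch; the lens, subject distance and circle-of-confusion values are illustrative assumptions of mine, not anything from Adobe:

```python
# Toy depth-of-field calculator (illustrative only, not Adobe's algorithm),
# using the standard hyperfocal-distance formulas.
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return (near, far) limits of acceptable sharpness, in mm."""
    # Hyperfocal distance for this lens/aperture/circle-of-confusion combo.
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = h * subject_mm / (h + (subject_mm - focal_mm))
    far = (h * subject_mm / (h - (subject_mm - focal_mm))
           if subject_mm < h else float("inf"))
    return near, far

# Hypothetical example: an 85mm lens focused at 2 m, full-frame CoC of 0.03 mm.
near8, far8 = depth_of_field(85, 8, 2000)  # f/8: roughly 1.88-2.14 m stays sharp
near2, far2 = depth_of_field(85, 2, 2000)  # f/2: only about 1.97-2.03 m
```

Opening up from ƒ/8 to ƒ/2 shrinks the zone of sharpness from about a quarter of a meter to a few centimeters, which is exactly the look this filter tries to simulate after the fact.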
This capability is part of Photoshop’s major update for 2021 that added neural filters to the image editing application. These neural filters use machine learning, trained by evaluating millions of images, to distinguish, among other things, foreground elements from backgrounds. (You can also use neural filters for other magical editing applications, including skin smoothing, expression changes and even repositioning image elements in ways heretofore impossible without human intervention.)
Here’s how to use Photoshop’s neural filters to quickly and easily add background blur to images that would benefit from a shallower depth of field for less visual distraction from a busy background.
First, choose an image with a well-defined point of interest—a portrait for instance—and open it in Photoshop. (If there isn’t much detail in the background, the effect of the depth blur will be less pronounced. Studio portraits or other images without background detail probably aren’t ideal.) Next, open Neural Filters from Photoshop’s Filter menu and enable Depth Blur. Then simply wait a few moments while the neural filter works its default magic and check the preview once the process has finished. After examining the default results, you can fine-tune the filter with the sliders and settings found below the preview window.
First is the instruction to “Click to Edit Focal Point,” which is found immediately below the preview window. This allows you to more effectively fine-tune what’s included in the blur. It’s a one-click way to tell Photoshop where the important part of the photo is and what should be sharp. If by default the blur includes a portion of the image you wanted sharp, click on that area in the preview to establish a new focal point and Photoshop’s AI will reevaluate its depth map.
Next are the sliders. Focal Distance is an alternative to clicking to edit the specific focal point. The focal distance slider effectively moves the focal point along the lens axis to establish the point of focus closer to or farther from the camera. A more accurate approach, in my opinion, is to use the “Click to Edit Focal Point” option.
The Focal Range slider could be thought of as the depth of field slider. The lower the number (to the left), the shallower the depth of field will be. The higher the number, the deeper the plane of focus—so more areas of the image will remain sharp.
Blur Strength is exactly what it sounds like: how much additional blur is applied to the background and foreground elements within the frame. The default setting is 75, which tends to be pretty subtle, so dialing it up toward 100 isn’t uncommon.
Haze works similarly to Lightroom’s Dehaze slider but in reverse. Dialing up the haze lowers contrast and increases brightness in the out-of-focus background areas to help them recede from prominence.
The Temp slider warms the color temperature when moved to the right and cools it when moved to the left. It, along with the Saturation and Brightness sliders, applies to the entire image—not just the area the neural filter deems to be the background. I tend to think there are a dozen other methods elsewhere in the Photoshop universe for making these adjustments more precisely, so I don’t expect to make much use of them.
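To get an intuition for how the focus sliders interact, here’s a toy model of my own (an illustrative guess, not Adobe’s actual algorithm): each pixel’s blur is driven by how far its depth-map value falls outside the in-focus range around the focal distance.

```python
# Toy model of the Depth Blur sliders (my own sketch, not Adobe's code).
# Depth values run from 0.0 (nearest the camera) to 1.0 (farthest).
def blur_amount(depth, focal_distance, focal_range, blur_strength):
    """Return how much blur (0..blur_strength) a pixel at this depth gets."""
    distance = abs(depth - focal_distance)       # how far from the focal plane
    outside = max(0.0, distance - focal_range)   # zero while inside the range
    # Blur ramps up the farther the pixel sits outside the in-focus range.
    return min(blur_strength, blur_strength * outside / (1.0 - focal_range))

# Subject at depth 0.30, Focal Range 0.10, Blur Strength at the default 75:
subject_blur = blur_amount(0.30, 0.30, 0.10, 75)     # 0.0 -- subject stays sharp
background_blur = blur_amount(0.95, 0.30, 0.10, 75)  # distant wall gets blurred
```

Widening the focal range pulls more depths into the sharp zone, and raising the strength scales the whole effect up, which matches how the sliders behave in the preview.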
Next is a checkbox to “Output Depth Map Only,” which allows saving the details of what Photoshop believes to be the subject, foreground and background, all without applying any blur. In this way, the depth map can be taken out of the Neural Filter and used as a selection method in order to create masks or apply other edits to color, contrast, sharpness and the like.
Aside from Photoshop’s survey asking if you’re satisfied with the results, the final option at the bottom of the window is the Output dropdown menu. Rather than Current Layer or Smart Filter, I like outputting the results to a new layer to maintain the integrity of the original image without altering its pixels. Other options include saving the image to a New Document, duplicating the layer, or creating a duplicate layer with a mask applied. That last option is useful for those familiar with masks who might like more control to fine-tune the results after leaving the Neural Filter window.
One issue you might run into with Depth Blur is that it blurs the background and eliminates any noise in those areas. This can make the blurred area stand out against whatever noise remains in the unfiltered areas, so I suggest adding back some digital grain via the Add Noise filter under the Noise heading of Photoshop’s Filter menu. This will help blend the newly blurred background with the original, untouched portions of the scene and create a more natural appearance.
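The idea behind re-graining can be sketched in a few lines. This is a simplified stand-in for the Add Noise filter, not how Photoshop implements it:

```python
import random

# Simplified re-graining sketch (not Photoshop's Add Noise filter itself):
# add a little gaussian grain back to blurred pixels so they match the
# noise still present in the sharp, unfiltered areas.
def add_grain(pixels, amount=4.0, seed=42):
    """pixels: 0-255 luminance values; amount: noise standard deviation."""
    rng = random.Random(seed)  # fixed seed just to make the sketch repeatable
    noisy = []
    for p in pixels:
        n = p + rng.gauss(0.0, amount)
        noisy.append(max(0, min(255, round(n))))  # clamp back to 0-255
    return noisy

smooth_background = [120] * 8           # blurred area: unnaturally uniform
grained = add_grain(smooth_background)  # slight per-pixel variation restored
```

A small amount goes a long way; the goal is only to match the grain structure of the untouched areas, not to add visible texture of its own.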