Deep Fusion is an image processing system that automatically works behind the scenes under certain conditions. Apple says the feature produces "images with significantly better texture, detail and reduced noise in low light."
Unlike the iPhone's Night Mode and other camera options, there is no user-facing indication that Deep Fusion is being used; it is automatic and invisible by design.
However, there are certain situations where Deep Fusion won't be used: any time you use the ultra-wide lens, any time "Take photos outside the frame" is turned on, and when you take a burst of photos.
Also keep in mind that Deep Fusion only works on the iPhone 11, iPhone 12, and iPhone 13 families and the iPhone SE (3rd generation).
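For developers, the closest handle to this behavior is AVFoundation's photo quality prioritization: on supported devices, requesting the highest quality tells the system it may apply its heaviest processing to a capture. The sketch below is a minimal illustration of that API, not a Deep Fusion switch (there isn't one), and it assumes an AVCapturePhotoOutput that has already been added to a configured capture session.

```swift
import AVFoundation

// Hedged sketch, not Apple's code: Deep Fusion has no toggle in the Camera app,
// and third-party apps cannot enable it directly either. AVFoundation exposes a
// photo quality prioritization; requesting .quality allows the system to apply
// its most expensive processing on supported hardware.

func configure(_ output: AVCapturePhotoOutput) {
    // Raise the output's quality ceiling while configuring it,
    // before the capture session starts running.
    output.maxPhotoQualityPrioritization = .quality
}

func captureQualityPrioritizedPhoto(with output: AVCapturePhotoOutput,
                                    delegate: any AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    // Per-capture request; must not exceed the maximum set above.
    settings.photoQualityPrioritization = .quality
    output.capturePhoto(with: settings, delegate: delegate)
}
```

In practice the delegate receives the finished photo in photoOutput(_:didFinishProcessingPhoto:error:); whether Deep Fusion-style processing was applied is decided by the system, not by the app.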
So what is Deep Fusion actually doing? In total it works with nine images. Before you even press the shutter button, the camera has already captured four short frames and four secondary frames. When you press the shutter, it takes one long exposure. Then, in about a second, the Neural Engine analyzes the fused combination of the long and short exposures, picks the best among them, and goes through the image pixel by pixel, all 24 million pixels, optimizing each one for detail and low noise, like the fine texture you'd see in a sweater. Notably, this is the first time the Neural Engine has been responsible for generating the output image. This is the mad science of computational photography.
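To make the pixel-by-pixel idea concrete, here is a deliberately simplified toy in Swift. It is not Apple's algorithm, and every name in it is invented for illustration; it only mirrors the concept described above: start from a low-noise long exposure and, at each pixel, blend toward whichever short frame shows the most local detail.

```swift
import Foundation

// Illustrative sketch only: a toy per-pixel fusion of several "frames"
// (2-D Float arrays, values assumed normalized to 0...1), NOT Apple's
// Deep Fusion implementation.

typealias Frame = [[Float]]

// Local contrast in a 3x3 neighborhood, used as a crude stand-in for "detail".
func localContrast(_ frame: Frame, _ y: Int, _ x: Int) -> Float {
    let h = frame.count, w = frame[0].count
    var minV = Float.greatestFiniteMagnitude
    var maxV = -Float.greatestFiniteMagnitude
    for dy in -1...1 {
        for dx in -1...1 {
            let ny = min(max(y + dy, 0), h - 1)
            let nx = min(max(x + dx, 0), w - 1)
            minV = min(minV, frame[ny][nx])
            maxV = max(maxV, frame[ny][nx])
        }
    }
    return maxV - minV
}

// Fuse short frames with one long exposure: per pixel, start from the long
// exposure (lowest noise) and blend toward the sharpest short frame.
func fuse(shortFrames: [Frame], longExposure: Frame) -> Frame {
    precondition(!shortFrames.isEmpty, "need at least one short frame")
    let h = longExposure.count, w = longExposure[0].count
    var out = longExposure
    for y in 0..<h {
        for x in 0..<w {
            // Pick the short frame with the strongest local detail here.
            let best = shortFrames.max {
                localContrast($0, y, x) < localContrast($1, y, x)
            }!
            let detail = min(localContrast(best, y, x), 1.0) // 0...1 blend weight
            out[y][x] = (1 - detail) * longExposure[y][x] + detail * best[y][x]
        }
    }
    return out
}
```

The real pipeline is far more sophisticated (machine-learned, frequency-band aware, and tuned per subject), but the trade it makes at every pixel, detail from short exposures versus low noise from the long one, is the same idea this sketch spells out.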