Here's everything Apple said about Deep Fusion during the iPhone event
What you need to know
- With Apple set to release Deep Fusion via an iOS 13 developer beta update, let's take a look at what it said about the feature during the iPhone event.
- Phil Schiller revealed it uses the neural engine to create a new image processing system.
- He called it "computational photography mad science."
When Apple unveiled the iPhone 11 and its fancy new cameras, it also gave a sneak peek at a brand-new camera feature it would soon offer: Deep Fusion. Now that the feature is rolling out with the latest iOS 13 developer beta, we decided to look back and see exactly what Apple said about it during the September 10 event.
Apple senior vice president of Worldwide Marketing Phil Schiller started by explaining that it uses "the neural engine of the A13 Bionic to create a whole brand new kind of image processing system."
Then he presented an image of a man sitting on a couch wearing a finely knit sweater with intricate detail in the weave. He said this type of image would not have been possible before.
The image was captured in low to medium light, with machine learning doing the heavy lifting. Here's how it works, according to Schiller: Deep Fusion shoots nine images in total. Four short frames and four secondary frames are captured before you even press the shutter, and one long exposure is taken when you do. In about a second, the Neural Engine analyzes that combination, going pixel by pixel through 24 million pixels, to optimize the final photo for detail and low noise.
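To make the general idea a bit more concrete, here's a toy Swift sketch of per-pixel multi-frame fusion. This is not Apple's Deep Fusion pipeline (which runs on the A13's Neural Engine and uses far more sophisticated machine learning); it simply illustrates the notion of going pixel by pixel across several frames and keeping whichever one shows the most local detail.

```swift
// Toy illustration of multi-frame fusion: for each pixel, keep the value from
// whichever frame has the most local detail (highest neighborhood variance).
// This is a conceptual sketch only, not Apple's actual Deep Fusion algorithm.

func localVariance(_ frame: [[Double]], x: Int, y: Int) -> Double {
    let h = frame.count, w = frame[0].count
    var values: [Double] = []
    for dy in -1...1 {
        for dx in -1...1 {
            let nx = x + dx, ny = y + dy
            if nx >= 0 && nx < w && ny >= 0 && ny < h {
                values.append(frame[ny][nx])
            }
        }
    }
    let mean = values.reduce(0, +) / Double(values.count)
    return values.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Double(values.count)
}

func fuse(_ frames: [[[Double]]]) -> [[Double]] {
    guard let first = frames.first else { return [] }
    let h = first.count, w = first[0].count
    var fused = first
    for y in 0..<h {
        for x in 0..<w {
            // Pick the pixel from the frame with the most detail at this spot.
            var bestValue = first[y][x]
            var bestScore = localVariance(first, x: x, y: y)
            for frame in frames.dropFirst() {
                let score = localVariance(frame, x: x, y: y)
                if score > bestScore {
                    bestScore = score
                    bestValue = frame[y][x]
                }
            }
            fused[y][x] = bestValue
        }
    }
    return fused
}

// Example: three tiny 2x2 "frames" with different amounts of detail.
let frames: [[[Double]]] = [
    [[0.2, 0.2], [0.2, 0.2]],   // flat, low detail
    [[0.1, 0.9], [0.9, 0.1]],   // high-contrast detail
    [[0.4, 0.5], [0.5, 0.4]],   // mild detail
]
print(fuse(frames))  // picks pixels from the high-detail frame
```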
That was it. Apple didn't go into much more detail, since the feature wouldn't be available until later. Apple doesn't even mention Deep Fusion on its website right now, but we imagine that will change shortly after the feature becomes available.
Deep Fusion sounds impressive. The images Apple has released so far give us a good look at the raw potential of a feature that takes full advantage of the new cameras in the iPhone 11, and it could help further separate the iPhone 11's camera from the competition.