The camera on the Pixel 2 is really good. In fact, it is one of the best on the market right now. But one of the main reasons for that is AI.
DeepLab-v3+ is the name of the model that Google has just open-sourced, as confirmed in this blog post. It is an image segmentation tool built using convolutional neural networks, or CNNs. It analyzes the objects within a picture and splits them into foreground and background elements. Google uses this technology for the “portrait mode” on its cameras. While most other manufacturers rely on a dual-camera setup to achieve the blurred-background effect, Google does it with the power of AI, and in most cases it works really, really well.
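To illustrate the basic idea, here is a minimal sketch of a portrait-mode pipeline: segment the person from the background with a DeepLabV3 model, then blur only the background. This uses torchvision's DeepLabV3 implementation rather than Google's released TensorFlow code, and the file name "photo.jpg" is a placeholder; it is a simplified illustration, not Google's actual pipeline.

```python
# Sketch: segment a person with DeepLabV3, then blur the background.
# Assumptions: torchvision's pretrained model (not Google's release),
# and "photo.jpg" as a placeholder input image.
import torch
from torchvision import models, transforms
from PIL import Image, ImageFilter

# Load a pretrained DeepLabV3 segmentation model (Pascal VOC label set)
model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    output = model(batch)["out"][0]   # per-class scores, shape [21, H, W]
predictions = output.argmax(0)        # per-pixel class indices

# Class 15 is "person" in the Pascal VOC label set
person_mask = (predictions == 15).byte().cpu().numpy() * 255
mask = Image.fromarray(person_mask, mode="L")

# Composite: keep the person sharp, blur everything else
blurred = image.filter(ImageFilter.GaussianBlur(radius=8))
portrait = Image.composite(image, blurred, mask)
portrait.save("portrait.jpg")
```

The real system is far more refined, but the principle is the same: a per-pixel foreground/background decision drives the selective blur, with no second camera required.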
As Google software engineers Liang-Chieh Chen and Yukun Zhu explain, image segmentation has improved rapidly with the recent deep-learning boom, reaching “accuracy levels that were hard to imagine even five years [ago].” The company says it hopes that by publicly sharing the system “other groups in academia and industry [will be able] to reproduce and further improve” on Google’s work.