One of the standout features of Google’s 2017 flagship smartphones, the Pixel 2 and 2 XL, is the camera app’s ‘Portrait mode.’ Google has now open-sourced the technology that underpins this feature, making it available through its TensorFlow AI framework.
The search giant announced the release on Monday via its Research Blog. In the post, the Alphabet-owned company said it hopes other developers will improve on the technology and put it to work in new use cases.
As an Android Headlines report rightly noted, “One of the reasons Google’s Portrait mode has proved so popular is because it does not rely on a secondary camera to create the bokeh-like effect.” Portrait mode itself is hardly uncommon; smartphones from numerous leading technology giants offer similar features.
Those other implementations, however, typically rely on dual rear camera systems. Google’s version of ‘Portrait mode,’ by contrast, is implemented at the software level, yet it is capable of producing results that rival or better those hardware-focused alternatives. As part of the announcement, the search giant explained how the technology, released under the open-source name ‘DeepLab-v3+,’ achieves the desired results. “While machine learning is at the heart of the ‘magic’ Google explains its version of ‘semantic image segmentation’ is the real key – what is now open source,” notes Android Headlines.
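To make the software-level approach concrete, the sketch below composites a sharp subject over a blurred background using only a per-pixel segmentation mask, with no second camera or depth sensor. The `portrait_blur` function and its simple box blur are illustrative assumptions for this article, not Google’s actual camera pipeline, which uses far more sophisticated blurring:

```python
import numpy as np

def portrait_blur(image, person_mask, kernel=5):
    """Software-only bokeh sketch: blur the background, keep the subject sharp.

    image: (H, W) float array (grayscale for simplicity).
    person_mask: (H, W) bool array, True where a segmentation model
    labeled the pixel as the subject. A real pipeline would use a
    depth-aware, disc-shaped blur; a box blur stands in here.
    """
    H, W = image.shape
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    # Accumulate the kernel x kernel neighborhood for every pixel at once.
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + H, dx:dx + W]
    blurred /= kernel * kernel
    # Composite: original pixels where the mask says 'subject', blur elsewhere.
    return np.where(person_mask, image, blurred)
```

Because the separation happens per pixel rather than per lens, the same idea works on a single-camera phone, which is exactly the advantage the report highlights.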
Semantic segmentation, in general, refers to breaking an image down into parts and grouping those parts by meaning: every pixel is assigned a label such as ‘person,’ ‘sky,’ or ‘dog.’ Google’s semantic image segmentation applies this same meaningful partitioning, separating the subject of a photo from its background so the background alone can be stylized.
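The grouping idea can be shown with a toy example. A segmentation model outputs one class label per pixel; grouping pixels that share a label yields one region (mask) per meaning. The tiny 4×4 label map and the class IDs and names below are hypothetical, not DeepLab-v3+’s actual label set:

```python
import numpy as np

# Hypothetical model output: one class label per pixel.
# 0 = background, 1 = person, 2 = dog (illustrative IDs only).
labels = np.array([
    [0, 0, 2, 2],
    [0, 1, 1, 2],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
])

CLASS_NAMES = {0: "background", 1: "person", 2: "dog"}

# Group pixels by meaning: one boolean mask per class present in the map.
masks = {CLASS_NAMES[c]: labels == c for c in np.unique(labels)}

for name, mask in masks.items():
    print(f"{name}: {mask.sum()} pixels")
```

The `person` mask here is precisely the kind of region Portrait mode needs: everything inside it stays sharp, everything outside it gets the bokeh treatment.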