Google introduced its new artificial intelligence-powered technology, Lens, on Android just last week. The feature was designed to live within the Photos app on Android smartphones. The tool performs various functions, including identifying objects, buildings and landmarks, and providing users with more information about them. It can also recognize books, paintings in museums, plants and animals. The good news is that it has finally arrived on iOS.
Google Lens was initially announced at the I/O developer conference last year. It is made possible by recent advances in machine learning and image recognition. Google's primary objective is to enable smartphone cameras to understand what they are being pointed at.
From there, the search giant wants to provide users with more information about whatever object the camera identifies. Google demonstrated the Lens application at the I/O conference last year, showing off several other functions as well. It turns out the feature can even help users configure their Wi-Fi: by taking a photo of the sticker on their router, they can have the network name and password pasted directly into their Wi-Fi settings.
The company also demonstrated a translation feature, which converts signs written in a foreign language into English. “It doesn’t seem we’re quite there yet with all these promised features, but they could become possible in the future as Google Lens matures,” notes The Verge.
The company announced the news in a tweet from the official Google Photos account, stating that the feature within the Photos app is expected to roll out on Thursday, March 16. Users are required to have the latest version (3.15) of the app installed on their devices.