Google are bringing search and recognition to your camera with the launch of Google Lens. The new software, which ties into the company's AI and machine learning, lets users point their camera to not only identify different objects, but to have Google Assistant act upon the information it gleans from what it sees.
Examples given include automatically connecting to a WiFi network based on printed network settings, identifying a restaurant and displaying reviews and information, and pulling up venue information from advertisements for sports games and concerts.
“When we started working on search, we wanted to do it at scale,” CEO Sundar Pichai said at Google’s I/O developer conference today. “That’s why we designed our data centers from the ground up and put a lot of effort into them. Now that we’re evolving for this machine-learning and AI world, we’re building what we think of as AI-first data centers.”
This isn’t the first time that Google have launched this type of software: Google Goggles has been available for the past seven years, having launched in 2010. Google Lens is clearly a much more evolved version of the software and technology behind the older app. Samsung are doing something similar with their own assistant, Bixby, but it currently doesn’t support nearly as much as what Google are promising.