Google often uses its annual developer conference, I/O, to showcase artificial intelligence: the Google Home smart speaker was introduced alongside Google Assistant, and Duplex debuted in 2018 to answer calls and schedule appointments for businesses. Last month, CEO Sundar Pichai kept up the tradition and introduced LaMDA, an AI “designed to have a conversation on any topic.” In an onstage demo, he showed what it is like to converse with LaMDA as it played the part of the celestial body Pluto and then a paper airplane. LaMDA responded to each query with three or four sentences, much like a conversation between two people.
Over time, Pichai said, LaMDA could be incorporated into Google products, including Workspace, Assistant, and Search. “We believe LaMDA’s natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use,” he said.
The LaMDA demonstration offers a window into Google’s vision for search, one that goes beyond a list of links and could change how billions of people search the web. That vision centers on AI that can infer meaning from human language, engage in conversation, and answer multifaceted questions like an expert.
At I/O, Google also introduced another AI tool, dubbed Multitask Unified Model (MUM), which can consider searches that combine images and text. According to VP Prabhakar Raghavan, users might someday take a photo of a pair of shoes and ask the search engine whether they would be suitable for climbing Mount Fuji.
MUM generates results across 75 languages. An onstage demo showed how MUM would respond to the search query, “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently?” That query is phrased differently from how people search Google today, because MUM aims to reduce the number of searches needed to find an answer. It can summarize and generate text, allowing it to compare Mt. Fuji to Mt. Adams and to recognize that trip preparation may call for search results spanning weather forecasts, hiking gear recommendations, and fitness training.