Google has demonstrated an algorithm that can hold conversations while pretending to be any object. The company is also working on a search feature that combines text, audio, images and video.
The software is still in an early stage of development, Google CEO Sundar Pichai said during the keynote of the company's I/O developer conference. The intention is to eventually integrate it into Search, Assistant and Workspace.
The algorithm, LaMDA (Language Model for Dialogue Applications), learns a great deal about an object so that users can hold conversations with it. In the demonstration, it carried on conversations as the dwarf planet Pluto and as a folded paper airplane. The software is far from flawless and occasionally gives wrong answers or fails to keep the conversation going, Pichai said. The goal is to develop software that lets users converse with it in a more natural way.
Google is also working on a ‘multimodal model’ that can process text, images, audio and video simultaneously. It should, for example, be possible to search within video content with a query such as ‘show me the part where the lion roars at sunset’. Google did not say when the technology will be made available online.