Google wants to spark a debate about new protocols that give web publishers choice and control over how their web content is crawled by artificial intelligence systems.
The American internet company believes that web publishers should continue to have meaningful control over how their content is used on the web. According to Google, the current tools for this, such as robots.txt, were developed at a time when artificial intelligence crawlers did not yet exist, so new, complementary protocols should be considered. Today, website administrators can use a robots.txt text file to indicate which parts of a website may be crawled by a search engine and which may not.
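As a minimal sketch of how robots.txt works today: the file lists per-crawler rules keyed by `User-agent`. The crawler name `ExampleAIBot` below is a hypothetical placeholder, not a real crawler token; the `User-agent`, `Disallow`, and `Allow` directives are standard robots.txt syntax.

```
# robots.txt placed at the site root, e.g. https://example.com/robots.txt

# Hypothetical AI crawler: blocked from the entire site
User-agent: ExampleAIBot
Disallow: /

# All other crawlers: everything allowed except a private section
User-agent: *
Disallow: /private/
Allow: /
```

Google's point is that this mechanism only distinguishes crawlers by name and offers no way to express *how* fetched content may be used, which is the gap the proposed new protocols would address.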
Google wants to initiate a discussion that brings together players from the internet industry and the AI community. The company wants to hear a wide range of voices and is also inviting people from academia and other sectors to join the conversation in the coming months.