The British market regulator, the Competition and Markets Authority (CMA), has opened an initial review of foundation models, such as large language models and generative AI. Among other things, the review examines the opportunities and risks these models pose for consumers and competition.
“Foundation models have the potential to change much of what people and companies do,” the Competition and Markets Authority writes. “To ensure that innovation in artificial intelligence takes place in a way that benefits consumers, businesses and the UK economy, the government has asked regulators such as the CMA to explore how to support the innovative development and roll-out of AI.” Five principles guide the review: safety and robustness; transparency and explainability; fairness; accountability and governance; and adequate routes for complaint and redress.
Through the review, the CMA wants to better understand how foundation models are developing and what conditions are needed to guide their development and use. The regulator focuses primarily on the consequences AI development may have for competition and consumer protection; other British government bodies are examining areas such as security, privacy, human rights and copyright, the CMA reports.
Stakeholders who want to share evidence or views can submit them to the authority until June 2. The CMA plans to publish its report in September. Examples of tools that use large language models and generative AI include ChatGPT, Bing Chat, and Google Bard. ChatGPT was previously banned in Italy over privacy concerns, although the chatbot has been available in the country again since the end of last month.