The European Commission wants companies and researchers to retain records of how artificial intelligence systems are trained and, in some cases, the entire training data sets. This should make it possible to trace afterwards where an incorrect assessment came from.
The data-retention rule is proposed in a white paper on artificial intelligence that the European Commission published online on Wednesday. “These requirements essentially allow for retrospective verification of problematic actions or decisions by AI systems,” the white paper reads. “Not only does this allow for oversight and enforcement, but it may also prompt the maker of the AI system to take the rules surrounding AI into account.”
For example, companies and researchers must be able to provide a description of the data used and explain how it was selected. In some cases, they must even be able to hand over the entire data set. The white paper also calls for further measures to develop artificial intelligence responsibly, including human oversight of AI systems.
The white paper on artificial intelligence is part of the European Commission’s broader plans for data and artificial intelligence. It does not yet contain concrete proposals, but rather the principles that such proposals must comply with. In setting those out, the Commission focuses mainly on ensuring that the technology is trustworthy.