‘Government algorithms are not black boxes, but they do have risks’


Government use of algorithms carries a risk that citizens will be discriminated against. However, no ‘black box’ algorithms are currently in use, the Court of Audit concludes in a report.

The Court of Audit conducted an investigation into the use of algorithms in government. In the study, the agency concludes that the government uses many algorithms, but that they are mostly simple and used for simple purposes. Nowhere did it find a so-called ‘black box’ whose workings are unclear; the Court of Audit was able to establish how every algorithm works. The government must continue to pay attention to this in the future, the agency warns. Transparency must remain a priority in development, especially when purchasing algorithms from private parties.

The government does not use self-learning algorithms, says the Court of Audit; it does use learning algorithms. In most cases they are used to automate tasks. As an example, the Court of Audit mentions an algorithm that determines, on the basis of an if-then decision tree, whether a homeowner is entitled to a subsidy for a national monument. Moreover, no algorithms are in use that perform fully automated decision-making. A civil servant is always involved who checks and confirms the result, even with algorithms that perform ‘risk prediction’. That is where risks lie, however, warns the Court of Audit. For example, such algorithms may violate the law and discriminate. “There is also a chance that the algorithm’s advice will influence the employee’s final decision,” the report says.
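An if-then decision tree of the kind the report describes is a fixed chain of rules rather than a learning model. A minimal sketch in Python, assuming hypothetical eligibility criteria (registered monument status, owner-occupancy, positive maintenance costs) purely for illustration; these are not the actual subsidy rules:

```python
# Hypothetical if-then decision tree for monument-subsidy eligibility.
# The criteria below are illustrative assumptions, not the real rules:
# each condition is a fixed branch, so the outcome is fully traceable,
# unlike a learned model.

def monument_subsidy_eligible(is_registered_monument: bool,
                              is_owner_occupant: bool,
                              maintenance_costs: float) -> bool:
    """Walk a simple if-then decision tree to a yes/no outcome."""
    if not is_registered_monument:
        return False          # property must be a registered monument
    if not is_owner_occupant:
        return False          # subsidy assumed to target owner-occupants
    if maintenance_costs <= 0:
        return False          # there must be costs to subsidise
    return True               # all conditions met: eligible

print(monument_subsidy_eligible(True, True, 1500.0))   # True
print(monument_subsidy_eligible(False, True, 1500.0))  # False
```

Because every branch is explicit, a civil servant (or auditor) can trace exactly why a given application was approved or rejected, which is what makes such algorithms transparent rather than a ‘black box’.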

The latter category is where most of the risks lie. The citizen is ‘not central’ here: it is unclear to citizens where they can go if they have questions about algorithms that process their data, or if they want to object. “We recommend that the government provide citizens in a logical place with insight into which data is used in which algorithms, how those algorithms function in outline and what impact the results have,” the Court of Audit writes. The government uses the algorithms for its own purposes; attention is paid to privacy, but too little to ethical issues.

The Court of Audit notes in the investigation that this picture may not be complete: the overview is based on descriptions it had requested from the ministries themselves.

The Court also says that governments currently have no way to assess which algorithms may and may not be used. The Court of Audit has therefore drawn up an assessment framework itself, which includes ethical questions and questions about citizens’ privacy.

Government algorithms have been under scrutiny for years, in particular the System Risk Indication, or SyRI. The Tax and Customs Administration also used algorithms to draw up risk profiles of possible childcare allowance fraudsters, which led to ethnic profiling and ultimately to the childcare benefits scandal. Algorithms will be an important issue in the upcoming elections: most political parties argue for more openness about government algorithms, and many parties want a dedicated supervisory authority.
