IBM wants to protect machine learning models with 'watermarks'


IBM has presented a method for watermarking machine learning models. Among other things, this should make it possible to check whether a model that is offered online has been stolen.

IBM writes that artificial intelligence research often involves significant investment, which makes it important to protect developed models against unauthorized use. The idea of watermarking machine learning models is not new, but unlike previous attempts, the method IBM describes in a paper should make it possible to identify a protected model without access to its parameters. That is useful when the model can only be reached remotely, for example through an API.

The idea is to train the model in advance so that a specific input, for example a particular image, produces a predetermined output, without this coming at the expense of the model's accuracy on normal data. The company describes three watermark generation algorithms: one embeds 'meaningful content' into the training data, another uses unrelated data samples, and a third uses noise. The watermarking method, for which IBM says it has applied for a patent, should also offer protection against 'counter-watermark mechanisms'.
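To make the general trigger-based idea concrete, the sketch below shows how a 'meaningful content' watermark could work with a PyTorch image classifier. This is an illustration of the technique, not IBM's actual implementation: the function names, the corner-patch trigger, the watermark label, and the thresholds are all assumptions. A small fraction of training images gets a fixed pattern stamped in and is relabelled to a predetermined class; ownership can later be checked in black-box fashion by querying the deployed model with trigger inputs.

```python
# Minimal sketch of trigger-based model watermarking (illustrative only).
# Assumes a PyTorch classifier and image batches in NCHW format.
import torch
import torch.nn as nn

WATERMARK_LABEL = 7  # predetermined output class for trigger inputs (assumption)


def overlay_trigger(images: torch.Tensor) -> torch.Tensor:
    """Stamp a small fixed pattern into each image ('meaningful content'
    style). The pattern acts as the secret watermark key."""
    marked = images.clone()
    marked[:, :, -4:, -4:] = 1.0  # 4x4 white patch in the bottom-right corner
    return marked


def watermarked_batch(images: torch.Tensor, labels: torch.Tensor, frac: float = 0.05):
    """Mix a small fraction of trigger samples, relabelled to the
    predetermined class, into a training batch so that accuracy on
    normal data is largely preserved."""
    n = max(1, int(frac * len(images)))
    trig_images = overlay_trigger(images[:n])
    trig_labels = torch.full((n,), WATERMARK_LABEL, dtype=labels.dtype)
    return torch.cat([images, trig_images]), torch.cat([labels, trig_labels])


@torch.no_grad()
def verify_remote(model: nn.Module, probe_images: torch.Tensor,
                  threshold: float = 0.9) -> bool:
    """Black-box ownership check: query the (possibly stolen) model with
    trigger inputs and test whether it returns the predetermined label
    far more often than chance. Only predictions are needed, not the
    model's parameters."""
    preds = model(overlay_trigger(probe_images)).argmax(dim=1)
    match_rate = (preds == WATERMARK_LABEL).float().mean().item()
    return match_rate >= threshold
```

In this sketch, each training batch would pass through watermarked_batch before the usual loss computation, and verify_remote afterwards needs nothing more than the model's predictions, matching the remote-access setting the paper targets.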

The limitation of this approach is that a model cannot be checked if it is not offered online but only used internally: in that scenario there is no remote access, although according to IBM there is then also no way for a thief to monetize the model. The method also does not prevent theft itself; it only makes it possible to demonstrate theft afterwards. The company says it first wants to use the method internally and then investigate whether it can be offered as a service.
