AI machine-learning models vulnerable to reverse engineering

In a paper [PDF] presented in August at the 25th USENIX Security Symposium, researchers at École Polytechnique Fédérale de Lausanne, Cornell University, and the University of North Carolina at Chapel Hill showed that machine-learning models can be stolen and that basic security measures do little to mitigate such attacks.

Machine learning models may, for example, accept image data and return predictions about what’s in the image.
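For a concrete picture, here is a minimal sketch of the kind of confidence-scored response such a prediction service might return; the field names and values are purely illustrative and not taken from BigML, Amazon Machine Learning, or any other real API.

```python
# Illustration only: the rough shape of a confidence-scored prediction
# a hosted image-classification service might return. Labels, scores,
# and field names are hypothetical.
prediction = {
    "input": "photo_1234.jpg",
    "predictions": [
        {"label": "cat", "confidence": 0.92},
        {"label": "dog", "confidence": 0.06},
        {"label": "fox", "confidence": 0.02},
    ],
}

# Pick the highest-confidence class.
best = max(prediction["predictions"], key=lambda p: p["confidence"])
print(f"{best['label']} ({best['confidence']:.0%})")  # -> cat (92%)
```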

Taking advantage of the fact that machine-learning models accept arbitrary input queries and often return predictions with percentages indicating confidence of correctness, the researchers demonstrate “simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees.”
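The equation-solving idea is easiest to see for logistic regression: the returned confidence is sigmoid(w·x + b), so its logit is linear in the unknown parameters, and roughly d + 1 independent queries pin them down exactly. The Python sketch below simulates this against a locally defined "victim" model; the query() helper, feature count, and parameters are assumptions for illustration, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # number of input features (assumed known to the attacker)

# Hidden parameters of the simulated "victim" logistic regression.
true_w = rng.normal(size=d)
true_b = rng.normal()

def query(x):
    """Stand-in for the prediction API: returns the class-1 confidence."""
    return 1.0 / (1.0 + np.exp(-(true_w @ x + true_b)))

# confidence = sigmoid(w.x + b), so log(p / (1 - p)) = w.x + b.
# d + 1 linearly independent queries give a solvable linear system
# that recovers (w, b) -- no training data or gradients required.
X = rng.normal(size=(d + 1, d))
logits = np.array([np.log(p / (1 - p)) for p in (query(x) for x in X)])

A = np.hstack([X, np.ones((d + 1, 1))])  # last column corresponds to the bias b
params = np.linalg.solve(A, logits)
stolen_w, stolen_b = params[:d], params[-1]

print("max weight error:", np.abs(stolen_w - true_w).max())
print("bias error:", abs(stolen_b - true_b))
```

Against a real service the query() call would be an HTTP request to the prediction endpoint, and the recovered parameters would reproduce its outputs essentially exactly.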

That’s a polite way of saying such models can be reverse engineered. The researchers tested their attack successfully against BigML and Amazon Machine Learning, both of which were notified of the findings in February.

Source: How to steal the mind of an AI: Machine-learning models vulnerable to reverse engineering
