Machine learning models could become a data security disaster

Researchers have found that malicious actors can force machine learning models to leak sensitive information by poisoning the datasets used to train them.

A team of experts from Google, the National University of Singapore, Yale-NUS College, and Oregon State University has published a paper, "Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets", which describes how the attack works.
