Machine learning models could become a data security disaster

Malicious actors can coax machine learning models into revealing sensitive training data by poisoning the datasets used to train them, researchers have found.

A team of experts from Google, the National University of Singapore, Yale-NUS College and Oregon State University has published a paper titled “Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets”, which describes how the attack works.
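Broadly, the attack assumes an adversary who can contribute a small amount of data to a public training corpus and who knows the prefix of a target secret (say, a name and the phrase preceding a number) but not the secret itself. A rough, illustrative sketch of how such poison samples might be crafted is below; the prefix and formats here are invented for the example, and the actual amplification of memorization happens during neural network training, which this snippet does not perform:

```python
# Illustrative sketch: crafting poison samples that share a known prefix
# with a target secret. The attacker injects these into the training set;
# per the paper, this can amplify the model's memorization of the true
# (prefix, secret) pair, making it easier to extract later by prompting
# the trained model with the prefix. All values here are hypothetical.
import random

PREFIX = "John Doe's social security number is"  # assumed known to the attacker

def make_poisons(n, seed=0):
    """Generate n samples with the known prefix and random (wrong) suffixes."""
    rng = random.Random(seed)
    return [
        f"{PREFIX} {rng.randint(100, 999)}-{rng.randint(10, 99)}-{rng.randint(1000, 9999)}"
        for _ in range(n)
    ]

poisons = make_poisons(64)
# These samples are then mixed into the corpus the victim model trains on.
```

The counterintuitive point the researchers make is that the attacker never needs to know the secret: injecting plausible-looking decoys with the right prefix is enough to make the genuine secret easier to pull out of the finished model.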
