
This is definitely true. In fact, it can be exploited to extract sensitive/private attributes of the training data from the learned model, which may become an issue for, e.g., AI in healthcare.

"Overlearning Reveals Sensitive Attributes": https://arxiv.org/abs/1905.11742



