I have experience developing automated underwriting models in the insurance industry. As the models become more sophisticated and adopt machine learning, heavier scrutiny is coming from regulators.

For good reason, guardrails are required not only to protect against direct discrimination, but also to prevent the use of proxies. For example, we can't include race in health and life underwriting decisions. But since zip code is highly predictive of race, that attribute must be excluded as well.
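To make the proxy idea concrete, here's a minimal sketch of one common check: if a candidate feature can predict the protected attribute well on its own, it's likely acting as a proxy. The column names, data frame, and cutoff below are hypothetical, not from any real underwriting system.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def proxy_strength(df: pd.DataFrame, candidate: str, protected: str) -> float:
        """AUC of predicting a binary protected attribute from one candidate feature."""
        X = pd.get_dummies(df[[candidate]].astype(str))  # one-hot encode the feature
        y = df[protected]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

    # auc = proxy_strength(applicants, "zip_code", "race_indicator")  # hypothetical columns
    # An AUC well above 0.5 means zip code alone largely reconstructs the
    # protected attribute; where to draw the line is a policy question.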

I'm not familiar with banking regulations, but I imagine similar policies apply. In these cases, being able to demonstrate that a model isn't discriminating is not only ethically important but in many cases legally required.




In banking it's about more than discrimination: you also need to show that there are no side channels through which insider information might be fed into the model.


This comment is very, very true.

While you as an individual may not be able to get details of the algorithms and methods used, regulators receive exhaustive documentation about every aspect of them.

With GDPR, explainability is going to become a requirement, but it will probably take some test cases before this happens.
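To show what "explainability" can mean in practice: a minimal sketch, assuming a plain logistic regression model, where each feature's contribution to the log-odds is simply coefficient times value. All feature names and data here are invented for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["age", "bmi", "smoker"]  # hypothetical inputs
    X = np.array([[30, 22.0, 0], [55, 31.5, 1], [42, 27.0, 0]], dtype=float)
    y = np.array([0, 1, 0])              # 1 = declined, say

    model = LogisticRegression().fit(X, y)

    def explain(applicant):
        """Per-feature contribution to the decision logit, sorted by impact:
        the kind of per-decision record a regulator could ask to see."""
        contribs = model.coef_[0] * applicant
        order = np.argsort(-np.abs(contribs))
        return [(features[i], float(contribs[i])) for i in order]

    print(explain(X[1]))

For anything nonlinear you'd need model-agnostic tooling instead, but the requirement is the same: a per-decision record of why.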



