I think it raises larger issues, because as far as NLP goes - yeah - making a classifier isn't hard (as the article states).
But what happens when an algorithm that analyzes data objectively presents a result that would be deemed 'racist' if it came from a human?
Yeah, this is useful in a corporate or marketing setting, but as we start to integrate more NLP tech into interpreting ML/statistical results, I don't think I'd want to inject bias and risk missing out on difficult yet important truths.
Tay was just a marketing gimmick though - the biggest thing I took away from it was that a major corporation can blame AI for a faux pas and nobody will hate them for it. In 5 years, the "someone hacked my Twitter" excuse will be replaced by "oops, our AI made a boo boo"