
Thanks for the link. I found a few interesting topics in there.

Another, from one of the other execs on stage, was a categorization of the types of AI risk people discuss, rather than lumping everything together under "safety". https://youtu.be/ZFFvqRemDv8?t=1430

1. GPT outputs - toxic, biased, or non-factual content

2. System usage - misinformation, disinformation, impersonation (e.g., voice)

3. Society/work - impacts on the workforce, education, job replacement, decision making

4. Safety - the more sci-fi-style existential safety discussion

It's an insightful distinction, and something I rarely see discussed with the same rigor or granularity here on HN. People often pick one strawman from the list and then argue for or against all four categories using that one.
