Hacker News

Thanks, Andrew. I'm happy to send links to a free read-only version of the standard if helpful, or to do a webinar with Andrew on 7010 to demonstrate the moderate AI-ethicist stance I hope I embody (though I don't want to focus on titles too much). My ideology or agenda, as it were, is that AI governance should prioritize ecological flourishing and human wellbeing at the outset of design, which also means the outset of funding. Accountability then moves from a focus on the output of one AI system or product to how the people making and releasing it demonstrate their ongoing, full-value-chain commitment to giving back more to the planet than they take, and to creating genuine, symbiosis-level, caregiving-oriented value for end users.



That's an interesting perspective. What gives you hope that AI will ensure human symbiosis when traditional software models (arguably) failed to do so?

The best analog that comes to mind for me is Open Source software, with viral licenses that impose a literal obligation to "give back" to the community. Helpful as that is, Open Source software still consumes power and can't ensure ecological symbiosis with its users (even if it's ethically superior to proprietary alternatives). With that in mind, I'm curious how AI licensing differs, and how its increased cost of training/execution would be subsidized by the value of the gross output.

The other, more common question that comes to mind is enforcement. In your "agenda," as it were, would AI governance be obligatory or optional? Should we draw the line differently for research, nonprofit, and commercial entities? In a traditional economy, the existence of Open Source software has enabled better value extraction for businesses that ultimately do not prioritize environmental concerns. Along the same line of thought as the first question, I'd be interested to hear how AI governance can avoid overreach while still addressing the same problem we had with traditional software: excess value gets created that mostly does not benefit the ecology or the greater good.

This is something I'm very interested in generally, but I question whether we have the framework to actually achieve meaningful human-AI symbiosis. Open Source succeeded in its goal of subverting copyright and capitalism by carefully following the rules and managing its expectations from the start. I worry that you're biting off more than you can chew in asking for human-computer, human-AI, or even AI-ecology symbiosis. I'd be glad to summon another boffin who can prove me wrong though :P


The broader point that John is making, which was central to the thesis of the standard, is that we have to entirely rethink software engineering paradigms, and engineering paradigms generally, to include at every step a question of human-centered externalities.

That is just not something built into anything since the Norbert Wiener cybernetics shift of the late 1950s and 1960s, which was just totally blown out of the software side of engineering.


I wish you luck. I have limited perspective, but I'd wager that the externalities of greed and human conflict will prevail over the thoughtful and pragmatic limitation of technology for human benefit. I hope I'm wrong (for everyone's sake).


Yes, well, that's basically what I'll be working on for the rest of my life.

I've been working on ASI for two decades, and now that we're on the verge of achieving it, I'm switching to working on alternative socio-economic systems to capitalism, so that ASI doesn't end up controlling human systems.





