Hacker News new | past | comments | ask | show | jobs | submit login




Overview of FHI:

- Founded in 2005 by Prof Nick Bostrom as a multidisciplinary research group at Oxford to study big-picture questions for human civilization

- Aimed to shield researchers from ordinary academic pressures and foster creativity and intellectual progress

History:

Early days (2005-2010):

- Initial funding from James Martin and the Oxford Martin School

- Focus on human enhancement ethics, global catastrophic and existential risks, methodology for thinking about uncertain futures

- Hosted influential conferences and workshops that helped build academic communities around key research areas

Maturation (2010-2020):

- Artificial intelligence, especially AI safety and governance, became a major focus

- Expanded into other areas like biosafety, priorities research, population ethics

- Received major grants allowing expansion of research

- Increasing policy impact and advising governments

Final Years (2020-2024):

- Continued work through the COVID-19 pandemic after moving to a new building

- Some new research directions like digital minds and grand futures

- But also increasing bureaucratic obstacles from the Philosophy Faculty

- FHI was closed down in April 2024 when the University declined to renew staff contracts

Research Topics and Findings:

- Existential risk - pioneered the study of risks that threaten humanity's long-term potential

- Biological risk - modeled risks from emerging biotechnologies and pandemics

- Macrostrategy - studying how long-term outcomes for humanity connect to present-day actions

- Longtermism - the idea that positively influencing the long-term future is a key moral priority

- Grand futures - exploring the limits of what spacefaring civilizations could accomplish

- SETI/Fermi paradox - dissolving the paradox; grabby aliens hypothesis

- Effective altruism - identifying the highest-impact ways to improve the world

- Technology: AI safety, alignment and governance; whole brain emulation; digital minds and AI consciousness; human enhancement ethics

- Epistemology and rationality - anthropic reasoning, information hazards, moral uncertainty

- Ethics - challenges in infinite ethics, Parliamentary model for normative uncertainty

Concepts originated at FHI that are now influential: existential risk, astronomical waste, information hazards, differential technological development, crucial considerations, exploratory engineering, whole brain emulation

Learnings and Advice:

- Take the long-term view; build up new fields even if not currently fashionable

- Have a diverse team from many disciplines

- Invest in the right organizational relationships to maintain stability

- As an organization scales, its structure needs to evolve

- The key to replicating FHI is having the right people and intellectual culture focused on the most important questions


Excuse me, but this is incorrect:

> Concepts originated at FHI that are now influential: existential risk, astronomical waste, information hazards, differential technological development, crucial considerations, exploratory engineering, whole brain emulation

None of these originated at FHI. FHI may have considered them important and discussed them extensively, but they all predate FHI, in most cases by decades.




