
Nick Bostrom


Best known for:

Professor Nick Bostrom is a Swedish philosopher at the University of Oxford, known for his work on existential risk and artificial intelligence, and for serving on DeepMind's ethics board.


Nick Bostrom is a Professor at the University of Oxford, where he is the founding Director of the Future of Humanity Institute and works on AI policy and the foundations of macrostrategy. He also advises actors in the AI space as a member of DeepMind's ethics board.


Nick Bostrom has a background in physics, artificial intelligence, and mathematical logic as well as philosophy. He is the recipient of a Eugene R. Gannon Award, given annually to one person worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences. He has been listed twice on Foreign Policy's Top 100 Global Thinkers list, and he was included on Prospect magazine's World Thinkers list as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages, and there have been more than 100 translations and reprints of his works.

Bostrom envisioned a future full of human enhancement, nanotechnology and machine intelligence long before these became mainstream concerns. From his famous simulation argument (which identified some striking implications of rejecting the Matrix-like idea that humans are living in a computer simulation) to his work on existential risk, Bostrom approaches both the inevitable and the speculative using the tools of philosophy, probability theory, and scientific analysis.

Since 2005, Bostrom has led the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists at Oxford University tasked with investigating big-picture questions about the human condition and its future. Nick also directs the Strategic Artificial Intelligence Research Center and has been referred to as one of the most important thinkers of our age.

Nick is the author of some 200 publications, including:

  • Anthropic Bias (2002)
  • Global Catastrophic Risks (2008)
  • Human Enhancement (2009)
  • Superintelligence: Paths, Dangers, Strategies (2014) – A New York Times bestseller

Nick’s recent work includes:

  • Policy Desiderata in the Development of Machine Superintelligence
  • Strategic Implications of Openness in AI Development
  • How Hard is AI? Evolutionary Arguments and Selection Effects

Fee range:

Please Enquire