Superintelligence – Nick Bostrom – Book Summary

  • Post category: Book Notes

Superintelligence

Book Summary:

How would it impact the human race if we created a machine that was smarter than us?

Related Book Summaries:

Collapse – Jared Diamond – Book Summary

Antifragile – Nassim Nicholas Taleb – Book Summary

Sapiens – Yuval Noah Harari – Book Summary

Quotes:

Some little idiot is bound to press the ignite button just to see what happens.

Book Summary Notes:

  • If the future brings superintelligent AI, what will it look like and when will it appear?
  • Some of the fundamental traits that set us apart from other animals include abstract thinking and communication.
  • Revolutions have been occurring with increasing speed over the last few centuries.
  • We’ve already created technology which can learn and grow from the initial information we gave it. A simple example is the spam filter in your email.
  • The 1990s were the first time AI began to grow and learn using structures similar to our own neural networks; this led to the AI now present in everything from our smartphones to Google.
  • AI still has limits, however: an AI can be programmed to play a specific game and win, but it cannot yet be programmed to learn any game put before it and win.
  • Most experts believe the advent of machines as smart as us may happen around 2075, and 30 years later we may witness superintelligence begin to emerge.
  • There are two options for building AI: build a machine capable of processing huge amounts of data stored within itself to calculate probabilities and act on them, or build what Alan Turing called a ‘child machine’, one given only the basic fundamentals of learning and allowed to develop from there through experience.
  • Most of humanity’s biggest advances were made in one of two ways: either through huge collaboration between many people or even countries, or through the work of one scientist or group who got significantly ahead of their peers. Each path has different implications for how AI might arrive.
  • A superintelligence would need to learn human values if we are to survive alongside it.
  • We could teach AI to act in alignment with core human values.
  • Another problem to deal with is that machines would likely replace the entire human workforce, given time.
  • This could lead to a future where those without means are almost permanently stuck in the lower socioeconomic classes and unable to compete with the rich and their machines.
  • No matter what, safety must be the top priority while developing AI that could one day possess superintelligence.
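
The spam filter mentioned in the notes above is a classic example of a system that learns from the data it is given. As a rough illustration (my sketch, not from the book), one common approach is a naive Bayes classifier that updates word counts from labeled messages and scores new messages against them:

```python
from collections import Counter
import math

class NaiveBayesSpamFilter:
    """A minimal naive Bayes spam filter: it 'learns' word frequencies
    from labeled examples and keeps improving as more labeled
    messages arrive."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # Update word and message counts with one labeled message.
        self.message_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        # Score each class as log prior + sum of log word likelihoods
        # (with add-one smoothing) and return the higher-scoring class.
        total_msgs = sum(self.message_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            vocab = len(self.word_counts[label]) + 1
            total_words = sum(self.word_counts[label].values())
            score = math.log(self.message_counts[label] / total_msgs)
            for word in text.lower().split():
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total_words + vocab))
            scores[label] = score
        return max(scores, key=scores.get)
```

For example, after training on a few labeled messages (`f.train("win free money now", "spam")`, `f.train("meeting tomorrow at noon", "ham")`), `f.predict("free money now")` classifies the message as spam. This is the "learn and grow from the initial information we gave it" idea in miniature: every new labeled message shifts the filter's behavior.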