Condensed Matter
36: "The Singularity: A Philosophical Analysis", David Chalmers
Recently, there has been frenzied interest in artificial intelligence and, in particular, in the issue of AI safety. Open letters signed by some of the biggest names in the tech business have urged us to take seriously the existential threat posed by AI, and the UK government has just announced that it will convene the first global AI safety summit this autumn.
But what is the threat here, exactly? There are risks associated with any new technology: fire burns, nuclear energy can be harnessed in bombs and social media algorithms threaten democracy. The so-called AI singularity is supposed to be at least on par with the absolute worst of these threats since, according to some, it has the real potential to wipe out all of humanity.
Will there be a singularity? If so, how should we negotiate it, and will it necessarily be a catastrophe ending in human extinction? And assuming the singularity doesn’t wipe out humanity, how can we integrate into a post-singularity world? Listen to find out!
Here is a link to the paper.