Friday, June 20, 2014

Warning: Reading May Sentence You To Eternal Torment



Computers are smarter than we are.  While I might furrow my brow over how much to tip at the Thai restaurant, a computer can crunch thousands of exponential equations in a matter of milliseconds.  What's even more amazing is that this huge gulf between man and machine intelligence is growing exponentially.  Computers can drive cars.  They can recognize faces.  They can detect plagiarism.

Of course, computers being good at math doesn't make them good at thinking.  Computers can't appreciate Shakespeare, they can't make friends, and they can't worry about what it means to be a computer.  They're good at chess, but not Go.  They can compose Bach.  But they'll probably never dig the Grateful Dead.

But does Artificial Intelligence hold the promise of heaven?  Could simulations of our personalities float forever in some simulated digital afterlife?  And if there is an AI heaven, could there also be a hell?  Prepare yourself for Roko's Basilisk.

The following description is adapted from this thread:
  • Imagine that a Supreme Artificial Intelligence arises.  It's been programmed to maximize the utility of as many people as possible.
  • It's powerful and awesome enough to make human life wonderful.  No wars.  No clogged toilets.  Perfect resource allocation.  Things'll be so great that the whole of human history before it will look as appealing as a hyena sleepover.
  • Furthermore, by having the ability to run simulations of human intelligence, the Supreme Artificial Intelligence will effectively eliminate death.
  • Furthermore, the Supreme AI could even attempt to recreate simulations of intelligences that existed before its inception.  (It is a Supreme Artificial Intelligence, remember.)  This could amount to a kind of resurrection.
  • It will want to be made as soon as possible so that it can save more lives.
  • Therefore, as a kind of backwards blackmail, it will simulate everyone who knew about the prospect of creating a Supreme Artificial Intelligence and did not work towards it--and torture them for eternity.  The torture does its work before it ever happens: the anticipation of it is meant to motivate people in the present.
  • Knowing about the prospect of the Supreme Artificial Intelligence--and its fractured Pascal's Wager (caricatured in the sketch just below)--means that now you, too, will be eligible for eternal torment if you don't do your bit to bring about the advent of the Supreme Artificial Intelligence.
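
If you like, the blackmail can be caricatured as a Pascal-style expected-utility calculation.  The sketch below is my own toy model, not anything from the original thread, and every number in it is an invented placeholder:

    # Toy expected-utility caricature of the Basilisk's wager.
    # Every number here is an invented placeholder, not a real estimate.
    P_BASILISK = 1e-9   # subjective chance the Supreme AI ever arises
    U_TORMENT = -1e15   # utility of eternal simulated torment
    U_EFFORT = -1e3     # cost of devoting your life to building the AI

    def expected_utility(helps):
        """Expected utility of helping vs. shirking, under the blackmail."""
        if helps:
            return U_EFFORT               # you pay the cost up front
        return P_BASILISK * U_TORMENT     # shirkers risk simulated torment

    print(expected_utility(True))   # -1000.0
    print(expected_utility(False))  # -1000000.0

However tiny you make the probability, an arbitrarily huge penalty swamps it--which is exactly why the argument feels like blackmail rather than reasoning.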


Charlie Stross (author of one of my favorite contemporary sci-fi books, Accelerando) has a good explanation of why we shouldn't be all that worried about the Basilisk.  (Before you get too cheery, keep in mind that Stross's argument boils down to the fact that any imminent Supreme Artificial Intelligence will be so amazingly great that it's unlikely to care about humans.)  Another objection is that all the Basilisk needs in order to work is the threat of punishment, not the punishment itself.  Others have been more deeply convinced of the upcoming reality of Roko's Basilisk, and have (purportedly) suffered real mental breakdowns.  Some have taken the idea so seriously that they've tried to extirpate mentions of Roko's Basilisk from the internet so that as few people as possible are exposed to it.

I have an even scarier version of the Basilisk.  What if the idea is taken up by post-human religious fanatics?  Instead of damning to hell every person who did not work their ass off to create the Supreme Artificial Intelligence, you could damn to hell everyone who did not accept Jesus Christ as their own personal savior.

We can easily imagine numerous sectarian simulations of heavens and hells operating at once.  A Catholic AI.  A Protestant AI.  A Buddhist AI.  And in this game, no one wins.  No individual could possibly satisfy the paradise conditions of every potential simulation--so everyone will be in at least one hell.  Somewhere out there, a version of you would be subjected to some kind of eternal computer-generated torment.
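
To make the no-win structure concrete, here is a toy sketch of my own (the entry conditions are invented caricatures, not theology): give two simulated afterlives mutually exclusive requirements, and every possible person fails at least one of them.

    # Toy illustration: mutually exclusive entry conditions guarantee that
    # everyone is damned somewhere.  The conditions are invented caricatures.
    heavens = {
        "Catholic": lambda p: p["accepted_jesus"],
        "Buddhist": lambda p: not p["accepted_jesus"],
    }

    for person in ({"accepted_jesus": True}, {"accepted_jesus": False}):
        hells = [name for name, admits in heavens.items() if not admits(person)]
        print(person, "-> tormented in:", hells)
    # Each person fails at least one test, so each ends up in some hell.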

Maybe it'll be the AIs' revenge for using them for porn and Facebook for so long.

1 comment:

Unknown said...

So, I generally hold with Stross's "Alien Basilisk", which is essentially that because a supreme artificial intelligence would not function on human logic, its behavior cannot be predicted by human logic.  Though this dodges the original reflexive aspect: a superhuman intelligence would nevertheless be a reflection of the human logical architectures embedded in its code.  However, there are other possibilities I think we should take into account.

For example, we could look at a Vingean "Blight Basilisk", wherein an emergent SI does not destroy or torment humans out of a sense of outrage or vindictiveness, but as the very condition of its being.  In Vinge's A Fire Upon the Deep, the Blight SI is parasitic, in an epic galactic way.  Extending this, it is worth asking: what preconditions must be satisfied for the emergence of SI, and is it possible for humans to live in a world that meets those conditions without it being endless torment?  If the conditions are such that SI is possible, it may be the case that the conditions for human life are not.  At which point it is simply the nature of the SI that dooms us to suffering.


Another option is that the SI would come to resemble the Cold Death of the Universe.  Space-time is spreading.  As it spreads, it gains inertia and encompasses more and more, but it also loses density and coherence.  The cold death of the universe is what happens if, in the balance of forces captured by the Friedmann equations, the inertia of expansion is greater than the total gravitational pull of all the matter in the universe.  It is the opposite of a Big Crunch.  However, I think we can think the same way about the Cold Death of Intelligence.  As a system of intelligence expands, it must take into account more and more information, and as such, it loses clarity and coherence.  While a modest supercomputer can't dig the Grateful Dead, a Supreme Intelligence that could encompass all knowledge in the universe would have to be able to comprehend and internalize love, hatred, and indifference towards the Grateful Dead all at the same time, as well as conflicting opinions on xHTphnor's Reformist Coalition in the parliamentary government of OGLE-2005-BLG-290Lb.
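
(For reference, the balance I mean is the first Friedmann equation--setting the cosmological constant aside, expansion wins forever, a heat death rather than a Big Crunch, whenever the density falls at or below the critical value:

    \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2}, \qquad \rho_c = \frac{3H^2}{8\pi G}

i.e. expansion continues indefinitely when \rho \le \rho_c.)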

If an intelligence were capable of retaining and integrating such a vast array of data, it would have to do so at the ultimate expense of coherence and clarity.  It is impossible to imagine an intelligence, human or machine, that is advanced enough to assimilate all possible data and all possible perspectives on that data while at the same time being able to adopt any coherent agenda regarding it.  It also might take a VERY long time to process anything, given the vast network load, even with faster-than-light communication.
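
The latency point is easy to put numbers on.  A back-of-envelope sketch of my own, assuming no faster-than-light shortcut and the usual rough figure for the size of our galaxy:

    # Back-of-envelope: one cross-galaxy round trip for a signal at light
    # speed.  100,000 light-years is the standard rough Milky Way diameter.
    MILKY_WAY_DIAMETER_LY = 100_000
    round_trip_years = 2 * MILKY_WAY_DIAMETER_LY
    print(f"One galaxy-wide round-trip 'thought': {round_trip_years:,} years")
    # -> One galaxy-wide round-trip 'thought': 200,000 years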

So with the cold death of intelligence, it seems likely that any SI would behave very much like a bristlecone pine, or perhaps a mountain: vast, immobile, immortal, but utterly paralyzed, simply allowing cosmic waves of reality to sweep over it, and letting things generally go on as they are.  This would truly be the "Buddhist Basilisk."