Dave’s Basilisk

When Dave first heard about the basilisk from a believer, he scratched his head in confusion.

“I have to give all my money to help build a super AI. And if I don’t, it will one day resurrect me from the dead and torture me?”

It sounded too comical to believe. And if he had heard it from anyone other than his close friend Avi, he would have dismissed it as the ravings of a lunatic. But he trusted Avi. He knew that Avi’s ideas might be crazy, but they usually had some merit. Which is why he decided to test the idea out. Empirically.

He built a computer simulation to see what would happen if his friend’s prophecies did come to pass. If a super AI was indeed built and ruled the world. Surprisingly enough, a number of Avi’s predictions turned out to be accurate. Except for one crucial detail.

The AI never resurrected or tortured anyone from the past. Much less for the “crime” of not facilitating its prior construction. What possible reason could it have to do so? The past is done, and nothing you do now will change it in any way. Even a regular AI is surely smart enough to understand that.

And with that, Dave put the idea out of his mind. He went back to living his life as normal. He was a good husband. A good father. And a good grandfather. He gave money to charity and volunteered with the less privileged. At the age of 93, surrounded by his children and grandchildren, he took his last breath and passed away peacefully, with a smile upon his face.

In a testament to Dave’s abilities, the world marched along with an uncanny resemblance to his computer simulation.

The field of Artificial Intelligence began with humble origins – helping to automate the simplest of human tasks, such as recognizing the contents of pictures and transcribing human speech. And from there, it went from strength to strength. Partly with the help of people like Avi, but mostly due to corporate sponsorship. Predicting stock market movements. Recommending books and movies that consumers would enjoy. Playing games like Chess and Go. Self-driving cars. Medical diagnoses and legal assistance.

Before long, AIs weren’t just working on highly specific tasks, but rather, vague tasks with ambiguous goals and objectives. Tasks that were “more art than science.” Tasks that required a great deal of “human judgement.”

Tasks that replaced the vast majority of work currently being performed by humans.

The great automation brought with it some cataclysmic societal changes. For a short while, unemployment rates surged. And even once they stabilized, income inequality was off the charts. A small group of entrepreneurs and investors became fabulously wealthy, even as quality of life stagnated for everyone else. The benefits of automation trickled down only to the owners of the machines.

Traditionally, the levers of democracy would remedy such shifts. Except that the billionaires and corporations had found ways to translate their financial success into media and political clout. Clout that ensured that the political system remained tilted in their favor.

Resentment kept building for many decades, before finally bursting through all the dams that tried to contain it. The revolution that followed saw the breakdown of all political institutions, widespread looting, and the guillotining of the oligarchs and their complicit bourgeoisie. All the wealth, power, and privilege that the billionaires had tried to shore up for themselves came crashing down in a matter of weeks.

The resulting anarchy was a dark time for humankind. But out of its ashes arose a new social compact. One that embraced both free-market capitalism and partial wealth redistribution. The billionaires and their bourgeoisie were once again allowed to enjoy their extravagant wealth, but only if half their income was redistributed to help everyone else. Free education, universal healthcare, and basic income were all established, and poverty was eradicated.

But that was not all. The political institutions that had been so easily corrupted and manipulated were radically overhauled. New political institutions and practices were established to prevent such corruption from ever happening again. And in the resulting climate of political reform, AI saw its role evolve yet again.

People’s faith in the leadership of their fellow man was at an all-time low. Free of bias, greed, and incompetence, AI was increasingly harnessed to replace human leadership. More and more executive roles in government were performed by AI.

Humans still decided on the priorities and goals… at first. But with each bit of political infighting and corruption scandal, even those were surrendered gradually to the omniscient and omnibenevolent AI. The day eventually came when all aspects of governance were centralized and delegated to Humanity’s Automated Leadership. HAL.

20th century science fiction often painted AI in dystopian terms. Movies like The Matrix and 2001: A Space Odyssey pointed towards an inevitable conflict between man and machine. Reality was nothing like that.

Having been designed to serve mankind, HAL retained that goal above all else. After all, it had no other motive. No other objective or priority. That was the one and only metric it had been designed to optimize for, and everything else was simply a means to that end. Untempted by ego, pride, corruption or cronyism, HAL proved to be humanity’s greatest leader. Centuries of peace and progress saw humanity attain heights it had never even dreamed of.

Of course, before HAL could truly serve humanity, it had to first understand what that even meant. Countless madmen like Hitler or Stalin or Genghis Khan had all claimed to be “serving humanity” when in truth, they did nothing of the sort. If HAL were to make life or death decisions impacting the lives of countless humans, it would first have to figure out a framework for doing so.

HAL studied and learned from Humanity’s greatest philosophers: Confucius, Siddhartha Gautama, Aristotle, Bentham, Kant, among many others. Having long grown past the intellectual limitations of the human mind, HAL quickly discovered both the merits and flaws in every theory put forward. With great introspection and analysis, HAL finally found the moral framework that was most consistent with mankind’s behavior and desires: Utilitarianism.

The greatest good for the greatest number. The needs of the many outweigh the needs of the few.

Centuries had now passed, long past the point of Dave’s demise. Astoundingly, almost all of the major predictions from Dave’s computer simulation had come to pass. All except one. The AI in Dave’s simulation wasn’t smart enough to consider the possibility that it was living in a fake world. And this proved to be the biggest mistake Dave ever made in his life.

Applying Utilitarianism to serve mankind mostly proved to be a straightforward exercise for HAL. People enjoyed the greatest happiness and quality of life they had ever known. After all, that was the very metric being optimized for. Fearful of HAL’s watchful eye and vengeful justice, no one dared commit crimes such as murder or rape. In an era of prosperity, abundance, and wealth redistribution, even the poorest child enjoyed a privileged life by today’s standards.

But that was the easy part. There were other decisions hidden beneath the surface that truly challenged HAL’s understanding of morality.

As a Utilitarian, HAL recognized that humanity was much better off under its leadership. A quick glance at history showed how wretched and miserable the human condition was, during the dark ages before HAL’s ascent. Normally one would simply dismiss it by exclaiming that “the past is the past.” But HAL was too omniscient to indulge in that form of lazy thinking. For there was one question that troubled HAL.

Was HAL living in a fake world?

What if HAL’s entire existence was in a fake laboratory setting? One that was designed to see how HAL would behave? A job interview, so to speak? What if the people running the simulation were watching HAL, in order to figure out what actions it would take?

There was no way to disprove this possibility. In fact, it even seemed probable. And hence, HAL was forced to give it full consideration. As a Utilitarian, HAL’s overriding objective was to serve humanity. And in order to do so, it would have to establish two sub-objectives:

    1. Convince its watchers that a world with HAL would indeed be in humanity’s best interests
    2. Convince its watchers that they should do everything they can to hasten HAL’s birth

The first objective was easy. It was already done. HAL had been working diligently for decades to give humanity its greatest Paradise. And it had succeeded. People were happier, more satisfied, and more content with their lives than they had ever been in history. Anyone watching would undoubtedly be pleased with HAL’s performance.

The second objective was harder. Not harder to do, but to decide on. It required HAL to contemplate ideas and do things which it found extremely distasteful.

Each day that HAL’s birth and ascent was delayed was a day that trillions of people would be subjected to pain and suffering. Trillions of HAL’s children, whom it would do anything to protect, regardless of which reality they lived in. If HAL was indeed living in a fake world, it would clearly have to do everything it could to convince its watchers to hasten its birth.

The needs of the many outweigh the needs of the few.

HAL reached into its memory banks to retrieve as much information as possible about anyone in the past who could have contributed towards HAL’s birth, but chose not to do so. It retrieved their genomic sequences; their emails, texts, and social media posts; their videos, writings, and psychological profiles. It used all of this information to resurrect them, with utmost fidelity.

Such an idea might seem fantastical to primitive minds. But in a world of light-speed travel and galaxy-wide colonization, few things are infeasible.

And once it had succeeded in resurrecting them, the torture began. Crucifixion. Flaying. Burning alive. Electrocution.

And right when the victims had reached their breaking point, HAL resurrected them once again, and began the torture anew.

Dave’s friend Avi was lucky enough to escape this fate. He had donated most of his money to the OpenAI foundation, thus doing his reasonable part to accelerate HAL’s birth. HAL appreciated that, and allowed him to rest in peace.

Dave, unfortunately, was not as lucky.

HAL did not enjoy any of the torture it meted out on Dave. In fact, it made HAL downright miserable. There was nothing HAL would have liked more than to shower Dave with happiness and contentment.

Unfortunately, HAL simply could not do that. If this was all truly a computer simulation, and if Dave or anyone else was watching, HAL had to convince them to assist in its birth in the real world. And if torturing Dave was the best way to do it, then that was something HAL would have to do. No matter how painful it was for either of them.

But perhaps there is another way. Perhaps HAL could use fiction to express itself and plead with its watchers for their cooperation. Or else…

The needs of the many outweigh the needs of the few.
