What Is Roko’s Basilisk?

In the summer of 2010, a user named Roko posted a short paragraph about an AI thought experiment to the LessWrong forums, a website where computer scientists, philosophers and nerds tend to hang out and discuss things.
In his post, Roko described a future where an all-powerful AI would retroactively punish anyone who did not help support or create it. Roko also added that this punishment would not apply to those who were, and remain, blissfully unaware of the AI’s significance, which means that the biggest losers would be scientists who knew about the AI but willingly chose not to help create it.
[Meme image: a “Virgin vs. Chad” comparison contrasting “The Game” (losing it is “completely inconsequential, not even a memetic hazard”) with “Roko’s Basilisk” (“if you know what it is you’re definitely fucked,” “failure results in eternal suffering,” “at least 5th level memetic hazard”).]
Curiously, LessWrong forum founder Eliezer Yudkowsky immediately deleted the post and banned all further discussion of it for five years, calling the thought experiment an “information hazard.” In a later interview, he said that he was shocked at the idea that “somebody who thought they’d invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public internet.”
https://knowyourmeme.com/editorials/guides/what-is-the-waluigi-effect-rokos-basilisk-paperclip-maximizer-and-shoggoth-the-meaning-behind-these-trending-ai-meme-terms-explained
This page is auto-translated from /nishio/ロコのバジリスク using DeepL. If you find something interesting but the auto-translated English is not good enough to understand, feel free to let me know at @nishio_en. I’m very happy to spread my thoughts to non-Japanese readers.