Global Government And Mass Surveillance May Be Needed To Save Humanity, Expert Says

by Dagny Taggart


This article was written by Dagny Taggart and originally published at The Organic Prepper

A prominent Oxford philosopher who is known for making terrifying predictions about humanity has a new theory about our future, and it isn’t pretty.

Over 15 years ago, Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, made the case that we may all be living in a Matrix-like computer simulation run by another civilization.

Here’s a summary of that theory, explained by Vox:

In an influential paper that laid out the theory, the Oxford philosopher Nick Bostrom showed that at least one of three possibilities is true: 1) All human-like civilizations in the universe go extinct before they develop the technological capacity to create simulated realities; 2) if any civilizations do reach this phase of technological maturity, none of them will bother to run simulations; or 3) advanced civilizations would have the ability to create many, many simulations, and that means there are far more simulated worlds than non-simulated ones. (source)

Will humanity eventually be destroyed by one of its own creations?

If you find the idea of living in a computer simulation that is run by unknown beings troubling, wait until you hear Bostrom’s latest theory.

Last Wednesday, Bostrom took the stage at a TED conference in Vancouver, Canada, to share some of the insights from his latest work, “The Vulnerable World Hypothesis.”

While speaking to the head of the conference, Chris Anderson, Bostrom argued that mass surveillance could be one of the only ways to save humanity – from a technology of our own creation.

His theory starts with a metaphor of humans standing in front of a giant urn filled with balls that represent ideas. There are white balls (beneficial ideas), grey balls (moderately harmful ideas), and black balls (ideas that destroy civilization). The creation of the atomic bomb, for instance, was akin to a grey ball — a dangerous idea that didn’t result in our demise.

Bostrom posits that there may be only one black ball in the urn, but, once it is selected, it cannot be put back. (Humanity would be annihilated, after all.)

According to Bostrom, the only reason that we haven’t selected a black ball yet is because we’ve been “lucky.” (source)

In his paper, Bostrom writes,

If scientific and technological research continues, we will eventually reach it and pull it out. Our civilization has a considerable ability to pick up balls, but no ability to put them back into the urn. We can invent but we cannot un-invent. Our strategy is to hope that there is no black ball.


If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition. (source)

Bostrom believes the only thing that can save humanity is government.

Bostrom has proposed ways to prevent this from happening, and his ideas are horrifyingly dystopian:

The first would require stronger global governance which goes further than the current international system. This would enable states to agree to outlaw the use of the technology quickly enough to avert total catastrophe, because the international community could move faster than it has been able to in the past. Bostrom suggests in his paper that such a government could also retain nuclear weapons to protect against an outbreak or serious breach.

The second system is more dystopian, and would require significantly more surveillance than humans are used to. Bostrom describes a kind of “freedom tag,” fitted to everyone, that transmits encrypted audio and video and spots signs of undesirable behavior. This would be necessary, he argues, for future governance systems to preemptively intervene before a potentially history-altering crime is committed. The paper notes that if every tag cost $140, it would cost less than one percent of global gross domestic product to fit everyone with the tag and potentially avoid a species-ending event. (source)

These tags would feed information to “patriot monitoring stations,” or “freedom centers,” where artificial intelligence would monitor the data, bringing human “freedom officers” into the loop if signs of a black ball are detected.
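As a rough sanity check on the quoted cost figure, here is a back-of-envelope calculation. The population and GDP values are assumptions of ours (approximate 2019-era figures, not taken from the article or from Bostrom’s paper); with them, the one-time cost lands in the neighborhood of one percent of global GDP:

```python
# Back-of-envelope check of the "$140 per tag, under 1% of global GDP" claim.
# Assumed figures (not from the article): world population ~7.7 billion,
# global GDP ~87 trillion USD.
POPULATION = 7.7e9
GDP_USD = 87e12
TAG_COST_USD = 140

total_cost = POPULATION * TAG_COST_USD   # roughly 1.08 trillion USD
share_of_gdp = total_cost / GDP_USD      # roughly 0.012, i.e. about 1%

print(f"Total cost: ${total_cost:,.0f}")
print(f"Share of global GDP: {share_of_gdp:.1%}")
```

Depending on the population and GDP figures used, the result hovers around one percent, which is consistent with the order of magnitude the paper claims.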

How very Orwellian.

Being monitored by artificial intelligence is a horrifying idea.

The idea of artificial intelligence monitoring human activity is particularly alarming, considering that we already know AI can develop prejudice and hate without our input and that robots have no sense of humor and might kill us over a joke. Many experts believe that AI will eventually outsmart humans, and the ultimate outcome will be the end of humanity.

Is having robot overlords a good idea, even if they might prevent someone from selecting a black ball? We already have mass surveillance, and global governance seems to be on the way as well.

Bostrom acknowledged that the scenario could go horribly wrong, but he thinks the ends might justify the means:

Obviously, there are huge downsides and indeed massive risks to mass surveillance and global governance.

On an individual level, we seem to be kind of doomed anyway.

I’m just pointing out that if we are lucky, the world could be such that these would be the only way you could survive a black ball. (source)

For those who remain skeptical, Bostrom advises weighing the pros and cons:

A threshold short of human extinction or existential catastrophe would appear sufficient. For instance, even those who are highly suspicious of government surveillance would presumably favour a large increase in such surveillance if it were truly necessary to prevent occasional region-wide destruction. Similarly, individuals who value living in a sovereign state may reasonably prefer to live under a world government given the assumption that the alternative would entail something as terrible as a nuclear holocaust. (source)

What do you think?

If you had to choose between the kind of surveillance and global government Bostrom proposes or eventual annihilation by AI, which would you select? Do you think the possibility of a black ball being selected is a genuine threat? If so, how soon do you think it will happen? Please share your thoughts in the comments.


EDITOR’S NOTE: I could write an entire article on the many reasons why this “expert’s” ideas are horribly flawed (and maybe I will). One thing that stands out, though, is the Globalist (and luciferian) obsession with the idea of the “simulated universe.” They are extremely motivated to convince people that what they intuitively know about life and existence is wrong, and that everything we do is meaningless. This is a perversion of the classic Gnostic idea of pulling away “the veil.” If everything we know and accomplish is actually a matter of perception or simulation, then everything becomes relative, including morality.

Globalists also love to push the notion of a cataclysmic AI event, an event they are actively seeking to create, and then suggest that global government is the only solution to stop it from happening. Bostrom didn’t come up with this concept – the UN, the IMF, and other globalist institutions have been peddling it for the past few years.

My question for Bostrom is this: If the world is nothing more than a simulation, then why does it matter if we eventually select a “black ball” idea and end it? And, if it doesn’t matter, then why would we need global government to save us from it? Elitist propaganda is so backward that they do not seem to notice the inherent and obvious contradictions…

Brandon Smith, Founder of