Axis of Logic
Finding Clarity in the 21st Century Mediaplex

Hawaii’s and Alaska’s Twitter Errors Turned Twitter Terrors Are Only the Tip of the Iceberg
By Dallas Darling
Submitted by Author
Thursday, Feb 1, 2018

Just as the future can’t be created based on blind optimism or paralyzing fear, neither can this present age of man-machine convergence. Consequently, when Hawaii sent out a ballistic missile alert on Twitter warning of an impending attack, causing 40 minutes of pure terror for thousands of residents, it was only the tip of the iceberg: a sign that we have reached a crucial juncture in the fusion between humans and a digitalized world.

Alaska, meanwhile, experienced its own technocratic reign of terror. Right after a jolting earthquake in the middle of the night, vibrating and glowing cellphones suddenly read: “Tsunami danger on the coast. Go to high ground or move inland. Listen to local news.” As can be imagined, people were petrified as they clamored to call 911 to ask whether it was a mistake or whether they really needed to head inland.

Twitter’s “Virtual” Terror

Fortunately, the terrorizing alerts were triggered by false alarms. Hawaii’s Governor David Ige, however, knew within two minutes that it was a false alarm. Still, he couldn’t send out a correction right away because he had forgotten his password. His apology for having abdicated his human sovereignty to the login and password of a Twitter account connected to the Hawaii Emergency Management Agency, of course, fell on deaf ears.

As for Alaska, the message made sense in coastal communities that had prepared for the possibility of tsunamis. Unfortunately, for 30 minutes it left out a tiny piece of information that should’ve followed the warning: “There is no TSUNAMI Warning for the Anchorage area and Vicinity. We are *outside* the danger zone.” Again, people weren’t very happy with another end-of-the-world scenario, even if it was only virtual.

My Cyborgs Bigger Than Your Cyborgs

It’s becoming clear that when humans interact with their technological smart devices, there are serious considerations to be made. Although it might be true that humankind is moving at warp speed towards a world that may resemble bliss, the future could also usher in a dystopian society orchestrated and overseen by supercomputers, networked bots, super-intelligent software agents, and, of course, human error.

To be sure, imagine a future President Donald Trump threatening Kim Jong-un with: “My cyborgs and robots are bigger and more numerous than yours!” Better yet, imagine Artificial Intelligence and supercomputers, not to mention glitches or viruses, taking on a life of their own. Considering how the world’s military machines, nuclear capabilities, and infrastructures are connected to the Internet of Inhuman Things, anything is possible.

Just Because We Can, Should We?
And then there’s the haunting knowledge of Moore’s Law. Depending on when we start counting, that kind of relentless doubling means overall technological progress will leap from today’s pivot point of four to 128 in just five more doublings (4, 8, 16, 32, 64, 128). Just as frightening is how the scope of our ethics will limp, yes limp, beside the exponential growth of technologies. Toss in our limited cognitive abilities and human errors, and it doesn’t take long to see the approach of a Perfect Technological Blunder.

In “Technology Versus Humanity: The Coming Clash Between Man and Machine,” Gerd Leonhard argues that we’re already at this crucial juncture. He therefore believes we should act with greater foresight, with a decidedly more holistic view, and with much stronger stewardship. He also says that we can no longer adopt a wait-and-see attitude if we want to remain in control of our destiny and the developments that could shape it.

Asking Important Questions of Technological Determinism
Basically, it’s not enough to leave decisions to the world’s most powerful political and military organizations, including their venture capitalists, corporate technologists, and even scientific psychologists. The fundamental challenge will indeed be whether we want to depend on a single political leader, military general, corporate CEO, or a governor’s password, let alone solely on technology that knows no ethics, norms, or beliefs.

In the face of these false, or someday real, alarms brought on by the fusion of human and technological breakthroughs, we’d also better start asking serious questions followed by even more serious regulations. For instance, do our current human-machine technologies and alert systems have the potential to violate the human rights of anyone involved? Do they have numerous inbuilt safeguards? And how mentally competent are the controllers?

Keeping a Lid on Pandora’s Box
Moreover, do they leave our thinking to software and algorithms because it’s just so much more convenient and fast, or do they cause a loss of personal control since there’s no way of knowing whether the AI’s anticipation was correct or not? We should also be concerned if the design of technology is meant to mend, fix, upgrade, or even eradicate what makes us human, rather than to respect, and protect, what makes us human from threats like nuclear war.

Fortunately, there’s still time to control the Pandora’s Box of human errors fused with the awesome powers of technology. We haven’t quite reached the stage where automated supercomputers conduct business as usual, including the decision to launch a pre-emptive nuclear strike due to a glitch or false alarm. (Unless, that is, the preconditions of a virtual nuclear war have already socially engineered people for a real one.)

When Science Fiction Becomes Science Fact
We may be close, however. Indeed, low-cost, ubiquitous digital technologies have made it possible not only for us to outsource our thinking, our decisions, and our memories to ever-cheaper mobile devices, but also for a hacker to hijack a system that may control the destiny of millions. Nikola Tesla’s warning that we may live to see man-made horrors beyond our comprehension may already be coming true.

As technology’s power increases exponentially, a fundamental precautionary principle only makes sense: hold those who create things with potentially catastrophic consequences accountable, and make sure they do not proceed until they have proven that any unintended consequences can indeed be controlled. The same goes for political and military leaders and their alert systems, including governors like David Ige.

Otherwise, given the undemocratic nature of technology and our leaders’ teleological evolutionary advancement, the exact opposite may happen next time.


Dallas Darling is the author of Politics 501: An A-Z Reading on Conscientious Political Thought and Action, Some Nations Above God: 52 Weekly Reflections On Modern-Day Imperialism, Militarism, And Consumerism in the Context of John’s Apocalyptic Vision, and The Other Side Of Christianity: Reflections on Faith, Politics, Spirituality, History, and Peace. He is a correspondent for www.WN.com. You can read more of Dallas’ writings at www.beverlydarling.com and www.WN.com//dallasdarling.