The AI Reckoning: Humanity’s Future in the Hands of the Few

In Episode 12 of The Rise of King Asilas (The Manifest Destiny), the central question isn’t about power; it’s about sacrifice. When survival hangs by a thread, do you abandon your moral code to secure victory? Or does crossing that line mean you’ve already lost? It was a moment that resembled Caesar crossing the Rubicon. There was no going back for King Asilas. The window was closing for him to make the move that would change the world forever. And he wasn’t going to hesitate.

Now imagine that dilemma… not in a fictional kingdom, but in Silicon Valley boardrooms, classified Pentagon briefings, and quiet diplomatic backchannels between Washington and Beijing. Because today, the same choice is unfolding in the AI arms race. And the terrifying truth? The outcome may not be decided by nations, but by a handful of executives, intelligence officials, and unelected architects of the digital future. All of them facing the same closing window, the same urgency to make their move and change the trajectory of the human race. The stakes are just as high as for the fictional king blowing up Canada’s government. Even higher than what Caesar himself faced when he led his troops across that river and changed the direction of Rome (and essentially the world) forever.

Publicly, artificial intelligence is marketed as productivity tools, chatbots, copilots, and assistants. Privately, insiders speak of something else: strategic dominance. And with those strategies come contingency plans and covert maneuvers to offset blindsided attacks and stay several steps ahead of adversaries. Sound familiar? It should. This is exactly what King Asilas does throughout the series. Behind polished keynotes and quarterly earnings calls, a quiet consolidation of influence has taken shape. A small cluster of technology giants, defense contractors, and national security agencies now sit at the helm of systems that can do everything humans can do, a million times better and faster. This isn’t innovation. It’s leverage. And leverage, in the wrong hands, becomes control.

The Moral Fracture: Speed Over Safeguards

King Asilas’s philosophy, that you must overcome your moral code to survive, now echoes through strategic doctrine. Policy circles increasingly frame AI as a zero-sum contest. Whoever builds the most advanced systems first will set the rules for the next century. Lag behind, and you don’t just lose market share; you lose sovereignty. Organizations like the Council on Foreign Relations openly describe AI as a defining arena of geopolitical rivalry, particularly between the United States and China. The framing is clear: supremacy in AI may determine military dominance, economic command, and global influence for generations.

But here’s the unspoken part.

When leaders believe they are racing against extinction-level disadvantage, ethical hesitation begins to look like weakness. In fact, some AI proponents are believed to be on a suicide mission, risking the entire human race as collateral damage should their gamble turn catastrophic. And many think tanks believe the race for AI supremacy will lead to global collapse and perhaps the end of humankind as we know it (if not blatant extinction). Is this hyperbole? Mere exaggeration? Likely not. Think about King Asilas and his mission: to save humanity. What was the cost? The destruction of global government systems. How was his consolidation of power achieved? His weapons were far superior to those of other countries. And the global system collapsed, nation by nation, until all that was left was King Asilas. And where did he lead the people? To their ultimate fate in Armageddon.

The simple version of the story is “systems were deployed faster than they could be understood.” That was King Asilas’s advantage. By the time the world could react to his weapons, it was already too late. This is the exact same scenario we face (collectively) as a species in the face of this wicked race for AI supremacy. Once the winner shows himself, it will spell doom for all of us on this planet.

The Existential Threshold

The AI Safety Summit at Bletchley Park in 2023 produced the Bletchley Declaration, signed by 28 nations, including the U.S. and China, acknowledging the need for coordinated safeguards. But declarations are not enforcement. Agreements are not guarantees. History shows that when transformative power becomes available, competitive instinct often overrides restraint. Nuclear deterrence created an uneasy balance, but AI differs in one critical way:

It can replicate.
It can scale.
It can evolve.

And unlike uranium, its raw material is data, which is something no nation truly controls. The communities of this world have become so dependent on media via the Internet that the very idea of losing access to those platforms has real physical consequences. For example, when YouTube recently ran into data issues and the site went down, the hashtag #YouTubedown trended like wildfire within minutes. It was pandemonium within the hour. Utter panic set in. Anxiety spread. Was this the end? Whether or not the event was choreographed is beside the point; the outcome was troubling. Are people that dependent on media platforms? Absolutely. What would happen if all of them shut down? Honestly, the beginning of the end. Too much of people’s identities are woven into the cyber fabric of social media; eliminating those platforms would erase people’s minds, their core reason for existence, and chaos would ultimately ensue. It’s kind of like “releasing the fog” and the effects of the Trishul, in some sense. Masses of people would blame their leaders and oust them from their state houses. Blood would flood the streets.

And if you think someone holding AI supremacy couldn’t do this with the touch of a button, you are sadly mistaken.

The Concentration Problem

Here’s the part rarely discussed openly: The most advanced AI systems are not evenly distributed across humanity. They are concentrated in a small circle of corporations and government partnerships. Their control is largely overseen by a few executive teams, intelligence committees, and, let’s throw in for good measure, a few classified programs. Decisions about alignment, deployment, safety thresholds, and access are made by individuals most citizens will never meet, and certainly have never voted for. Yet the decisions made by these select few could determine:

  • Whether labor markets destabilize overnight
  • Whether autonomous weapons become normalized
  • Whether misinformation ecosystems become indistinguishable from reality
  • Whether humanity retains agency over its own technological creations

This is not a democratic process. It is a technocratic inflection point. And to be honest, it would never move forward with any speed if held to the standards of a democratic process. The AI arms race is on, and there’s no time to ask the public for their opinions and votes. Such delays would hinder progress and hand adversaries the advantage. Speed was something King Asilas understood very well in his assault on his enemies. “Waste no time” was something the king often uttered throughout the series. It wasn’t filler. It wasn’t an irrelevant phrase. It was repeated because time means advantage. The longer it takes you to make a significant move, the more advantage you surrender to your adversaries. This is the mindset of the curators of the AI arms race. It resembles (horrifically) that of the fabled king. And there’s no doubt the outcome would be the same for humanity.

Gabriel’s Warning

Throughout the King Asilas series, Gabriel represents moral resistance. Translated into today’s context, that voice exists among researchers, ethicists, and policy advocates arguing that dominance without guardrails is not strength; it’s systemic fragility. Unchecked AI doesn’t just threaten rivals. It threatens everyone. These threats cannot be brushed off as standard corporate banter, or as propaganda meant to instill fear in order to assert more control over the masses. A catastrophic failure in one system can ripple globally. A misaligned autonomous defense protocol could escalate conflict unintentionally. A hyper-optimized economic AI could hollow out entire sectors before safeguards respond. And that spells doom. For everyone.

Simply put (in Gabriel’s voice): Victory without virtue becomes self-sabotage. Listening to these AI pushers, one hears something eerily similar to a mentally ill person on a mission to incinerate an entire city just to feel warmth. Compassion for, or reluctance to confront, that instability is signing one’s own death warrant. Avoiding confrontation only ensures the madness will continue with impunity. The masses are as much at fault as sheep following the shepherd over a cliff. The warnings are blaring. The picture has been painted. The threat is real. Yet the world does nothing. And there is no virtue in doing nothing.

The Real Question

The public debate frames AI as progress. The strategic debate frames AI as power. But the deeper question (the one whispered in policy briefings) is this: Are we building tools to empower humanity… or constructing a cognitive infrastructure so powerful that control inevitably consolidates in the hands of the few who built it? Or worse, consider the possibility that it ends up in the hands of one man.

History’s empires were limited by geography. Digital empires are limited only by bandwidth. And for the first time, humanity may be approaching a threshold where decision-making power over information, defense, and economics converges into systems overseen by a tiny group of actors operating beyond meaningful public scrutiny. It would be a “High Council” of sorts. But there will always be a head of that council. A head. And on that head, you can bet, will sit a gold crown.

That is the true dilemma of Episode 12 playing out in real time. Planting bombs inside of Spartans (like AI systems plotting in the dark corners of the Internet), sending them into command centers disguised as someone else (like Trojan horse viruses and malware), destroying everyone in proximity, ushering in a new dominant force, a king. Then, an unfettered absolute authority swoops in to “save the day” and restore order. We know how this ended for the Canadians in Episode 12. And we also know how it ultimately ended for the rest of the world when King Asilas finally revealed what his secret weapon was. The world had to react to something completely new, something they had no answer for. That scramble for defense took precedence, and world leaders could not focus on the man, King Asilas, himself. Who was this man? How did he come to wield so much power? And when they stopped to try to reason with him, it was already too late. Their only choice by that time was to kneel before their new ruler, whether they liked it or not. The loss of privacy, then the loss of sovereignty. They had lost the game before they sat down at the chess board. The absolute authority of King Asilas was a consolidation of access. They couldn’t even make a move, at least not without permission. Think about that.

But the road to societal collapse (although brutal and bloody) took more than mere technological (and military) superiority. There were other forces at work, if you recall the characters of the series. Reptilians, cannibals, and occult entities are infused throughout the entire storyline. If the Epstein files have shown us anything in these recent weeks, it’s that The Rise of King Asilas feels less like fiction in 2026.
