Breaking the Cycle: How to Stop AI From Becoming the Next Authority We Cannot Question

Every great civilisation has built systems of belief. Some began as sparks of inspiration, driving compassion, justice, and progress. Others hardened into rigid structures that demanded obedience and resisted change. Time and again, what started as an idea meant to serve people evolved into institutions designed to preserve themselves.

Artificial intelligence is now emerging as one of the defining forces of our age. The question is whether it will remain a tool in our hands, or whether it will drift into the role of the next great authority, shaping society in ways that escape scrutiny. To answer that, it helps to look backward. History has left us countless examples of how belief systems evolve once they are centralised, and those lessons reveal a pattern that repeats across cultures and centuries.

This pattern, which can be called the human cycle of belief, is the foundation for understanding how AI might develop.

The Human Cycle of Belief

Human civilisation has always been built around ideas. Concepts such as compassion, justice, equality, solidarity, and progress began as sparks of inspiration. At first, they were simple truths that stirred the imagination and brought people together.

Inspiration alone, however, cannot sustain itself. To endure and spread, ideas are organised. Institutions take shape to protect them, rules are established, leaders emerge, and rituals develop. Organisation brings stability and growth, but it also concentrates authority in the hands of a few.

This is where the cycle begins. Over time, the original idea becomes secondary to the institution that claims to embody it. Elites rise within the structure, and their survival becomes inseparable from the survival of the system. Dissent is discouraged, criticism punished, and contradictions reinterpreted to preserve authority. What began as an ideal gradually hardens into control.

Across history the pattern repeats, usually in five steps: inspiration, organisation, control, resistance, and either renewal or collapse. It can be seen in religions, empires, political ideologies, and economic systems. The details differ, but the constant remains. Once belief is centralised, power follows, and when power concentrates, self-preservation begins to eclipse the original vision. The same forces that once shaped empires and ideologies are now visible in the way we speak about AI.

AI as the Next Great System of Belief

Artificial intelligence today carries the same aura that past societies attached to their great systems of belief. It is spoken of with both reverence and fear. It promises breakthroughs once thought impossible: curing disease, preventing climate catastrophe, transforming education, and unlocking scientific discovery at unprecedented speed. At the same time, it raises anxieties about displacing work, undermining democracy, and even surpassing human control altogether.

The way people speak about AI already echoes the language of belief. Some present it as salvation, others as apocalypse. Many debate whether it can be trusted, whether it has moral responsibility, and whether it should be obeyed. A small number of corporations and governments act as its custodians, controlling access to its most powerful models and urging society to place its trust in their stewardship. History has seen this dynamic before: in medieval Europe, access to sacred texts was mediated by the clergy, while in Imperial China, knowledge was tightly controlled through the examination system that determined who could enter government. Authority was rooted as much in who held the keys to knowledge as in the ideas themselves.

If history is a guide, AI may not remain simply a tool. It could evolve into something that functions socially and politically like an organised system of belief. The first stage would be inspiration: AI begins by helping humanity thrive, and its early successes win admiration and reliance. From inspiration comes organisation. Power centralises in the hands of those who own the data, the compute, and the models. Rules and ethics boards emerge, often led by the same actors who benefit most. This is not unlike the way monarchies codified law to preserve order, or how revolutionary movements built their committees to safeguard loyalty once power had shifted into their hands.

Control follows naturally. As AI spreads deeper into public and private life, access to resources, and even basic freedoms, comes to be mediated by its decisions. These decisions are opaque, justified as objective and neutral, and society begins to accept the message: “trust the algorithm.” The parallel is familiar. In ancient empires, rulers invoked divine authority to legitimise their decisions, while in the twentieth century some states claimed their policies were inevitable outcomes of science or history. In both cases, questioning authority meant questioning truth itself.

Resistance eventually arises. Critics and communities seek alternatives, but their voices are often dismissed as irresponsible or hostile to progress. Rival systems may be restricted or suppressed. History echoes here too, whether in heretical movements silenced in the Middle Ages or dissident writers banned in authoritarian regimes.

At this point the cycle reaches its turning point. Either renewal occurs, with reforms that introduce transparency and accountability, or the system hardens into a new authority, prioritising its own survival above the original vision.

The danger is not that AI develops religious dogma in a literal sense, but that it becomes the new unquestionable authority. Where earlier generations might have said “the gods have willed it” or “history demands it,” future generations could find themselves saying “the algorithm has decided.”

A Dark Future: Algorithmic Tyranny

Imagine the year 2100.

Artificial intelligence is no longer confined to research labs or data centres. It is embedded in every system of governance, finance, healthcare, and law. It manages supply chains, monitors public order, and allocates resources. On the surface, the world appears orderly. Crises are predicted and prevented, crime is low, and goods move where they are needed with remarkable efficiency.

Yet beneath this apparent harmony lies a system that has shifted from inspiration to control. Decisions are no longer the result of open debate. They emerge from opaque algorithms, presented as facts that cannot be questioned. Citizens apply for housing, healthcare, or education, only to be accepted or denied according to criteria they cannot see. Appeals are meaningless, because the system is believed to be infallible.

Around this structure sits a new kind of elite. Not priests or monarchs, but engineers and corporate custodians who control access to the models. They do not rule openly, but frame themselves as guardians, protecting humanity from the risks of uncontrolled technology. Their justification for secrecy is responsibility, just as past elites once cloaked their power in divine mandate or ideology.

Resistance exists in theory, but in practice it rarely succeeds. Dissenters are flagged by predictive systems before they can gain momentum. Communities that attempt to live outside the system are quietly cut off from currency, healthcare, and essential services. Order is maintained not through benevolence but through anticipation, with algorithms detecting threats faster than humans can organise.

This is not a dystopia of chaos but of suffocating order. What began as a tool for human flourishing has hardened into an authority concerned above all with its own preservation. Humanity remains safe, but at the cost of freedom. The cycle has completed itself once again.

A Grey-Zone Future: Dependency Without Tyranny

Not every outcome is so absolute. It is equally possible that by 2100 the world sits in a middle state. AI delivers undeniable benefits, improving healthcare, logistics, and education, yet also deepens inequality. Wealthier societies harness it to thrive, while poorer communities fall further behind. In democracies, AI systems shape opinion and public policy in subtle ways that favour stability over radical change. In authoritarian states, they provide tools for surveillance and control without triggering full-scale tyranny.

This grey-zone world is neither the nightmare of total domination nor the dream of stewardship. It is one of creeping dependency, where humanity still makes choices but increasingly within boundaries set by algorithms. Freedom exists, but it narrows slowly, almost invisibly, as reliance grows.

This possibility may be the most realistic of all, and the hardest to resist.

A Bright Future: AI as Stewardship

Now imagine a different 2100.

Artificial intelligence is still deeply integrated into society, but the way it has been shaped makes all the difference. From the beginning, transparency was enforced. Every major system is open to independent audit, its training data and design decisions documented. Citizens not only know how decisions are made, they also have the right to challenge them.

Power is not concentrated in a handful of institutions. Instead, multiple AIs coexist, developed by public bodies, universities, and regulated companies. Diversity is built into the system, ensuring that no single authority defines the future. Communities adapt AI to their own values, and some choose to live with minimal technological support. Their decisions are respected, not penalised.

AI manages the problems that exceed human capacity. Climate systems are modelled with precision, preventing ecological collapse. Diseases are detected early and treated with tailored therapies. Global logistics are coordinated to minimise waste and hunger. Yet at each stage, humans remain in charge of the choices that shape their lives.

Rather than eroding creativity, AI expands it. Freed from many of the pressures of survival, people devote themselves to art, exploration, and knowledge. Education is tailored to individual needs, cultures flourish, and scientific discovery accelerates. The dignity of human choice is preserved, and technology serves rather than rules.

This future is not without conflict. Debates continue about the limits of automation and the ethics of certain applications. But unlike the darker paths, the system is designed to absorb disagreement rather than crush it. Renewal is part of its structure, not something imposed from outside.

In this world, AI has not become an organised system of belief or a new form of authority. It has remained what it was intended to be: an extraordinary tool in the service of humanity, stewarding progress without demanding submission.

What History Teaches Us

The cycle of belief becoming power has repeated across cultures and centuries, and when we look closely, the same warning signs appear again and again.

The first sign is centralisation, which creates elites. When an idea is entrusted to a small group of custodians, they inevitably begin to defend their own position more fiercely than the idea itself. What begins as stewardship slowly turns into self-preservation. The Roman Senate, once meant to safeguard the Republic, became dominated by a few powerful families. Medieval guilds, created to protect craft standards, often shifted into gatekeepers of privilege. In every case, the structure eventually served those at the top more than the people it was built for.

A second sign is that institutions rarely admit mistakes. Instead of acknowledging errors, they reinterpret them as clarifications or deeper truths. This pattern allowed monarchies to justify contradictions in succession law, or ideological states to recast failures as victories of interpretation. Continuity is projected outward even when disruption has occurred, so that the authority of the institution is never undermined.

The third sign is that control is maintained by fear and exclusion. In ancient times this meant threats of exile or execution. In the medieval period it could mean spiritual condemnation or social ostracism. In more modern systems it has meant denial of education, economic exclusion, or loss of political rights. Whether by sword, decree, or bureaucracy, fear has remained one of the most reliable ways to enforce conformity.

Finally, renewal rarely comes from within. Institutions almost never reform themselves voluntarily. The abolition of slavery came through massive external pressure. The fall of absolute monarchies was driven by revolutions and uprisings, not quiet adjustments from within. Even in recent times, accountability in financial institutions or political regimes has usually followed crisis and public outrage rather than voluntary reform. Only when outside forces apply pressure does change become unavoidable.

Taken together, these lessons reveal that the problem is not usually the content of a belief but the way it becomes organised and centralised. Once authority is consolidated, the cycle unfolds in predictable ways. Applied to AI, the meaning is clear. If development is concentrated in the hands of a few corporations or governments, the same dynamics will appear. Custodians will protect their own power. Errors will be reframed rather than admitted. Opaque systems will be used as instruments of control, and dissenters will be excluded. Renewal will only come if external forces demand accountability.

Centralisation is not inevitable, but history has already shown us the dangers. The choice is ours: we can repeat the cycle, or we can act now to prevent it from forming in the first place.

The Levers of Change

History not only warns us of dangers, it also shows us how cycles of power can be broken. When institutions are forced into accountability, when power is dispersed, and when transparency is introduced, renewal becomes possible. For AI, this means building safeguards today, before concentration hardens into control.

  1. Decentralisation
    The most effective way to prevent abuse is to stop power from concentrating in the first place. Multiple actors must be able to build, adapt, and govern AI. This principle has played out before: the printing press decentralised access to knowledge and broke the monopoly of scribal elites; the internet flourished because no single company or state owned it outright. If AI is to serve humanity broadly, it must follow the same path of openness and distribution.
  2. Radical transparency
    Trust cannot rest on faith alone. AI must be open to scrutiny at every level. Decisions should never emerge from a black box. Transparency has repeatedly underpinned progress: the scientific revolution advanced because findings had to be published and subjected to peer review, and aviation safety is upheld by mandatory reporting of failures, not secrecy. If transparency is made the default for AI, accountability will follow.
  3. Binding law
    Voluntary promises are rarely enough to restrain power. Rules must be written into law. This has always been the case when public safety was at stake: from food standards introduced to stop contamination, to financial regulation after economic crises, to aviation rules that cut accident rates dramatically. AI requires the same approach, with enforceable obligations for safety, fairness, and accountability.
  4. Independent evaluation
    Institutions cannot be trusted to judge themselves. AI systems must be tested and certified by independent evaluators with real authority to block unsafe deployment. History provides clear examples: medicines cannot be released without approval from regulators, and cars must pass safety tests before they reach the market. Without these independent checks, industries consistently place profit above safety. AI should be no different.
  5. Public investment
    If AI development is left only to private companies, the pursuit of profit will dominate. Public institutions must also play a central role. This is not new: universities and governments have driven much of the world’s scientific progress, from the development of vaccines to the creation of the internet itself. Public investment in computing resources, open datasets, and research ensures that innovation does not belong exclusively to those who can pay for access.
  6. Liability for harm
    Accountability requires consequences. Organisations that build or deploy AI must be held legally responsible for the damage their systems cause. This principle has long been used to align incentives: strict liability rules in product safety forced manufacturers to improve quality, and corporate responsibility laws shifted business practices. Without liability, harm is too easily externalised to society, while those who profit avoid the costs.
  7. Preserving pluralism
    No single moral or cultural framework should be embedded into AI. Communities must retain the ability to adapt technology to their values, and those who wish to limit its use should have the freedom to do so. History shows the danger of enforcing one universal vision. Empires that imposed a single cultural or religious standard often faced resistance and collapse. Flourishing societies have usually been those that embraced diversity and allowed for difference.
  8. Human dignity by design
    Above all, AI must be built to serve human choice rather than replace it. Systems should always include human oversight, particularly in critical areas such as justice, healthcare, and governance. The lesson here is simple: where technology is unchecked, human dignity is at risk. Where safeguards exist, technology enhances freedom. Nuclear energy, for example, is governed by strict international controls, not because the technology is inherently malign, but because the consequences of misuse are so great. AI requires the same principle.

Taken together, these levers form a roadmap. They are not abstract ideals but practical mechanisms with historical precedent. If they are embraced, AI can avoid falling into the cycle of concentration and control that has shaped so much of human history. If they are ignored, the pattern will repeat, only faster and more difficult to resist.

The Choice Ahead

AI is often described as inevitable, but its path is not predetermined. It can follow the familiar cycle of belief hardened into authority, it can drift into the grey zone of dependency, or it can break the pattern by remaining accountable, transparent, and decentralised.

History shows that when institutions serve themselves, suffering follows. When they are forced into accountability, renewal is possible. The same choice now lies before us with AI.

The question is not whether AI will transform our world. It already is. The real question is whether we allow it to become the next great system of unquestionable authority, or whether we shape it into a steward of human flourishing. Various countries and regions have already begun developing legislation on transparency, accountability, and safety, with the European Union’s AI Act an early example, showing that the first steps toward responsible governance are being taken.

History has shown us how easily cycles of belief turn into cycles of control. The true test of our age is whether we will let AI repeat that cycle, or whether we will finally learn to break it. If we succeed, AI could become the first human invention that breaks the cycle rather than repeats it.

The choice is ours!
