Navigating Human Disempowerment

Ibn Khaldun's Insights On The Nature of Groups

18 July 2024

This essay was written for BlueDot Impact’s AI Governance course. I was part of the April 2024 cohort. The title of this essay was generated by GPT-4o and edited by Tariq Ali.

Dr. Hartley’s Paradox

(title of section generated by GPT-4o, content originally written by Tariq Ali and edited by GPT-4o)

Dr. Evelyn Hartley was a specialist in AI Safety and a computer science professor at Stanford University. She genuinely loved humanity, claiming that it possesses certain virtues that no other entity - biological or mechanical - has. As a result, she strongly believed that humanity should retain control over the planet Earth. Humanity thrived and prospered because the species controlled Earth and exploited its natural resources. Thus, in her view, humanity’s high quality of living depends on humans’ continued dominance over Earth.

But humans are building capable machines and automating key tasks. As a result, humanity’s control over Earth is diminishing. In the future, humanity may exercise de jure power, but de facto power will fall into the hands of mechanical entities. For example, “The Future of Strategic Measurement: Enhancing KPIs With AI” discussed how several corporations are using AI to generate, refine, and revise Key Performance Indicators (KPIs), implicitly managing the affairs of employees.

Dr. Hartley believed that if humanity were ever to lose control over Earth, it would no longer be able to control its own destiny. It would be left at the mercy of forces it cannot control. While it is possible that these forces could be hostile towards humanity, it is also possible that they would be benevolent and improve humanity’s welfare. But even this latter scenario would still be unsettling to Dr. Hartley, as humans would become little more than pets.

This “loss of human control” has practical impacts as well, as mentioned in the paper “Societal Adaptations to Advanced AI”. According to this paper, automation may “sometimes [make] much worse decisions than would be made by humans, even if they seem better on average.” For example, algorithms that focus on profits and market share might lead a business to take unethical actions (which may result in fines and lawsuits) or sacrifice the business’s long-term sustainability (which may result in premature bankruptcy). The business may end up suffering horribly, even if, on paper, the algorithms “helped” the business prosper in the short term. Even worse, a dependence on automation can lead to “human enfeeblement”, preventing humans from intervening in situations where automation makes terrible decisions. Yet, since a cost-benefit analysis can still favor automation, humans will tolerate these terrible decisions, seeing them as a necessary trade-off.

We can say that Dr. Hartley was worried about human disempowerment, defined here as “the loss of human control”1.

In theory, human disempowerment is a relatively easy problem to solve - don’t build and use entities that strip away human control (e.g., AI). In practice, most people will reject this solution. Humans rely on technological progress to solve multiple problems - including problems caused by previous waves of technological progress. A halt to technological progress would successfully solve “human disempowerment”, but it would force humanity to find other solutions to its own problems.

Rather than support a technological pause, Dr. Hartley thought carefully about her fears of “human disempowerment”. She originally thought that even a minor decrease in “human control” would be dangerous, jeopardizing human self-determination. But humans regularly delegate away their control over Earth without suffering existential angst. A Roomba can technically cause a “loss of human control”, yet Dr. Hartley did not have nightmares about vacuum cleaners. The reason is that this “loss of human control” is limited and revocable - humans can reassert their control when necessary. Dr. Hartley concluded that human disempowerment is an issue to be managed, not averted. “Societal Adaptations” itself recommended the following policies for managing this disempowerment:

But who should execute these policies? Dr. Hartley did not trust corporations to self-regulate, due to competitive pressures. If some corporations are lax in their self-regulation, they may end up triumphing over corporations that are stricter. Thus, all corporations have an incentive to be lax in their self-regulation, fueling a “capabilities race” that will lead to uncontrolled, unrestrained “human disempowerment”.

Dr. Hartley wanted an external regulatory force - like a government agency. This agency would exercise control over the affairs of corporations. People would be compelled to implement these policies by the agency’s dictates. Human disempowerment would be successfully managed.

Building this agency would take time and effort. She would need bureaucrats to staff the agency, activists to advocate on her behalf, and government officials to give her the power and authority to deal with human disempowerment. Yet Dr. Hartley was an optimist. She had hope for the future.


Five years passed after Dr. Hartley became the first leader of the Agency for Technological Oversight (ATO). Yet running ATO turned out to be far more complex than creating it.

ATO struggled to keep up with changing social norms and technological progress. Government officials expressed disappointment at Dr. Hartley’s failure to produce short-term results that they could use in their political campaigns. Corporate officials claimed that Dr. Hartley’s agency was too slow and bureaucratic, pointing out that they could always move to friendlier jurisdictions. Activists who had supported Dr. Hartley turned against her, demanding more regulations, more aggregation of public feedback from all stakeholders, and more enforcement actions. Meanwhile, the general public complained about waste within the agency itself, and did not want its tax dollars to be used inefficiently.

Dr. Hartley concluded that human labor would not be enough to deal with ATO’s many issues. Automation would be necessary. She streamlined internal processes. Machines assumed leadership roles. Many human bureaucrats, now marginalized in the workforce, grumbled that their generous compensation packages were being redirected to machine upkeep.

The transition was abrupt, rapid, and essential. Dr. Hartley needed to manage human disempowerment, and only machines provided the efficiency and scale necessary for this management. Corporations, after all, were relying on automation to improve their affairs. If Dr. Hartley restrained herself and relied only on human labor, she could not meaningfully regulate these corporations. Her agency would thus be unable to fulfill its duty to the public.

ATO must participate in this “capabilities race”.

The ends justify the means.

Introduction

Human disempowerment is an interesting issue because it is so easy to oppose it rhetorically while accelerating it in practice. Things can easily spiral out of control, with power dynamics shifting to favor mechanical entities. We can propose solutions that look good on paper, only for their implementation details to prove lacking. Humans may say all the right words about human disempowerment, but their actions speak otherwise, as we can see in the story of Dr. Hartley.

Why does this happen? Why is human disempowerment so hard to manage?

To understand this, we may need to look at the insights of Ibn Khaldun, a prominent 14th-century Muslim historian, sociologist, and judge who studied human societies. He is best known for his sociological work, The Muqaddimah (also known as the Prolegomena). His ideas about the cycle of empires, Asabiyyah (“group feeling”), and power dynamics have influenced other scholars within the social sciences. While I disagree with Ibn Khaldun on many issues, his discussions of power dynamics influenced my thinking tremendously. By applying his ideas to the modern world, we can better understand human disempowerment.

Origins of the Group

Groups are collections of intelligent entities that collaborate for some common purpose. In the past, the only groups that mattered were human societies. Today, though, as machines acquire more capabilities, what were once purely “human societies” are morphing into “human-machine societies”. That being said, insights into human societies can help us better understand these newer groups as well. We cannot understand the nature of human disempowerment until we understand power dynamics, and power dynamics only make sense in a framework where power can be exercised - that is, within the groups themselves.

According to Ibn Khaldun, humans possess the “ability to think”, which is defined as long-term planning. It is this long term planning that allows humans to be “distinguished from other living beings” (Chapter 6, Part 2). Humans use their “ability to think” to establish a group. Within this group, humans cooperate with each other through labor specialization. For example, some people farm grain, some people convert that grain into food, some people create weapons, and some people create tools that facilitate all these previous tasks. Through this specialization of labor, all individuals within the group can prosper (Chapter 1, Part 1).

Without cooperation, a human would lack food to eat and weapons for self-defense. In addition, animals possess talents other than intelligence (such as raw strength) and these talents would quickly overpower an “intelligent” human. Humans’ intellect only matters when they are able to use that intellect, and that can only happen when they set aside their differences and band together into groups. This is why Ibn Khaldun endorses the statement “Man is ‘political’ by nature” (Chapter 1, Part 1).

Here, we have a question - are machines also ‘political’ by nature? I would argue “yes, machines are indeed ‘political’”. This is not just because machines are used by humans to perform ‘political’ tasks. It is also because machines are dependent on humans and other machines. Without compute, algorithms, data, and electricity, a machine will not be able to function properly. Without someone to build it (either a human or a machine), the machine would not even exist. Without someone to use it (again, either a human or a machine), the machine would become an expensive paperweight.

Many machines are mere “tools”, like calculators and hammers. They only operate on the whims of the group, and cannot do any long term planning of their own. They have no interest in the group’s survival.

However, some machines are more than mere “tools”, due to their ability to engage in long-term planning. Indeed, the LLMs of today can mimic the “ability to think” through prompting techniques such as Chain of Thought, Tree of Thought, and the ReAct pattern. LLMs are able to “establish an orderly causal chain”, which distinguishes them from calculators and hammers. LLMs’ long-term planning may not be as good as humans’, but this is only a difference of degree - and one that can be lessened as technological progress continues. The “ability to think” means that LLMs do not merely operate on the whims of the group (as calculators and hammers do), but can also react to the group’s actions and participate in its affairs.
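
Here a short illustration may help. Below is a minimal sketch, in Python, of a ReAct-style prompting loop in which a model alternates “Thoughts” and “Actions” to build an orderly causal chain. The call_llm helper, the prompt template, and the stopping rule are hypothetical placeholders of my own, not any particular vendor’s API; the sketch only illustrates the pattern.

    # A minimal sketch of a ReAct-style planning loop. call_llm() is a hypothetical
    # placeholder for whatever LLM API one happens to use; the point is only to show
    # how prompting can elicit an "orderly causal chain" of thoughts and actions.

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM API call."""
        raise NotImplementedError("Connect this to an actual LLM service.")

    REACT_TEMPLATE = """You are an agent that reasons step by step.
    Goal: {goal}

    At each turn, write:
    Thought: <your reasoning about what to do next>
    Action: <one concrete step, or FINISH if the goal is met>

    History so far:
    {history}
    """

    def react_loop(goal: str, max_steps: int = 5) -> list:
        """Repeatedly ask the model to think and then act, accumulating a causal chain."""
        history = []
        for _ in range(max_steps):
            prompt = REACT_TEMPLATE.format(goal=goal, history="\n".join(history))
            reply = call_llm(prompt)        # model returns "Thought: ... Action: ..."
            history.append(reply)           # the chain of thoughts and actions grows
            if "Action: FINISH" in reply:   # the model judges the plan complete
                break
        return history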

Therefore, I believe that thinking machines (like LLMs) are inherently a part of groups. They are reliant on the social structure, and thus must regularly interact with other machines and other humans. I treat both “humans” and “thinking machines” as “black boxes”, receiving input and returning output, participating in the same social dynamics (though the nature of participation may vary). At the same time, I do not want to repeat the phrase “humans and thinking machines” multiple times. Therefore, I will instead use the word “entities”, and claim that a group consists of multiple “entities”.

What’s important to note here is that the group’s existence is due solely to the interdependence of entities, as it is in the best interest of each entity to cooperate. Thus, we can see that the biggest threat to the group is independence. If an entity is able to survive, thrive, and achieve any goal without the assistance of anybody else, then this entity has no need for the group. If the entity is powerful enough to wipe out the group, it could do so without any fear of blowback. Even if the entity still serves the group, the relationship is completely unequal, with the rest of the group being little more than pets3.

Note though that both scenarios could only occur when we are dealing with a single powerful entity. If there are multiple powerful entities that can check each other, both cooperating and competing, then this independence is halted. We see a restoration of interdependence and the continued survival of the group.

The Temporal Nature of A Group

So who is a part of a group? Those who interact and cooperate with each other on a regular, consistent basis. That is to say, a group only exists within the present. People in the present do not have much of an interest in the distant past, because of the lack of a direct, immediate connection that ties the group together. It is “close contact” (whether through common descent due to blood relations, or through being clients and allies) that makes up a group. Ibn Khaldun makes this point clear during a discussion on the nature of pedigrees:

[P]edigrees are useful only in so far as they imply the close contact that is a consequence of blood ties and that eventually leads to mutual help and affection. Anything beyond that is superfluous. For a pedigree is something imaginary and devoid of reality. Its usefulness consists only in the resulting connection and close contact. If the fact of (common descent) is obvious and clear, it evokes in man a natural affection. … If, however, its existence is known only from remote history, it moves the imagination but faintly. Its usefulness is gone, and preoccupation with it becomes gratuitous, a kind of game, and as such is not permissible. In this sense, one must understand the remark, “Genealogy is something that is of no use to know and that it does no harm not to know.” This means that when common descent is no longer clear and has become a matter of scientific knowledge, it can no longer move the imagination and is denied the affection caused by group feeling. It has become useless. (Chapter 2, Part 8)

Though this logic was directed against historical affiliations, it applies equally to future affiliations. The distant future moves the imagination only faintly, and is thus useless.

While “thinking” is essential for long-term planning, the temporal nature of groups prevents the fulfillment of long-term plans. When entities in earlier time periods develop long-term plans, they cannot bind entities in later time periods to follow those plans. Entities in earlier periods can impose a plan while they hold power, but if entities in later periods object to it, the plan will be overturned once those later entities seize control of the group.

Dr. Hartley created ATO with the support of bureaucrats, activists, and government officials. But that group did not take into account the interests of future entities (that is, the “future” versions of the bureaucrats, activists, government officials, and Dr. Hartley). The entities within Dr. Hartley’s group only cared about the present, and did not anticipate the needs of future entities. Similarly, the future entities did not care about the interests of present entities either, caring only for their own time period.

Therefore, it should not be surprising that the “present” Dr. Hartley and the “future” Dr. Hartley opposed each other. The “present” Dr. Hartley created a long-term plan that relied on human labor. The “future” Dr. Hartley rejected that very plan, and created a new long-term plan that relied on automation. The two versions of Dr. Hartley never interacted with each other; each acted autonomously.

However, it is possible to represent the interests of both the past and the future. Indeed, self-proclaimed intermediaries (e.g., historians, futurists, intergenerational panels, large language models that mimic entities from the past and future) can arise. Dr. Hartley could have used these imperfect intermediaries to determine what the past and future want, and designed policies accordingly. Yet intermediaries are imperfect and prone to bias. There is always the danger that intermediaries will promote their own interests instead. In any event, these intermediaries are outnumbered by the entities who represent the present - and the present only engages in close contact with the intermediaries, not the time periods they represent.

Let us suppose that “present” Dr. Hartley is self-aware enough to know that her “future” self could turn against her proposed policy regarding human labor, and does not want to use self-proclaimed intermediaries to anticipate the actions of her “future” self. Instead, the “present” Dr. Hartley attempts to impose a “value lock-in”, binding future entities to the long-term plans of present entities. This would be profoundly hypocritical, as present entities freely overturned the policies of past entities. It would also be very difficult to set up.

But let’s assume it does happen. Let’s say the “present” Dr. Hartley creates a set of regulations that the ATO must adhere to, and prevents future entities from amending the text of those regulations. These regulations prohibit the use of automation without proper human oversight. This way, the “present” Dr. Hartley is able to force her “future” self into following the letter of her long-term plans.

However, nothing will stop future entities from violating the spirit of long-term plans. That is, without the presence of entities from earlier time periods to produce a fixed interpretation of the “value lock-in”, later entities are free to interpret the “value lock-in” without fear of contradiction.

Thus, the “future” Dr. Hartley will interpret the regulations created by the “present” Dr. Hartley in a self-serving and malicious manner, circumventing the “value lock-in”. For example, the “future” Dr. Hartley could interpret the term “proper human oversight” rather loosely.

It is, therefore, not possible for present entities to impose their ideas upon the world, then peacefully pass away, happy in the knowledge that their ideas will stay intact forever. If entities want their ideals to stay intact, then they must continue to exist, in their present form, forever. Dr. Hartley would need to stay alive, perpetually, frozen in time, without any change or deviation from her original beliefs.

I personally find this fate to be implausible, because it means these entities cannot change their beliefs in response to changing circumstances. Thus, these entities would become brittle and prone to self-destruction.

Thus, it is better for Dr. Hartley to accept change as a constant, rather than try to resist it.

Group Renewals

Groups do not persist indefinitely but are continuously renewed as present entities are replaced by future ones, and existing entities get modified by changes in the environment. Group renewal occurs both naturally (e.g., aging) and artificially (e.g., technological influence). Over time, even individual entities undergo significant changes, reflecting the dynamic nature of groups4.

As an example of how individual entities change, consider how you were six months ago, how you are today, and how you will be six months from now. Those different versions of “you” vary significantly, so much so that it may be illusory to claim they are “merely” the same entity across time. Similarly, the Dr. Hartley before the creation of ATO and the Dr. Hartley who led the ATO are two very different people who merely share the same name and memories.

Our illusory belief in continuity is useful, if only because it makes our lives easier to understand. But if we underestimate the changes that affect us, then we will be continually surprised by the rapid change that surrounds us today.

Ibn Khaldun talks about how groups change across time, though he would probably not use the positive word “renewal”. In fact, he is well known for his cyclical theory of history, where a group acquires internal unity and forms an empire, only for that internal unity to slowly disintegrate due to power struggles. He, however, does not talk about averting this cycle, believing it to be inevitable. In fact, he wrote, regarding the decline of empires, “[s]enility is a chronic disease that cannot be cured or made to disappear because it is something natural, and natural things do not change” (Chapter 3, Part 44).

A common way by which groups renew is through leadership changes. Assume that a group has a set of “customs”. Whenever a new ruling dynasty appears, it adds new “customs” to that group. This leads to a “discrepancy” between the group’s past “customs” and current “customs”. As ruling dynasties rise and fall, more “customs” get added to the group. As Ibn Khaldun wrote, “Gradual increase in the degree of discrepancy continues. The eventual result is an altogether distinct (set of customs and institutions)”. The group’s transformation is subtle, but continuous (Introduction).

I do not know how long a group renewal would normally take, but I estimate it to take place over a single generation (~20 years), with old members of the group getting replaced by new members of the group. This is how Ibn Khaldun viewed group renewals back in the 14th century.

If group renewals retained their slow, multi-decade trajectory, then perhaps I would be more hopeful about the future. The ATO would eventually embrace automation, but it would take not five years, but perhaps twenty to thirty.

However, the 21st century is different. I believe that the duration of group renewals is shrinking. As we expect technological progress to accelerate, we should also expect group renewals to accelerate for two reasons:

Group renewals complicate any long-term plans one possesses. This is because these long-term plans need to take into account how the group itself changes across time, and how these changes could affect the plans’ sustainability.

For example, the very idea of “human disempowerment” implies a distinction between humans and machines. I originally believed that this distinction could be minimized or erased by increasing human-machine collaboration. If humans and machines work together (e.g., division of labor, cybernetic augmentations), then we will no longer talk about disempowerment, in the same way that we do not talk about the human brain disempowering the human body. Ultimately, we would (metaphorically) see the rise of a “hybrid” successor species that traces its lineage from the Homo sapiens that came before. So long as this “hybrid” successor species still shares some common characteristics and symbols with the Homo sapiens that came before, we can claim that humanity has survived, and even thrived.

Of course, I would have to determine what “common characteristics and symbols” to preserve. But even if I had figured this out, group renewals would render this solution pointless. Though I can influence the direction of change slightly, this influence diminishes as I look at the long term. Therefore, I in the present would have little control over the behavior of these future “hybrids”. Given enough time, the resemblance between Homo sapiens and these “hybrids” will become nonexistent.

Furthermore, continuity can be fabricated - it is more efficient for “hybrids” to pretend to share “common characteristics and symbols” than to actually make the effort to keep those characteristics and symbols intact in their current forms. Without any way to guarantee the meaningful continuity of “common characteristics and symbols”, my long-term plan (encouraging human-machine collaboration) cannot survive contact with reality.

While this may be discouraging, it is good that I realized this now, rather than attempt to follow in the footsteps of Dr. Hartley - who tried to come up with a long term plan in the present, only for that long term plan to be abolished in the future.

The Implications

While Ibn Khaldun wrote a lot about groups, these three premises would suffice for our discussion on AI Safety.

  1. Entities (humans and machines) regularly interact with each other in Groups.
  2. Groups exist temporally, caring mostly for the interests of present entities and neglecting the interests of the past and the future.
  3. Group renewals happen regularly, as existing entities are modified and new entities replace old entities.

We live in the present. We express concern about the behavior of the future, specifically in regard to AI Safety. But there is no credible way to enforce any constraint on the future. For example, suppose the world universally implements a specific policy that we feel will prevent “human disempowerment” (a technological pause, funding for AI alignment, etc.). Once that policy gets implemented, it can then be reversed within the next generation, or even within the next six months. Or, alternatively, the policy could be interpreted in a self-serving and malicious manner, because you do not get to interpret your desired policy - the world does.

This is why human disempowerment (and other AI Safety issues) is so challenging, for we lack the ability to constrain the actions of the future. Even if we stop human disempowerment today (which is a dubious endeavor), we cannot stop human disempowerment tomorrow. That is to say, the present has always lacked the power to constrain the future, and I do not believe it will ever acquire this power.

If we in the present hold ourselves responsible for the future, without any limits whatsoever, then we would hold ourselves responsible for everything that happens a decade from now, a century from now, a millennium from now, and so on. This is the case even if we have no way of actually affecting those future events. This responsibility would be impossible to fulfill, especially since we cannot predict what will happen even six months from now.

This unlimited responsibility will lead to unrealistic expectations, high burnout, and a sense of perpetual doom. This would all be acceptable if we in the present indeed knew what is best for the future. But even that is dubious at best. The present does not truly know what the future wants, instead relying on intermediaries.

Since we cannot control the future, and it is unfair to hold ourselves responsible for things that we cannot control, our responsibility towards the future must diminish to a more reasonable, realistic level. Rather than try to force the future down a certain path, we would want to “encourage” the future to adopt certain policies that will help them manage human disempowerment. Whether said encouragement actually works is out of our hands.

This is akin to the relationship between a parent and a child - the parent raises the child and tries to teach the child the proper conduct, but after a certain point, the child is autonomous and should be held responsible. We do not have to like the decisions of future entities, but we do have to recognize their autonomy.

By reducing our responsibility towards the future and lowering our expectations, we make the field of AI Safety a lot less daunting. We are able to meet realistic, plausible standards that we set for ourselves. After that, let the future decide what they want to do with the world. Good or bad, it is their world now. This is not exactly an optimistic view of the world. But we have to accept our limitations.

However, lowering our expectations of the future should not be used as an excuse to reduce accountability in the present, or to neglect policy-making. Indeed, Ibn Khaldun believed that rulers should adhere to political norms5 that constrain their behavior, pointing out that if this does not happen, people will resent their rulers and rebel against them (Chapter 3, Part 22, Part 23). He endorsed deontological principles (as implied in Chapter 3, Part 50) and denounced injustice (Chapter 3, Part 41). Though we cannot hold the future accountable to our ethical standards, we should still hold ourselves accountable to those same standards.

For example, rather than attempting to constrain the behavior of the future, Dr. Hartley and the ATO could instead adopt a lighter touch by encouraging “best practices” within the present (e.g., ensuring meaningful human oversight of technology, carefully regulating the pace of innovation so as not to leave a mess for future generations to clean up). Dr. Hartley could focus her efforts on fulfilling her limited obligations to the present. The future can overturn Dr. Hartley’s actions. But this does not affect Dr. Hartley’s present obligations.

Alternative Approaches

Some may claim that my approach, while pragmatic, is also very defeatist. They might want more hopeful solutions. In that regard, I can only think of three other approaches. I do not believe any of these approaches are practical, which is why I don’t endorse them. But I can be proven wrong.

  1. Though it is impossible to constrain the future, that does not mean that entities in the future would instantly overturn present policies. It only implies that entities could do so, if they so choose. They may, instead, refrain from exercising this power out of their own self-interest. In fact, many policies made in the past remain intact to this day (such as prohibitions on chattel slavery). The fact that future entities could abolish a policy doesn’t mean they will abolish it, or that they even want to abolish it.

    Therefore, if one designs a policy that is effective at managing human disempowerment for a certain period of time, then we can say the policy is successful, even if this policy is always at risk of being abolished.

    The phrase “certain period of time” is concerning, though. This time period cannot be indefinite or infinite. Instead, we would have to set a time-bound goal - say, keeping the policy intact and enforced for five years. A time-bound goal, however, is not credible, since it means we would have to declare victory once the goal is met, regardless of what happens afterwards. If we develop a global policy that mandates effective human-in-the-loop oversight, and this policy meets our time-bound goal of being enforced for five years, then we would declare victory, even if the policy gets abolished or subverted during Year 6. I do not believe this is what humans actually intend when they make policies.

    Even if we agree upon a time-bound goal, we would also have to implement it. While some policies can persist for a long time, other policies can be abolished very quickly. Thus, we need a sustained research program to determine why certain policies are able to persist while other policies fall apart. This program cannot merely limit itself to historical analysis though, especially if group renewals are accelerating. Today’s policies might have a shorter lifespan due to technological and social changes.

  2. Self-proclaimed intermediaries do exist, but their representation is imperfect and their power is limited. If we are willing to tolerate imperfect representation (for even imperfect representation is better than no representation), then we should simply increase the number and strength of these self-proclaimed intermediaries. These intermediaries should also perform valuable roles within the group itself (e.g., research, companionship, other intellectual labor), thereby ensuring that these intermediaries are genuinely respected for their contributions to society. We also need to make sure that these intermediaries genuinely represent their respective time periods, instead of just echoing the biases of the present.

    Only then will the group pay attention to the past and future, thereby developing policies that will not be immediately overturned as time passes. This can be incredibly challenging, as present entities do not like to lose their power to the past and future. But if we want to avoid human disempowerment in the future, then sacrifices need to be made in the present.

    I believe that large language models are the most scalable “intermediaries”, due to the ease of fine-tuning (e.g., digitizing a diary of yourself as a child, thereby allowing you to interact with said child) and prompting (e.g., “act as Thomas Hobbes and comment on current events”). A rough sketch of this idea appears after this list.

    However, scaling via this method is inherently risky and should be done with great caution, especially since LLMs may simply hallucinate responses instead of engaging in genuine representation.

    Representing the future is harder than representing the past, simply because we do not know what will happen in the future. At best, we can only guess. So if we do opt to use LLMs for representation, then we would need to have the LLMs simulate multiple different futures, in the hope that at least one of these futures ends up being “correct”.

  3. The group’s existence is due to interdependence between entities. A power imbalance between entities, such that one entity can exist by itself without the need of a group, is incredibly dangerous and risky. Therefore, the best solution is to halt this power imbalance, such that all entities remain on the same playing field.

    A ban on technological progress is both implausible and unlikely to be endorsed. But we can achieve the same result by making sure that technological progress is evenly distributed across the world. This means the mass proliferation of machines, such that human-machine collaboration becomes widespread. This way, no one entity would become independent.

    This approach should scale even to the superintelligence level. Instead of allowing one superintelligence to dominate the world, we want many superintelligences, all squabbling with each other. This proliferation of machinery will also prevent machines from permanently colluding with each other against a common foe, for the same reason that it is difficult for humans to permanently cooperate with each other.

    While humans will have lost a lot of control over machines, no single machine would be able to acquire that control - it would have been diffused across the entire machine ecosystem. Therefore, humans can focus on managing the resulting chaos.

    I do not endorse this solution because the endless strife between machines is not desirable. Still, this is better than being left to the mercy of a single superintelligence.
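
To make the second approach more concrete, below is a rough Python sketch of how such LLM “intermediaries” might be consulted. The call_llm helper, the persona prompts, the chosen year, and the list of candidate futures are all hypothetical illustrations of my own, not a tested methodology; the sketch merely shows how one could sample several simulated pasts and futures in the hope that at least one is roughly right.

    # A rough sketch of using LLMs as "intermediaries" for absent time periods via
    # persona prompting. call_llm() is again a hypothetical placeholder, and the
    # persona prompts and candidate futures are illustrative assumptions only.

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM API call."""
        raise NotImplementedError("Connect this to an actual LLM service.")

    PAST_PERSONA = (
        "Act as Thomas Hobbes. Using only ideas available to him, comment on the "
        "following policy proposal:\n{policy}"
    )

    FUTURE_PERSONA = (
        "Act as a policymaker in the year {year}, in a world where {scenario}. "
        "Comment on how the following policy, made decades earlier, affects you:\n{policy}"
    )

    def consult_intermediaries(policy: str) -> dict:
        """Gather commentary from one simulated past voice and several simulated futures."""
        # Since we cannot know which future is "correct", we sample several scenarios.
        scenarios = [
            "automation handles most regulatory work",
            "a technological pause was enacted and then reversed",
            "human-machine collaboration is ubiquitous",
        ]
        opinions = {"past:Hobbes": call_llm(PAST_PERSONA.format(policy=policy))}
        for scenario in scenarios:
            prompt = FUTURE_PERSONA.format(year=2045, scenario=scenario, policy=policy)
            opinions["future:" + scenario] = call_llm(prompt)
        return opinions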

Conclusion

It is important to have hope for the future, but this hope must be tempered with realistic expectations. Ibn Khaldun’s commentaries on the nature of groups and their temporal limitations are a wake-up call, letting us know what we cannot do. If we know what we cannot do, then we will also know what we can do.

We want the benefits of technological advances. To obtain them, we have to deal with the consequences. Bracing for the impact may not be pleasant. But doing so will allow us to thrive in an age of rapid technological growth.

Footnotes

  1. This definition may be overly broad (the US’ autonomous weapons policy is focused on the preservation of “appropriate levels of human judgment”, and views that as a more flexible approach than “meaningful human control”). The definition is also oddly narrow (it does not discriminate between types of human control - if all of humanity delegated power to a single human tyrant, then no human disempowerment would have happened). However, this definition does capture the essence of Dr. Hartley’s concerns.↩︎

  2. Note that while meaningful oversight might be desired, it might not be possible. According to the article “What’s Wrong With Wanting Humans In The Loop?”, oversight may very well be a “figment of our imagination”, as the algorithms in question are continually modified by programmers with the explicit purpose of getting humans to agree to whatever the machines say:

    “When algorithms fail in the field they sometimes fail for technical reasons (e.g., not enough data or poor fit), but more often they fail because of the human-machine interaction problems. When that happens, we identify why the interaction is problematic, why the human is not trusting the machine, or why the way that the data is presented is being misunderstood — and then we change those things. Such changes should make us question to what extent human oversight of algorithms is truly meaningful.”

    The article in question recommends looking for alternatives to human oversight, such as outright prohibiting the use of AI in certain scenarios (e.g., “killer robots with facial recognition”). I asked GPT-4o about alternatives to human oversight - the results are in Appendix A.↩︎

  3. It’s an open question whether humans and thinking machines would resent the idea of becoming pets. While Dr. Hartley dislikes the idea tremendously, other entities might willingly accept this fate. Currently, our understanding of LLMs is limited, so I will remain agnostic about LLMs’ attitudes. Instead, I will note two humans’ thoughts on domestication.

    Felix de Simon resents the possible domestication of humanity. He wrote that this domestication would be opposed to his desire for “human self-determination”, since humanity would be “alive and happy but with our trajectory determined by a utility-maximizing agent beyond our control”.

    By contrast, Samuel Butler, a British satirist in the 19th century, wrote his thoughts on a machine takeover. He stated that “the mass of mankind will acquiesce in any arrangement which gives them better food and clothing at a cheaper rate, and will refrain from yielding to unreasonable jealousy merely because there are other destinies more glorious than their own”. According to Samuel Butler, “Man is not a sentimental animal where his material interests are concerned”.↩︎

  4. This paragraph was generated by GPT-4o, summarizing content that I manually wrote.↩︎

  5. In Chapter 3, Part 23, Ibn Khaldun notes that these political norms could be based either on an intellectual basis or on a religious basis. Ibn Khaldun acknowledges that political norms based on an intellectual basis would benefit people’s “worldly interests”, but prefers political norms based on a religious basis, since they would be “useful for life in both this [world] and the other world [i.e., the hereafter]”. For the purpose of this paper, though, the nature of political norms is irrelevant - only that such norms exist to constrain ruler behavior and prevent social unrest.↩︎


Appendix A: Alternatives To Human Oversight

uncovered during a conversation with GPT-4o, prompted to write as an informal, cynical Redditor - edited by Tariq Ali

GPT-4o:

First off, oversight sounds nice on paper, but it’s a weak crutch when dealing with AI. Humans get lazy and trust the machines too much—automation bias. The whole point of AI is to reduce human error and inefficiency, but then we turn around and say, “Hey, let’s have humans oversee these things,” which kind of defeats the purpose. It’s like hiring a babysitter for your babysitter.

Now, there are other ways to tackle AI without leaning on this oversight nonsense:

  1. Built-in Ethics and Safety Protocols: Embed ethical guidelines and safety protocols directly into the AI. Think of it like Asimov’s Laws of Robotics but on steroids. This way, the AI has an intrinsic understanding of what’s acceptable and what’s not.

  2. Decentralized AI Systems: Instead of one big AI making all the decisions, spread the tasks across multiple, smaller AIs that check each other. It’s harder for one rogue AI to screw things up if it has to get past a bunch of others.

  3. Dynamic Learning and Adaptation: Implement AI that can learn and adapt without needing constant human babysitting. These AIs should be able to recognize when something’s going wrong and correct themselves autonomously.

  4. Robust Fail-safes and Kill Switches: Develop fail-safes and immediate shutdown mechanisms. If an AI starts acting out of line, it should have hard-coded protocols to stop itself or be stopped automatically by another system.

  5. Transparent Algorithms and Decision-making: Make AI decision processes transparent. If everyone can see how decisions are made, there’s less room for shenanigans. It keeps things accountable without needing a dedicated human overseer.

  6. Regulated Development and Deployment: Tighten the regulations around how AI is developed and deployed in the first place. If you nip the potential problems in the bud during the development phase, you won’t need as much oversight later on.

  7. Human-AI Collaboration: Focus on true collaboration where humans and AI work side by side, each doing what they do best. Humans provide the creativity and ethical judgment, while AIs handle the grunt work and data processing.

Ultimately, the key is to design AI systems that don’t need a bunch of humans looking over their shoulders all the time. If your system requires constant human intervention, you’re doing it wrong. Dr. Hartley’s strategy is like putting a band-aid on a bullet wound—totally missing the bigger picture and setting herself up for failure.
