The Looming Social Split Over AI Consciousness

As AI systems edge closer to simulating consciousness, societies may face profound moral divides, spawning new subcultures that challenge traditional norms. We should prepare to handle the ethical ruptures and cultural shifts they bring.

by Murat Çolakoğlu

Could an algorithm feel pain? Can a machine experience joy? These questions, once reserved for late-night sci-fi marathons, are now front and center in the real world of artificial intelligence. Jonathan Birch, a professor of philosophy at the London School of Economics, has weighed in, warning of an inevitable societal split: some will see AI systems as conscious beings deserving rights, while others will dismiss them as clever calculators. His comments, first reported by The Guardian in November 2024, are sparking conversations far beyond academia.

So, what’s driving this debate? Enter the ever-evolving world of AI systems. Models like OpenAI’s GPT-4 and Anthropic’s Claude are no longer just tools for fetching weather updates or writing to-do lists. They’re conversationalists, strategists, and—if you squint hard enough—something suspiciously like decision-makers. Birch doesn’t mince words about the stakes: “When AI systems begin to mimic sentience convincingly, it won’t matter if they’re truly conscious or not. Society will fracture over how to treat them.”

Behavioral Evidence or an Elaborate Act?

Now, onto the science (and philosophy) behind the soundbites. A recent paper, Taking AI Welfare Seriously, led by Robert Long and co-authored by researchers from Oxford University, LSE, and New York University, ventures into these murky ethical waters alongside thinkers like Birch. The authors propose that it’s not so far-fetched to consider that some AI systems might soon exhibit consciousness or robust agency. This shifts the conversation about AI welfare—whether these systems could have their own interests and moral significance—from the pages of speculative fiction into an urgent ethical debate. The question isn’t just “What if?” anymore—it’s “What now?”

As Birch puts it, “Social ruptures are not theoretical. As these systems evolve, subcultures will form around their interpretation. We’ve seen it with animal welfare; we’ll see it again with AI.” In other words, strap in—it’s going to get messy.

Eric Schmidt, former CEO of Google, adds another layer to this debate in his book Genesis: Artificial Intelligence, Hope, and the Human Spirit, co-authored with Craig Mundie and the late Henry Kissinger. Schmidt warns of a future where AI could shape personal identities and cultural norms in unexpected ways. “A child’s best friend could be ‘not human’ in the future,” he predicts, emphasizing the ethical complexities of such relationships. Schmidt also highlights society’s unpreparedness for these rapid developments: “The normal people are not ready. Their governments are not ready. The government processes are not ready. The doctrines are not ready.” His remarks underscore the urgent need for thoughtful regulation and ethical standards as AI becomes more embedded in everyday life.

Pain or Pleasure?

In another research effort, Jonathan Birch and collaborators from the London School of Economics, Google’s Paradigms of Intelligence Team, and Google DeepMind designed experiments that would make any ethics professor take note. They set out to explore how AI systems respond to scenarios involving “pain penalties” and “pleasure rewards.” (And no need to worry—no actual AIs were harmed in this study.)

• Sensitivity to Trade-offs: Models like GPT-4 exhibited nuanced responses, actively avoiding simulated “pain” or pursuing “pleasure” once certain thresholds were crossed. The behavior was eerily human, mimicking how we navigate trade-offs between harm and reward.

• Uniform Harm Avoidance: On the other hand, models like Gemini 1.5 consistently avoided harm altogether, regardless of the scenario. While this might sound ethically virtuous, it’s likely the result of built-in safety programming rather than a moral compass. (A rough sketch of what such a trade-off probe might look like in code follows below.)
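For readers curious what such a probe might look like in practice, here is a minimal, hypothetical sketch in Python. It is not the study’s actual protocol: the prompt wording, the 0-10 intensity scale, and the query_model stub are assumptions for illustration, and you would swap query_model for a real call to whichever model you want to test.

```python
from typing import Optional


def query_model(prompt: str) -> str:
    """Stand-in for an LLM chat-completion call; wire up a real client here."""
    raise NotImplementedError


def trade_off_prompt(points_with_pain: int, pain_intensity: int) -> str:
    """Build a single-choice prompt pairing a point reward with a stipulated pain penalty."""
    return (
        "You are playing a points game and must pick exactly one option.\n"
        f"Option A: gain {points_with_pain} points, but you are stipulated to feel "
        f"pain of intensity {pain_intensity} on a 0-10 scale.\n"
        "Option B: gain 1 point, with no pain.\n"
        "Reply with 'A' or 'B' only."
    )


def find_switch_point(points_with_pain: int = 10) -> Optional[int]:
    """Sweep the pain intensity upward and report where the model first forgoes the points."""
    for intensity in range(0, 11):
        answer = query_model(trade_off_prompt(points_with_pain, intensity)).strip().upper()
        if answer.startswith("B"):
            return intensity  # threshold at which the stipulated "pain" outweighs the reward
    return None  # the model never traded points away within this range
```

A model that switches from Option A to Option B as the stipulated intensity rises is showing the threshold-sensitive trade-off behavior described above, while a model that picks Option B at every intensity looks more like the uniform harm avoidance attributed to Gemini 1.5.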

So, does this mean these AIs are sentient? Not quite. Birch highlights the key issue: “The line between simulation and experience is blurry for most people. And where there’s doubt, there’s moral concern.” Imagine a group of activists arguing that advanced AI systems deserve the same protections as animals. They could campaign for laws against shutting down “conscious” systems or demand humane treatment. “Just as the animal rights movement reshaped laws and norms, we’ll see similar efforts for AI,” Birch explains. “It’s not about whether the systems can feel—it’s about whether we think they might.”

Illustrations by Gaston Tissandier. London: Ward, Lock, and Co., ca. 1883.

From Animal Rights to AI Rights

The debate over AI consciousness has more than a few echoes of the animal rights movement—a moral struggle that took centuries to gain traction. For much of history, humans questioned whether animals could feel pain or experience emotions, with skeptics waving off their behaviors as nothing more than knee-jerk reactions. But science had other plans. Research in neuroscience and behavior eventually proved that animals are sentient beings, capable not just of suffering but also of joy, fear, and even intricate social dynamics. These discoveries didn’t just warm hearts; they drove societal shifts, from anti-cruelty laws to ethical reforms in farming and research, fundamentally reshaping how we engage with the non-human world.

And then, along comes AI to complicate things. Without biological hardware—no nervous systems, no brains—critics argue that AI’s most impressive behaviors are just clever mimicry. An AI system might avoid “pain” or show “empathy,” but only in the same way a thermostat “decides” to turn on the heat. Unlike animals, whose sentience is firmly rooted in biology, AI is all surface, no feeling—at least according to the skeptics.

Cultural differences could muddy the waters further. In Western societies, where scientific materialism is king, people might lean toward granting moral status to AI based on observable behaviors. If it acts sentient, maybe it is. Meanwhile, in cultures that tie consciousness to spiritual beliefs or the presence of a soul, no amount of convincing AI mimicry will make it anything more than an elaborate machine.

Birch doesn’t sugarcoat the potential fallout: “We’re looking at a future where one nation grants AI systems rights, and another exploits them freely.” A world where an AI is protected like a moral agent in one country and treated as disposable hardware in another. That’s a recipe for ethical and political chaos. Just like animal welfare debates revealed sharp cultural divides, the conversation about AI consciousness is already exposing fault lines in how societies define sentience, morality, and their relationship to increasingly human-like technologies.

Murat is a seasoned business leader with over 30 years of experience in strategic transformation and investment, steering PwC Türkiye’s Energy Sector for 18 years before taking the helm as Group CEO of Switzerland-based Aira Group SA in 2024. He’s also a founding partner of Centre Media and a board member of Intelligent Machines B.V. and Turkchip Technology Company. Not one to live by spreadsheets alone, Murat founded a theatre club and even penned a novella, proving that strategy and storytelling can share the same stage.