
The Hard Road to Regulating the Revolution
As AI transforms our world at an unprecedented pace, the challenge lies not just in harnessing its potential, but in crafting regulations that keep up with its revolutionary impact.
by Abdul Jarrar
Let’s be honest: no one can fully predict the extent to which Artificial Intelligence (AI) will impact society or disrupt businesses. What is certain, however, is that it has already emerged as one of the most transformative forces of our time.
Throughout my career of more than 25 years in the tech industry—most of it spent as a senior executive leading a wide range of products, customer engagements, investments, and ecosystem partnerships—I’ve witnessed AI reshape priorities for enterprises and governments globally, and for good reason. AI’s ability to integrate seamlessly into devices and applications, learning from and adapting to our lifestyles, raises profound questions. Will these systems become sentient? Could they develop emotions and viewpoints of their own? Most importantly, will their decisions align with our values? These are the pressing dilemmas we must confront as AI deployment expands.
Harnessing the power of AI can be tremendously beneficial. Envision, for example, a future where machines equipped with 3D spatial intelligence—an advanced capability allowing AI to perceive, understand, and navigate physical environments in three dimensions, similar to human spatial awareness—can instinctively adapt and react in real time. This technology may soon be real: Fei-Fei Li, often referred to as the “Godmother of AI,” leads a startup focused on advancing 3D spatial intelligence. Li and her company are pioneering innovations in how machines interact with their physical environments. In doing so, however, she has also stressed the importance of ethical considerations.
Speaking at the 2023 IEEE World AI Conference, Li said: “The challenge is not just teaching machines to see or move in a 3D world but ensuring they do so ethically and in ways that enhance human life.” She is right. While the potential of AI is immense, it brings significant societal, governance, and innovation challenges that demand immediate attention.
Geoffrey Hinton, known as the “Godfather of AI,” warned in a May 1, 2023 interview with The New York Times: “We don’t really understand how these systems work internally. We can see what they do, but we don’t fully grasp how they arrive at their decisions. It’s like a black box.” This lack of transparency underscores the urgent need for frameworks to address the following priorities:
Establishing a Standards Body
A global standards body will be essential to ensuring the transparency and explainability of AI. Without this foundation, addressing key issues like ethical AI, bias, fairness, and emotional manipulation will be nearly impossible. Such a framework would allow governments and businesses to certify AI technologies before widespread deployment. When considering the importance of such a body, we should remember Hinton’s wise words: “The more these systems learn, the more unpredictable they become. That unpredictability should worry us all when decisions that could affect millions are at stake.”
Bridging the Digital Divide
AI must not exacerbate existing inequalities. Equal access to technology is crucial, but the cost of building and maintaining data centers often limits access. Nations should explore alternatives based on open standards and open ecosystems to maintain competitiveness and inclusivity.
Strengthening Privacy Protections
Data privacy is paramount. Most AI systems require vast amounts of data (textual, visual, auditory) for training, often without users understanding how their information is being used. This highlights the urgent need for stricter regulations to safeguard users’ privacy and ensure transparency in AI data use.
Sandboxes for Risk Assessment
Governments need mechanisms, such as sandboxes and expert evaluations, to better understand and test AI systems. The societal implications of this disruptive technology surpass anything we’ve encountered before. Leaving AI completely unregulated is a significant risk, and self-regulation by companies alone will not suffice.
Collaboration: A Necessity, Not a Choice
The rapid development and deployment of AI present both opportunities and challenges. While many unknowns remain about artificial general intelligence (AGI) and the potential sentience of AI, one thing is clear: society, industry, and policymakers must prepare for these eventualities, which may arrive sooner than anticipated.
Effective regulation is crucial to harness AI’s benefits while mitigating its risks. Developing this effective regulation requires urgent and sustained collaboration between policymakers, researchers, and industry leaders. By working together through public-private partnerships and inclusive frameworks, we can ensure that AI is developed and used responsibly, ethically, and to the benefit of humanity.
Explore Abdul Jarrar’s insights in the first issue of Centre Mag.
