World must shape a more inclusive AI
Artificial intelligence has emerged as a defining topic at the World Economic Forum in Davos. It is often described as a tool — powerful, yes, but ultimately neutral. This framing is misleading, because AI has crossed a qualitative threshold. It is no longer simply used within systems of power; it is actively restructuring them.
When algorithms determine which languages are digitally visible, which populations are deemed “risky”, which narratives circulate globally, and which data is economically valuable, AI ceases to be a passive instrument. It becomes a player — one that silently reshapes governance, markets, and public discourse.
The question, then, is not whether AI is transforming the world. It is who shapes that transformation.
Algorithmic bias is often framed as a technical problem that better data or improved models can solve. But bias is rarely accidental. It reflects structural imbalances in whose data is collected, whose realities are modeled, and whose values are encoded into systems.
Most large-scale AI models today are trained on datasets dominated by a narrow set of countries, languages and cultural assumptions. Vast portions of humanity — particularly in Africa, parts of Asia, and Latin America — remain underrepresented, misrepresented or entirely absent. When such systems are deployed globally, they do not merely reproduce bias; they normalize it. According to UNDP, AI readiness exceeds 70 percent in advanced economies, but falls below 20 percent in fragile states, raising the risk of a “new Great Divergence” driven not by factories or machines, but by data and algorithms.
In this sense, bias is not only a technical flaw but a geopolitical signal, revealing how historical asymmetries are being translated into digital form.
Data is the primary resource of AI. Across the global majority, enormous volumes of data are generated daily — through public services, mobile platforms, biometric systems, social media and digital finance. Yet control over this data often lies elsewhere. Storage infrastructure, computing capacity, and model ownership remain concentrated in a few global hubs.
This raises a fundamental question: can societies meaningfully govern AI systems built on data they do not control?
Data sovereignty is not about isolation. It is about agency — the ability of states and communities to decide how data is collected, interpreted, shared and valorized. Without this agency, AI risks becoming a new vector of domination rather than a shared engine of progress.
This trajectory, however, is not inevitable. When Google was mapping African buildings using AI within Google Maps, African researchers and engineers played a central role in shaping how models were trained and validated. Local expertise enabled the system to recognize informal settlements, diverse architectural forms and urban patterns often invisible to conventional datasets. What mattered was not the technology itself, but who shaped it. This was not AI imposed from the outside, but AI co-created with those who understand the context best. The result was not only greater accuracy, but also greater legitimacy.
The lesson is clear: collaboration is not the problem. Asymmetry in collaboration is.
History offers sobering lessons. Traditional imperialism relied on military force and territorial control. Contemporary forms rely on economic leverage, technological dependence and control over strategic resources. Current tensions over Venezuela — where sovereignty over natural resources collides with external political and economic pressure — illustrate how domination today often operates without formal occupation.
AI risks following a similar path. Instead of oil or minerals, the strategic resources are now data, computing and algorithms. Instead of military bases, influence is exercised through platforms, standards and infrastructures. If AI systems are designed, governed, and deployed without meaningful participation from most of the world, they risk becoming tools of digital imperialism — subtly constraining choices, narratives and development pathways while presenting themselves as neutral and inevitable.
Yet dominance is not destiny. For decades, Hollywood shaped global cultural images almost unilaterally. Today, the landscape is far more plural. African cinema, Asian streaming platforms, Latin American series and local creative industries now reach global audiences on their own terms.
This shift did not happen by accident. It was enabled by local talent, accessible technologies, policy choices, and audiences demanding stories that reflected their own realities.
AI can follow a similar trajectory. The fact that a few actors dominate today does not mean they must dominate forever. With the right investments in local ecosystems, data governance and human capacity, AI can evolve from a monoculture into a plural, multipolar technological landscape.
AI is global in impact, yet its governance remains uneven. A few countries and corporations dominate standard-setting and norm-shaping spaces, while much of the global majority remains under-represented.
There is, however, a window of opportunity. Recent efforts led by the United Nations reflect a growing recognition that AI governance cannot be left to a few powerful actors alone. These initiatives acknowledge that AI raises not only technical risks, but also deep questions of equity, development and sovereignty. Today, only seven countries actively participate across all major non-UN AI governance initiatives, while 118 countries — mostly in Africa, Asia-Pacific, and Latin America — are absent.
For this momentum to translate into legitimacy, the involvement of the global majority is crucial. Inclusion cannot be symbolic. Countries and communities most affected by AI systems must have real influence over priorities, risk definitions and accountability mechanisms. Otherwise, global governance risks reproducing the very imbalances it seeks to address.
Whether AI becomes a new instrument of imperialism or a foundation for a more balanced global order will depend on decisions made now: about data sovereignty, co-creation, governance inclusion and whose knowledge counts. If AI is shaping the world, then the world must shape AI — together, pluralistically and without domination.
The author is a member of the UN Secretary-General’s High-Level Advisory Body on AI and program director of FORCE-N at Cheikh Hamidou Kane Digital University.
The views don’t necessarily reflect those of China Daily.