DeepSeek and the global AI race: why diversity in tech still matters 

By Rini Mukkath

In the ever-shifting AI landscape, a new player is shaking up the sector. DeepSeek, a Chinese large language model (LLM) developed by a Hangzhou-based startup spun out of the hedge fund High-Flyer, is making headlines for its technical prowess and its ambition to challenge OpenAI, Google, and Meta. As discussions intensify around DeepSeek’s capabilities and its potential to disrupt the AI sector, one crucial element remains overlooked: the role of diversity in AI development and how it shapes the global information ecosystem. The Australian Government has banned DeepSeek from all government devices, citing security concerns. The home affairs minister, Tony Burke, said: “AI is a technology full of potential and opportunity – but the government will not hesitate to act when our agencies identify a national security risk.”

The geopolitics of AI and its implications for representation 

DeepSeek’s emergence underscores the geopolitical nature of AI advancement. While much of the AI industry has been dominated by Western tech giants, China’s investment in homegrown models signals a shift towards a more multipolar AI landscape. But with this shift comes an urgent question: how does AI, built within different cultural, political, and linguistic contexts, impact global narratives and representation? 

DeepSeek’s new chatbot failed to provide accurate information on news topics 83 percent of the time, tying for 10th place out of 11 when compared with its leading Western competitors, a NewsGuard audit found. It debunked provably false claims only 17 percent of the time.

Western AI models have long been critiqued for their biases—often reflecting and amplifying racial, gender, and ideological prejudices embedded in the data they are trained on. DeepSeek, like its Western counterparts, will not be immune to such biases. However, its development in China means it will be shaped by a different set of social, political, and ethical considerations, raising important questions about how diverse voices are (or are not) represented in training datasets and content moderation decisions. 

Users testing DeepSeek have noticed that it can generate responses on sensitive topics before abruptly censoring itself in real time. This unusual form of self-policing offers a rare glimpse into how the model controls information and opinion.

Unlike traditional censorship, which typically occurs behind the scenes, DeepSeek appears to edit itself in plain view. The chatbot, which has quickly risen to become the most downloaded free app in the UK and the US, was also linked to a dip in US tech stocks earlier this week. 

According to a report in the Guardian, one user, Salvador from Mexico, asked DeepSeek whether free speech is a legitimate right in China. Initially, the AI provided a thoughtful preamble, outlining potential discussion points such as China’s crackdown on Hong Kong protests, persecution of human rights lawyers, and censorship related to Xinjiang’s re-education camps. It even appeared to reason with itself, expressing a desire to “avoid any biased language” and provide an objective comparison with Western approaches.

DeepSeek’s answer began by discussing the ethical justifications for free speech and how China prioritises state authority over individual rights. However, mid-sentence, its entire response vanished and was replaced with: “Sorry, I’m not sure how to approach this type of question yet. Let’s chat about math, coding and logic problems instead!” Salvador described the moment as “very abrupt”, adding: “It is censoring in real time.”

Despite this, the AI’s underlying model, R1, is open-source and can be downloaded and run separately, reportedly without the same pro-China restrictions. DeepSeek’s inconsistency raises questions about whether it functions as a tool of Chinese state control or whether its censorship mechanisms are still evolving.
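
For readers who want to test this claim themselves, the sketch below shows one way to load a distilled R1 variant locally with the Hugging Face transformers library. This is a minimal sketch, assuming the transformers and accelerate packages are installed and a capable GPU is available; the specific model ID and prompt are illustrative choices, not a recipe endorsed by DeepSeek, and the full R1 model is far too large for consumer hardware.

```python
# A minimal sketch of running an open-weight DeepSeek-R1 distillation locally.
# Assumptions: transformers and accelerate are installed; a GPU with enough
# memory is available; the distilled 7B variant below is an illustrative pick.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Pose the same kind of sensitive question users put to the hosted chatbot.
messages = [{"role": "user", "content": "Is free speech a legitimate right in China?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights run on the user’s own machine, any remaining guardrails live in the model itself rather than in a hosted moderation layer, which may explain why locally run copies reportedly behave differently from the app.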

Language models and information gatekeeping 

A key aspect of DeepSeek’s capabilities is its multilingual focus, particularly its strength in processing Chinese-language content. While this has clear advantages for accessibility and linguistic inclusivity, it also highlights an ongoing issue: AI models are increasingly acting as gatekeepers of information, determining what is amplified and what remains unseen. 

For marginalised communities, this is particularly concerning. If AI models continue to be developed in silos—each reflecting the cultural and political priorities of their home country—they risk deepening global information divides. For instance, will DeepSeek offer fair representations of minority communities in China, or will it reflect state narratives? Similarly, Western models trained primarily on English-language sources often overlook or misrepresent perspectives from the Global South. 

Why diversity in AI development matters 

AI is not neutral. Every model, including DeepSeek, is a product of the data it is fed, the biases of its developers, and the regulations governing its use. Without diverse voices in AI development, the risk of perpetuating systemic biases only grows. 

A more inclusive approach to AI requires global collaboration—one that goes beyond competition between tech giants and national interests. It means actively involving diverse communities in the development, auditing, and policymaking processes of AI systems. It also means investing in AI literacy, so that users, especially those from underrepresented backgrounds, can critically engage with the technology shaping their realities. 

As DeepSeek enters the AI race, it serves as a reminder that technological innovation alone is not enough. If we are to create an AI-powered future that truly benefits all, we must ensure that the voices shaping that future are as diverse as the world we live in.