By Maisie Jeffreys

Key Takeaways
- AI is the field of computer science focused on creating systems that can perform tasks normally requiring human intelligence
- It has the potential to transform organisations, practices and institutions for the better
- AI development is predominantly driven by Western institutions and data, creating inherent biases in the knowledge it produces
- If we are not intentional about inclusivity, we are at risk of building a Western-centric, global knowledge economy that reproduces colonial logics
What is AI?
Before we dive into the weeds of ethics, philosophy and systems, it’s worth understanding what we mean by Artificial Intelligence (AI).
AI is the field of computer science focused on creating systems that can perform tasks normally requiring human intelligence, such as natural language processing, learning, problem-solving and decision-making. These systems can be used for a wide range of applications, from automating repetitive tasks to enabling complex decision support. AI technologies perceive their environment, process information, and act to achieve specific goals.
The ultimate goal of AI is to develop machines that can perform complex tasks autonomously, adapting to new situations based on previously acquired knowledge.
There are many different ways to categorise AI based on functionality, uses and so on – but for now let’s focus on two aspects that you’ll probably be familiar with:
AI Machine Learning Models (AI-MLM) allow computers to learn from input data, recognise patterns and make predictions or decisions without being explicitly programmed. This ability allows AI-MLMs to improve their performance over time as they are exposed to more information.
It’s the difference between telling a computer exactly what to do – if someone asks how to make a cake, you program the AI to give them the BBC’s Victoria sponge recipe (it’s great, by the way) – and giving it the ability to figure things out for itself – an AI-MLM wouldn’t just repeat a fixed recipe; it would notice you’re gluten-free, love bright colours, or prefer less sugar, and adjust the recipe based on everything it has learned from you and others.
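To make that difference concrete, here’s a minimal sketch in Python (using scikit-learn, with entirely made-up users and recipes – purely for illustration):

```python
# A toy illustration (made-up data): explicit programming vs. a learned model.
from sklearn.tree import DecisionTreeClassifier

# 1) Explicitly programmed: the answer is fixed, whoever asks.
def fixed_recipe(user_features):
    return "Classic Victoria sponge: flour, butter, sugar, eggs..."

# 2) Learned: the model infers a preference from past examples.
# Features per user: [is_gluten_free, likes_bright_colours, prefers_less_sugar]
past_users = [
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
]
# The recipe variant each of those users actually enjoyed.
chosen_variant = [
    "gluten-free sponge",
    "rainbow sponge, less sugar",
    "classic Victoria sponge",
    "gluten-free rainbow sponge, less sugar",
]

model = DecisionTreeClassifier().fit(past_users, chosen_variant)

new_user = [[1, 1, 0]]  # gluten-free, loves bright colours, normal sugar
print(fixed_recipe(new_user))      # always the same answer
print(model.predict(new_user)[0])  # a prediction shaped by past data
```

The point isn’t the (very silly) model – it’s that the second approach gets its behaviour from data rather than from rules we wrote by hand, so its answers change as the data does.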
Generative AI goes one step further – it not only learns from input data, but can then create new content such as text, images, audio or code in response to user prompts. ChatGPT is a key example.
Uses of AI
AI has become an incredibly useful, and arguably ubiquitous (depending on where you live), tool in our modern society. Its uses range from analysing aerial imagery to diagnose crop disease on farms, to creating (arguably weird) artwork – the potential is limitless. In a world full of complex, multi-faceted, multi-geographical problems, many are turning to AI not just for efficiency, but for possibility – a way to see patterns we’ve missed, spark new ideas, and support innovative solutions to the problems we share.
I love this example published in Nature Medicine – researchers developed an AI model that can flag people at high risk of developing pancreatic cancer up to 3 years before diagnosis, using only their medical records.
The AI was more accurate than general-population risk estimates and about as accurate as some genetic tests (which are typically only offered to small, high-risk groups). Because pancreatic cancer is often diagnosed late and has very poor survival rates, early detection could make a big difference!

Credit: Placido et al. 2023
Emerging uses range from fossil data processing to enhanced fraud detection based on transaction patterns.
AI as an Extension of Westernised Systems?
The history of AI is bound up with the parallel development of Western societies and institutions, from the foundational work of British mathematician Alan Turing in the 1930s to the development of ChatGPT by the California-based company OpenAI in 2022.
In the mid‑20th century, the field of AI was formalised in Western academia: Turing’s 1950 paper “Computing Machinery and Intelligence” laid out the philosophical and technical groundwork, and the Dartmouth Workshop (1956) – held in the US – coined the term “Artificial Intelligence.” During the 1950s–70s, much of the pioneering research (e.g. expert systems, LISP) emerged from Western institutions, especially in the US and the UK.
Although there were contributions from other parts of the world (for instance, Japan in the 1960s and 1980s), the major AI booms and winters remained concentrated in the West. By the 2010s, as “big data” and deep learning took off, the infrastructure driving modern AI – cloud computing, large-scale neural architectures – was again largely developed and financed by Western tech giants in the US. Even recent breakthroughs, like the transformer architecture (2017) and large language models (e.g., GPT-3 in 2020), were pioneered by Western labs and companies.
Without a doubt, these Western institutions are shaped by the data, culture, values, and power of the societies that build them – and ultimately, so is their AI. Indeed, much of the data that trains AI models is scraped from the internet or comes from Western countries that have the infrastructure to document, store and maintain comprehensive data – both of which carry primarily Western cultural perspectives. English remains the dominant linguistic force in AI, with less than 1% of AI training data coming from African or Southeast Asian languages.
This means that AI systems are often built on datasets that disproportionately reflect Western norms and experiences, while underrepresenting the realities, knowledge systems, and digital footprints of the rest of the world.

Credit: Harvard University
AI Biases
The biases that can exist within such technology can inadvertently lead to unfair and potentially damaging outcomes – even reinforcing systemic racism, sexism, capitalism, extractive tech culture, colonial power dynamics, or surveillance.
Some recent examples highlight this challenge:
- Data Bias: Bias originating from the training data, often due to unrepresentative or skewed datasets.
- In the above example where AI was developed to detect pancreatic cancer, only patient datasets from the US and Denmark were used to train the AI model – meaning it is not representative of global populations.
- Confirmation Bias: Bias occurring when an AI system is overly reliant on pre-existing patterns in the data, which can reinforce historical prejudices.
- A great example of this is Amazon’s recruiting tool that was abandoned because it learned to penalise resumes containing the word “women’s” and graduates from all-women colleges. This occurred because the AI model was trained on historical hiring data from a male-dominated tech industry, leading to skewed outcomes where the model unfairly down-ranked female candidates.
- Stereotyping Bias: Bias from AI systems reinforcing stereotypes.
- For example, a researcher inputted phrases such as “Black African doctors caring for white suffering children” into an AI program meant to create photo-realistic images. The aim was to challenge the “white savior” stereotype of helping African children. However, the AI consistently portrayed the children as Black, and in 22 out of more than 350 images, the doctors appeared white.
- Homogeneity Bias: Where an AI system generalises across individuals from certain groups, treating them as more similar than they actually are.
- For example, studies (e.g., by MIT’s Joy Buolamwini) have found that commercial facial-recognition systems have much higher error rates for darker-skinned individuals, especially darker-skinned women, compared to lighter-skinned men. The model treats non-white individuals as more visually similar to each other than they actually are.
- Measurement Bias: When the data collected differs from the true variables of interest.
- For example, an AI model used across the US healthcare system failed to recommend advanced care to Black patients, presuming lower healthcare spending equated to lesser health needs. The data (healthcare spending) systematically differed from true variables (actual health risk).
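To see how measurement bias plays out mechanically, here’s a small synthetic sketch (the numbers are invented, not taken from the healthcare study above): two groups have identical true health needs, but one group’s recorded spending is systematically lower, so a system that ranks people by spending quietly under-prioritises that group.

```python
# Synthetic illustration of measurement bias: the proxy we measure
# (healthcare spending) differs systematically from the variable we
# actually care about (true health need).
import numpy as np

rng = np.random.default_rng(0)
n = 1000

group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B
true_need = rng.normal(5.0, 1.0, size=n)  # identical need in both groups

# Group B spends less for the same need (e.g. poorer access to care),
# so spending is a biased measurement of need.
spending = true_need - 2.0 * group + rng.normal(0.0, 0.5, size=n)

# A system that flags the top 25% of spenders for extra care will
# under-select group B, despite equal underlying need.
flagged = spending > np.quantile(spending, 0.75)
for g, name in [(0, "Group A"), (1, "Group B")]:
    mask = group == g
    print(f"{name}: mean true need = {true_need[mask].mean():.2f}, "
          f"share flagged for extra care = {flagged[mask].mean():.1%}")
```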
AI as a Systems Maker
There’s an interesting concept in economics called the knowledge economy (bear with me here…) – César Hidalgo has a fantastic book on this if you want to read more.
The knowledge economy refers to an economic system where ideas, expertise, and information (“knowledge”) – rather than physical labour or raw materials – are the primary drivers of value.
In this model, intellectual capital, research, education, and technological innovation become the main drivers of growth, supported by knowledge infrastructure (universities, digital networks, research institutions, tech companies). As a result, economic power increasingly flows to those who can produce and control knowledge, rather than those who control physical resources.
AI now sits directly inside this system – and is reshaping its dynamics. Recent research suggests that AI acts as a powerful amplifier, potentially multiplying the capabilities of people and institutions who already hold the most knowledge. Those with expertise, resources, and advanced technical capacity can use AI to innovate faster and extend their influence. Those with less knowledge or weaker infrastructure benefit far less. As AI becomes more capable, the gap between these groups widens. In other words: AI doesn’t automatically democratise knowledge; it can just as easily concentrate it.
A provocative question emerges when we look at the global picture: what happens to global knowledge when the world’s most powerful AI systems are overwhelmingly based in the West, and trained primarily on Western data?
Western countries already thrive in the knowledge economy model, particularly those in the Organisation for Economic Co-operation and Development. In a 2024 ranking based on the Global Knowledge Index, which measures factors like education, innovation, and ICT, the top-ranked knowledge economies were all Western or highly developed Asian nations, including Sweden, Finland, Switzerland, Denmark, the Netherlands, and the US.
When you throw AI into the mix, a feedback loop emerges: Western institutions and knowledge shape AI, and AI then reinforces Western ways of knowing. Combine this with data colonialism – where data created in the Global Majority (often referred to as the Global South) flows into Western infrastructures, often without proportional benefit returning to the communities that generated it – and a clear picture emerges:
We are at risk of building a Western-centric, global knowledge economy that reproduces colonial logics.
Rather than enabling diverse forms of intelligence, AI could solidify a system in which cognitive power moves upward into a handful of Western tech hubs, and outward through homogenised global knowledge.
Could AI Create a Better System?
Yes, if we’re intentional.
Voices across the world are pushing back. Scholars, activists, and technologists are demanding data sovereignty, ethical frameworks rooted in local values, and AI systems that actually reflect the diversity of human knowledge.
- AI Assistants like JusticeAI and the Decolonial Intelligence Algorithmic (DIA) Framework™ support trauma-informed, decolonial AI interactions.
- Open models like LLaMA allow researchers from under-resourced regions to fine-tune and deploy powerful AI without starting from scratch.
- Open datasets in local languages, histories, and cultural contexts can help decolonise AI knowledge – e.g. Masakhane (for African NLP) and BLOOM (a multilingual open model) are promising starts.
- Open-source infrastructure tools like Hugging Face’s Transformers and collaborative platforms like Papers with Code lower the barrier to entry.
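To give a rough sense of how low that barrier now is, here’s a minimal sketch using Hugging Face’s transformers library to generate text with a small open multilingual model (the checkpoint and the Swahili prompt are illustrative choices only, and the weights download on first run):

```python
# Minimal sketch: generating text with a small open multilingual model
# via Hugging Face's transformers. Assumes `pip install transformers torch`;
# "bigscience/bloom-560m" is used purely as an illustrative checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

# BLOOM was trained on dozens of languages, so prompts need not be in English.
prompt = "Teknolojia ya akili bandia inaweza"  # Swahili, roughly: "AI technology can..."
result = generator(prompt, max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```

The same few lines extend to models published by community projects such as Masakhane – which is part of why open weights and open tooling matter so much for researchers outside well-funded Western labs.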
I believe an equitable, inclusive system is possible. The expansion of AI has shown us that, if there is enough incentive and resources, entire organisations, industries and systems can rapidly restructure.
Let’s incentivise and support our knowledge economies to redistribute power. Rather than letting AI become yet another tool of digital imperialism, we need to support models, regulation, and infrastructure that centre co-creation, cultural plurality, and local agency.
Some food for thought
- What systems, ideas or even people resist simulation?
- If AI creates new systems, or alters existing ones, who’s responsible for the consequences?
- How do we understand how not just our knowledge – but our idea of what counts as knowledge (our epistemologies) – changes due to AI?
- Could AI identify its own feedback loops, leverage points of change, and interventions to make itself more equitable? Or would these interventions still bias towards the ideas of equity from the data it’s trained on?