Artificial intelligence is no longer a futuristic concept — it’s embedded in our daily lives. But who decides how these systems work, and at what cost?
In Code Dependent: Living in the Shadow of AI, Madhumita Murgia, the Financial Times’ first Artificial Intelligence Editor, examines those questions through human stories that span continents.
Murgia draws on years of reporting about technology’s social impact, from data workers in Nairobi to healthcare algorithms in India.
With a background in biology and immunology, she approaches AI with both scientific rigor and a journalist’s instinct for storytelling. Her work moves beyond hype to reveal what it really means to live in AI’s shadow.
Below are five key takeaways that make Code Dependent an important and unsettling read.
1. The invisible labour behind AI
AI may look automated, but it’s built on vast amounts of human effort. Murgia spotlights the hidden workforce that powers machine learning. This includes people labelling images, moderating toxic content, and feeding the data that allows AI systems to “think.”
This is the “human-in-the-loop” model, and Murgia shows how exploitative it can be.
Many of these workers operate in developing countries under exhausting conditions for low pay. Murgia describes how this invisible labour forms the “foundation of artificial intelligence,” yet remains largely unseen and unprotected.
Her message is simple: Behind every polished algorithm lies a network of human hands. When we celebrate AI’s sophistication, we should also acknowledge the people who enable it, often at significant personal cost.
2. Algorithms quietly shape human choices
One of Murgia’s most striking arguments is that AI doesn’t just predict our behaviour — it can quietly influence it. Systems that decide who gets hired, who qualifies for a loan, or who receives medical care often reduce people to data points.
When predictions are built on biased or incomplete data, they can trap people in feedback loops that reinforce inequality. A low credit score, for instance, might limit someone’s financial options — confirming the algorithm’s prediction and shrinking their opportunities further.
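To make that loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the threshold, penalty, and starting scores are invented for illustration and do not come from Murgia’s book. It simply shows how a scoring rule that both gates access and reacts to denials can turn a small initial gap into a widening one.

```python
# Purely illustrative toy model of a credit-scoring feedback loop.
# The threshold, penalty, and recovery values are hypothetical,
# chosen only to show how a prediction can reinforce itself.

THRESHOLD = 600   # applications below this score are denied
PENALTY = 25      # each denial further lowers the score
RECOVERY = 10     # each approval slowly improves the score

def simulate(score: int, rounds: int = 5) -> list[int]:
    """Track one applicant's score over repeated decisions."""
    history = [score]
    for _ in range(rounds):
        if score < THRESHOLD:
            score -= PENALTY   # denied: fewer options, score drops
        else:
            score += RECOVERY  # approved: access builds credit
        history.append(score)
    return history

# Two applicants separated by a small initial gap diverge steadily.
print(simulate(590))  # [590, 565, 540, 515, 490, 465]: spirals down
print(simulate(610))  # [610, 620, 630, 640, 650, 660]: compounds up
```

The numbers don’t matter; the shape does. Once a prediction influences the very outcome it is supposed to predict, the loop closes and the bias compounds.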
As Murgia explains, these technologies are not neutral. They “codify human bias at scale,” subtly narrowing our choices under the guise of efficiency.
Resisting that pull means staying self-aware and deliberate about which choices we hand over to automated systems.
3. Biases in AI can amplify inequality
AI’s global reach does not mean its benefits are shared equally. Murgia documents how bias in design and data deepens social divides, particularly for women, migrants, and marginalized communities.
In her reporting for the Financial Times, she has shown how algorithmic decisions — from facial recognition to welfare systems — can reinforce systemic inequities rather than solve them. When technology reflects the blind spots of its creators, the result is discrimination at digital speed.
By tracing stories across continents, Murgia reveals how these systems often extract value from the Global South while concentrating power and profit in the Global North. This dynamic is also known as “data colonialism.”
4. Seamless intelligence is a myth
Silicon Valley often promotes AI as a flawless, omniscient force. Murgia dismantles that myth. Her reporting shows that many systems marketed as “AI” are fragile, inconsistent, or dependent on human intervention.
She exposes the messy reality behind automation: biased data, inconsistent performance, and “human-in-the-loop” labour that companies rarely acknowledge. The gap between the marketing narrative and everyday reality, she argues, is where public trust can erode.
Murgia reminds readers that “smart” technology is only as good as the data and ethics behind it. Blind faith in AI doesn’t make systems more accurate. It just makes us less critical.
Ethics in AI matter tremendously. To fully appreciate that, we need to recognize the exploitative side of AI’s development, hidden so effectively behind a digital smokescreen.
5. Reclaiming human agency
Despite its warnings, Code Dependent isn’t pessimistic. Murgia highlights resistance to the idea that technological dominance is inevitable.
Content moderators are demanding fair pay, researchers are exposing algorithmic bias, and healthcare workers are challenging the discriminatory outcomes AI systems can produce.
Her reporting points to small but powerful acts of resistance: journalists investigating opaque systems, designers embedding ethics into products, and everyday users questioning the fairness of automated decisions.
She calls for a culture of transparency and accountability, where citizens, not just corporations, shape how AI is used. In her own words, “We can’t opt out of AI, but we can demand better from it.”
A note on criticism
Some reviewers, including the LSE Review of Books, have noted that Code Dependent relies heavily on personal storytelling rather than systemic analysis. The critique is fair: the book trades policy precision for empathy and anecdotal depth.
Yet that intimacy is also a strength. By grounding global issues in personal narratives, Murgia makes AI’s impact tangible, turning a technical debate into a human one.
AI is powered by people
Code Dependent isn’t really a book about algorithms and code. It’s a story about the people who power AI, often at their own expense, and it amplifies experiences that would otherwise stay hidden.
Murgia reminds us that every line of code reflects human decisions, values, and biases.
For anyone navigating an increasingly digital landscape, whether as a business leader, artist, or everyday citizen, her work is a call to stay engaged and keep asking questions.
AI may be complex, but that doesn’t mean our relationship with it has to be. Awareness, transparency, and accountability remain profoundly human tools.