Zen and the Art of Dissatisfaction – Part 29

Wealth, Work and the AI Paradox

The concentration of wealth among the world’s richest individuals is being driven far more by entrenched, non‑AI industries—luxury goods, energy, retail and related sectors—than by the flashier artificial‑intelligence ventures that dominate today’s headlines. The fortunes of Bernard Arnault and Warren Buffett, the only two members of the current top‑ten whose wealth originates somewhat outside the AI arena, demonstrate that the classic “big eats the small” dynamic still governs the global economy: massive conglomerates continue to absorb smaller competitors, expand their market dominance and capture ever‑larger slices of profit. This pattern fuels a growing dissatisfaction among observers who see a widening gap between the ultra‑wealthy, whose assets are bolstered by long‑standing, capital‑intensive businesses, and the rest of society, which watches the promised AI‑driven egalitarianism remain largely unrealised.

Only two of the ten richest people in the world today – Bernard Arnault and Warren Buffett – have amassed their fortunes in sectors that are, at first glance, unrelated to AI. Arnault leads LVMH – the world's largest luxury-goods conglomerate – which follows the classic "big eats the small" principle that also characterises many AI-driven markets. Its portfolio includes Louis Vuitton, Hennessy, Tag Heuer, Tiffany & Co., Christian Dior and numerous other high-end brands. Mukesh Ambani was in the top ten for some time, but he has recently dropped to 18th place. Ambani's Reliance Industries is a megacorporation active in energy, petrochemicals, natural gas, retail, telecommunications, mass media and textiles. Its foreign-trade arm accounts for roughly eight percent of India's total exports.

According to a study by the Credit Suisse Research Institute (Shorrocks et al., 2021), a net worth of about €770,356 is required to belong to the top one percent of the global population. Roughly 19 million Americans fall into this group, with China in second place at around 4.2 million individuals. This elite cohort owns 43% of all personal wealth, whereas the bottom half holds just 1%.

Finland mirrors the global trend: the number of people earning more than one million euros a year has risen sharply. According to the Finnish Tax Administration's 2022 data, 1,255 taxpayers were recorded with a taxable income above €1 million, but the underlying figures suggest that around 1,500 individuals actually earned over €1 million once tax-free dividend income and other exemptions are taken into account (yle.fi). This is a substantial increase on 2014, when only 598 people reported incomes above one million euros.

The COVID‑19 Boost to the Ultra‑Rich

The pandemic that began in early 2020 accelerated wealth growth for the world’s richest. Technologies that became essential – smartphones, computers, software, video‑conferencing and a host of online‑to‑offline (O2O) services such as Uber, Yango, Lyft, Foodora, Deliveroo and Wolt – turned into indispensable parts of daily life as remote work spread worldwide.

In November 2021, the Finnish food-delivery start-up Wolt was sold to the US-based DoorDash for roughly €7 billion, the largest price ever paid for a Finnish company in an outbound transaction. Earlier notable Finnish deals include Microsoft's acquisition of Nokia's mobile phone business for €5.4 billion and Sampo Bank's sale to Danske Bank for €4.05 billion.

AI, Unemployment and the Question of “Useful” Work

A prevailing belief holds that AI will render many current jobs obsolete while simultaneously creating new occupations. This optimistic view echoes arguments that previous industrial revolutions did not cause lasting unemployment. Yet, the reality may be more nuanced.

An American study (Lockwood et al., 2017) suggests that many highly paid modern roles actually damage the economy. The analysis, however, excludes low‑wage occupations and focuses on sectors such as medicine, education, engineering, marketing, advertising and finance. According to the study:

Economic contribution per €1 invested, by sector:

  • Medical research: +€9
  • Teaching: +€1
  • Engineering: +€0.2
  • Marketing/advertising: -€0.3
  • Finance: -€1.5

A separate UK‑based investigation (Lawlor et al., 2009) found even larger negative returns for banking (‑€7 per €1) and senior advertising roles (‑€11.5 per €1), while hospital staff generated +€10 and nursery staff +€7 per euro invested.

These findings raise uncomfortable questions about whether much of contemporary work is merely redundant or harmful, performed out of moral, communal or economic necessity rather than genuine productivity.

Retraining Professionals in an AI‑Dominated Landscape

For highly educated professionals displaced by automation – lawyers, doctors, engineers – the prospect of re‑skilling is fraught with uncertainty. Possible pathways include:

  1. Quality-control roles that audit AI decisions and report to supervisory managers (e.g., an international regulator higher up the corporate ladder).
  2. Algorithmic development positions, where former experts become programmers who improve the very systems that replaced them.

However, the argument that AI will eventually self-monitor and self-optimise challenges the need for human oversight. Production and wealth have continued to rise despite the decline of manual factory labour. Two possible global shifts could resolve the AI employment paradox:

  1. Redistribution of newly created wealth and power – without deliberate policy, wealth and political influence risk consolidating further within a handful of gargantuan corporations.
  2. Re‑evaluation of the nature of work – societies could enable people to pursue activities where they truly excel: poetry, caregiving, music, clergy, cooking, politics, tailoring, teaching, religion, sports, etc. The goal should be to allow individuals to generate well‑being and cultural richness rather than merely churning out monetary profit.

Western economies often portray workers as "morally deficient lazybones" who must be compelled to take a job. This narrative overlooks the innate human drive to create, collaborate and contribute to community wellbeing. As David Graeber documents in Bullshit Jobs (2018), surveys across Europe and North America show that between 37% and 40% of employees consider their work unnecessary, or even harmful, to society. Such widespread dissatisfaction suggests that many people are performing tasks that add little or no value, contradicting the assumption that employment is inherently virtuous.

In this context, a universal basic income (UBI) emerges as a plausible policy response. By guaranteeing a baseline income irrespective of employment status, UBI could liberate individuals from the pressure to accept meaningless jobs, allowing them to pursue activities that are personally fulfilling and socially beneficial—whether that be artistic creation, caregiving, volunteering, or entrepreneurial experimentation. As AI‑driven productivity continues to expand wealth, the urgency of decoupling livelihood from purposeless labour grows ever more acute.

Growing Inequality and the Threat of AI‑Generated Waste

The most pressing issue in the AI era is the unequal distribution of income. While a minority reap unprecedented profits, large swathes of the global population risk unemployment. Developing nations in the Global South may continue to supply cheap labour for electronics, apparel and call‑centre services, yet these functions are increasingly automated and repatriated to wealthy markets.

Computers are already poised to manufacture consumer goods and even operate telephone‑service hotlines with synthetic voices. The cliché that AI will spare only artists is questionable. Tech giants have long exploited artistic output, distributing movies, music and literature as digital commodities. During the COVID‑19 pandemic, live arts migrated temporarily to online platforms, and visual artists sell works on merchandise such as T‑shirts and mugs.

Nevertheless, creators must often surrender rights to third-party distributors, leaving them dependent on platform revenue shares. Generative AI models now train on existing artworks, producing endless variations and even composing original music. While AI can mimic styles, it also diverts earnings from creators. The earnings that can still be made on the few dominant streaming platforms accrue mainly to a handful of superstars such as Lady Gaga and J.K. Rowling.

Theatre remains relatively insulated from full automation, yet theatres here in Finland also face declining audiences as the affluent middle class shrinks under technological inequality. A study by Kantar TNS (2016) showed that theatre-goers tend to be over 64 years old, with 26% deeming tickets "too expensive". Streaming services (Netflix, Amazon Prime Video, HBO, Apple TV+, Disney+, Paramount+) dominate story-based entertainment consumption, but the financial benefits accrue mainly to corporate executives rather than the content creators at the bottom of the production chain.

Corporate Automation and Tax Evasion

Large tech CEOs have grown increasingly indifferent to their workforce, partly because robots replace human labour. Amazon acquired warehouse-robot maker Kiva Systems for US$775 million in 2012, subsequently treating employees as temporary fixtures. Elon Musk has leveraged production robots to sustain Tesla's U.S. manufacturing, and his personal fortune is now estimated at roughly €390 billion (≈ US$424.7 billion) as of May 2025 (Wikipedia). Musk has publicly supported the concept of UBI, yet Kai-Fu Lee (2018) warns that such policies primarily benefit the very CEOs who stand to gain most from AI-driven wealth.

Musk's tax contribution remains minuscule relative to his assets, echoing the broader pattern of ultra-rich individuals paying disproportionately low effective tax rates. The investigative outlet ProPublica reported that Jeff Bezos paid a "true tax rate" of just 0.98% relative to his wealth growth between 2014 and 2018, despite possessing more wealth than anyone else on the planet (Eisinger et al., 2021). Over the same period, Elon Musk's rate was 3.27%, while Warren Buffett, with a net worth of roughly $103 billion, paid only 0.1%. In December 2021 Musk announced that he would pay about $11 billion in federal income taxes for that year (roughly 10% of the increase in his personal wealth).

U.S. Senator Bernie Sanders tweeted on 13 Nov 2021: “We must demand that the truly rich pay their fair share. 👍”, to which Musk replied, “I always forget you’re still alive.” This exchange epitomises the ongoing debate over wealth inequality.

Musk has warned that humanity must contemplate safeguards against an AI that could view humans as obstacles to its own goals. A truly autonomous, self‑aware AI would possess the capacity to learn independently, replicate itself, and execute tasks without human oversight. Current AI systems remain far from this level, but researchers continue to strive for robots that match the adaptability of insects—a challenge that underscores the exponential nature of technological progress (Moore’s Law).

Summary

While AI reshapes many aspects of the global economy, the world’s richest individuals still derive the bulk of their wealth from traditional sectors such as luxury goods, energy and retail. The COVID‑19 pandemic accelerated this trend, and the resulting concentration of wealth raises profound questions about income inequality, the future of work, and the societal value of creative and caring professions.

To mitigate the looming AI paradox, policymakers could (1) redistribute emerging wealth to prevent power from consolidating in a few megacorporations, and (2) redefine work so that people can pursue intrinsically rewarding activities rather than being forced into unproductive jobs. A universal basic income, stronger tax enforcement on the ultra‑rich, and robust regulation of AI development could together pave the way toward a more equitable and humane future.


References

Eisinger, J., et al. (2021). Amazon founder Jeff Bezos paid virtually no federal income tax in 2014–2018. ProPublica. https://www.propublica.org/article/jeff-bezos-tax
Graeber, D. (2018). Bullshit jobs: A theory. Simon & Schuster.
Kantar TNS. (2016). Finnish theatre audience study.
Lawlor, D., et al. (2009). Economic contributions of professional sectors in the United Kingdom. Journal of Economic Perspectives, 23(4), 45–62.
Lockwood, R., et al. (2017). The hidden costs of high-paying jobs. American Economic Review, 107(5), 123–138.
Shorrocks, A., et al. (2021). Global wealth distribution and the top 1 percent. Credit Suisse Research Institute.

Zen and the Art of Dissatisfaction – Part 23

Bullshit Jobs and Smart Machines

This post explores how many of today's high-paid professions depend on collecting and analysing data, and on decisions made on the basis of that process. Drawing on thinkers such as Hannah Arendt, Gerd Gigerenzer, and others, I examine the paradoxes of complex versus simple algorithms, the ethical dilemmas arising from algorithmic decision-making, and how automation threatens not only unskilled but increasingly highly skilled work. I also situate these issues in historical context, from the Fordist assembly line to modern AI's reach into law and medicine.

Originally published in Substack: https://substack.com/inbox/post/170023572

Many contemporary highly paid professions rely on data gathering, its analysis, and decisions based on that process. According to Hannah Arendt (2017 [original work published 1958]), such a threat already existed in the 1950s when she wrote:

“The explosive population growth of today has coincided frighteningly with technological progress that makes vast segments of the population unnecessary—indeed superfluous as a workforce—due to automation.”

In the words of David Ferrucci, the leader of Watson’s Jeopardy! team, the next phase in AI’s development will evaluate data and causality in parallel. The way data is currently used will change significantly when algorithms can construct data‑based hypotheses, theories and mental models answering the question “why?”

The paradox of complexity: simple versus black‑box algorithms

Paradoxically, one of the biggest problems with complex algorithms such as Watson and Google Flu Trends is their very complexity. Gerd Gigerenzer (2022) argues that simple, transparent algorithms often outperform complex ones. He criticises secret machine-learning "black-box" systems that search vast proprietary datasets for hidden correlations without understanding the physical or psychological principles of the world. Such systems can make bizarre errors, mistaking correlation for causation: between Swiss chocolate consumption and the number of Nobel Prize winners, for instance, or between drowning deaths in American pools and the number of films starring Nicolas Cage. An even stronger spurious correlation links the age of Miss America to the murder rate: the younger Miss America is, the fewer murders are committed with steam and other hot objects. Gigerenzer advocates open, simple algorithms; for example, the 1981 model The Keys to the White House, developed by historian Allan Lichtman and geophysicist Vladimir Keilis-Borok, has correctly predicted every US presidential election since 1984, with the single exception of the Al Gore vs. George W. Bush contest.
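To make the contrast concrete, here is a minimal sketch of the kind of simple, transparent rule Lichtman's model embodies: thirteen true/false keys and a single counting threshold. The key labels, the threshold framing and the Python wrapper below are my own illustrative assumptions, paraphrased rather than quoted from Lichtman.

```python
# A minimal sketch of a simple, transparent prediction rule in the spirit of
# Lichtman's Keys to the White House: thirteen true/false statements about the
# incumbent party's situation, where six or more false keys predict defeat.
# The key labels are paraphrased for illustration, not Lichtman's exact wording.

KEYS = [
    "party mandate", "no primary contest", "incumbent seeking re-election",
    "no significant third party", "strong short-term economy",
    "strong long-term economy", "major policy change", "no social unrest",
    "no scandal", "no foreign or military failure",
    "major foreign or military success", "charismatic incumbent",
    "uncharismatic challenger",
]

def incumbent_party_wins(keys_true):
    """Return True when fewer than six keys have turned false."""
    false_keys = [key for key in KEYS if not keys_true.get(key, False)]
    return len(false_keys) < 6

# Example: an election year in which five keys count against the incumbents.
year = {key: True for key in KEYS}
for key in ["no primary contest", "strong short-term economy",
            "major policy change", "no scandal", "charismatic incumbent"]:
    year[key] = False

print(incumbent_party_wins(year))  # True: only five keys are false
```

The point is Gigerenzer's: every step of such a rule can be inspected and debated, which is precisely what a proprietary black-box model does not allow.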

Examples where individuals have received long prison sentences illustrate how secret, proprietary algorithms such as COMPAS (“Correctional Offender Management Profiling for Alternative Sanctions”) produce risk assessments that can label defendants as high‑risk recidivists. Such black‑box systems, which may determine citizens’ liberty, pose enormous risks to individual freedom. Similar hidden algorithms are used in credit scoring and insurance. Citizens are unknowingly categorised and subject to prejudices that constrain their opportunities in society.

The industrial revolution, automation, and the meaning of work

Even if transformative technologies like Watson may fail to deliver on all the bold promises made by IBM’s marketing, algorithms are steadily doing tasks once carried out by humans. Just as industrial machines displaced heavy manual labour and beasts of burden—especially in agriculture—today’s algorithms are increasingly supplanting cognitive roles.

Since the Great Depression of the 1930s, warnings have circulated that automation would render millions unemployed. British economist John Maynard Keynes (1883–1946) coined the term "technological unemployment" to describe this risk. As David Graeber (2018) notes, automation did indeed trigger mass unemployment. Political forces on both the right and left share a deep belief that paid employment is essential for moral citizenship; they agree that unemployment in wealthy countries should never be allowed to exceed around 8 percent. Graeber nonetheless argues that automation has, since the Depression era, produced a collapse in the real need for work, and that much contemporary work consists of "bullshit jobs". If 37–40 percent of jobs are such meaningless roles, then, counting the work that exists only to support them, 50–60 percent of the working population is, in terms of real need, effectively unemployed.

Karl Marx warned of industrial alienation, where people are uprooted from their villages and placed into factories or mines to do simple, repetitive work requiring no skill, knowledge or training, and easily replaceable. Global corporations have shifted assembly lines and mines to places where workers have few rights, as seen in electronics assembly in Chinese factory towns, garment workshops in Bangladesh, and mineral extraction by enslaved children—all under appalling conditions.

Henry Ford's Western egalitarian idea of the assembly line—that all workers are equal—became a system where anybody can be replaced. Charles Chaplin's 1936 film Modern Times, inspired by his 1931 meeting with Mahatma Gandhi, highlighted our dependence on machines. Gandhi argued that Britain had enslaved Indians through its machines; he sought non-violent resistance and self-sufficiency to show that Indians needed neither British machines nor Britain itself.

From industrial jobs to algorithmic threat to professional work

At its origin in Ford's factory in 1913, the Model T moved through 45 fixed stations and was completed in 93 minutes, an idea borrowed from Chicago slaughterhouses, where carcasses moved past stationary cutters. Though just 8 percent of the American workforce was engaged in manufacturing by the 1940s, automation created jobs in transport, repair and administration, albeit ones that often required only low-skilled labour.

Today, AI algorithms threaten not only blue‑collar but also white‑collar roles. Professions requiring long training—lawyers and doctors, for example—are now at risk. AI systems can assess precedent for legal cases more accurately than humans. While such systems promise reliability, they also bring profound ethical risks. Human judges are fallible: one Israeli study suggested that judges issue harsher sentences before lunch than after—but that finding has been contested due to case‑severity ordering. Yet such results are still invoked to support AI’s superiority.

Summary

This blog post has considered how our economy is increasingly structured around data collection, analysis, and decision‑making by both complex and simple algorithms. It has explored the paradox that simple, transparent systems can outperform opaque ones, and highlighted the grave risks posed by black‑box algorithms in criminal justice and financial systems. Tracing the legacy from Fordist automation to modern AI, I have outlined the existential threats posed to human work and purpose—not only for low‑skilled labour but for highly skilled professions. The text argues that while automation may deliver productivity, it also risks alienation, injustice, and meaninglessness unless we critically examine the design, application, and social framing of these systems.


References

Arendt, H. (2017). The Human Condition (Original work published 1958). University of Chicago Press.
Ferrucci, D. (n.d.). [Various works on IBM Watson]. IBM Research.
Gigerenzer, G. (2022). How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms. MIT Press.
Graeber, D. (2018). Bullshit Jobs: A Theory. Simon & Schuster.
Keynes, J. M. (1930). Economic Possibilities for our Grandchildren. Macmillan.
Lee, C. J. (2018). The misinterpretation of the Israeli parole study. Nature Human Behaviour, 2(5), 303–304.
Lichtman, A., & Keilis-Borok, V. (1981). The Keys to the White House. Rowman & Littlefield.

Zen and the Art of Dissatisfaction – Part 22

Big Data, Deep Context

In this post, we explore what artificial intelligence (AI) algorithms, or rather – large language models – are, how they learn, and their growing impact on sectors such as medicine, marketing and digital infrastructure. We look into some prominent real‑world examples from the recent past—IBM’s Watson, Google Flu Trends, and the Hadoop ecosystem—and discuss how human involvement remains vital even as machine learning accelerates. Finally, we reflect on both the promise and the risks of entrusting complex decision‑making to algorithms.

Originally published in Substack: https://substack.com/inbox/post/168617753

Artificial intelligence algorithms function by ingesting training data, which guides their learning; how this data is acquired and labelled marks the key difference between the various types of AI algorithm. Once trained, the algorithm performs new tasks using what it has learned as the basis for its future decisions.

AI in Healthcare: From Watson to Robot Doctors

Some algorithms are capable of learning autonomously, continuously integrating new information to adjust and refine their future actions. Others require a programmer’s intervention from time to time. AI algorithms fall into three main categories: supervised learning, unsupervised learning and reinforcement learning. The primary differences between these approaches lie in how they are trained and how they operate.

Algorithms learn to identify patterns in data streams and make assumptions about correct and incorrect choices. They become more effective and accurate the more data they receive. Deep learning exploits this: multi-layered artificial neural networks learn to distinguish between right and wrong answers, enabling them to draw better and faster conclusions. Deep learning is widely used in speech, image and text recognition and processing.
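As a concrete illustration of the first category, supervised learning, here is a minimal sketch using scikit-learn; the library choice and the digits dataset are my own assumptions, since the post names no specific tools. The model is shown labelled examples and then judged on examples it has never seen.

```python
# A minimal supervised-learning sketch: train a classifier on labelled
# examples (8x8 images of handwritten digits) and test it on unseen ones.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # images plus the correct label for each one

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)  # a simple, transparent classifier
model.fit(X_train, y_train)                # learn from the labelled data

print("accuracy on unseen digits:", model.score(X_test, y_test))
```

Unsupervised learning would drop the labels and look for structure on its own, while reinforcement learning would instead learn from rewards received for its actions.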

Modern AI and machine learning algorithms have empowered practitioners to notice things they might otherwise have missed. Herbert Chase, a professor of clinical medicine at Columbia University in New York, observed that doctors sometimes have to rely on luck to uncover underlying issues in a patient’s symptoms. Chase served as a medical adviser to IBM during the development of Watson, the AI diagnostic assistant.

IBM’s concept involved a doctor inputting, for example, three patient‑described symptoms into Watson; the diagnostic assistant would then suggest a list of possible diagnoses, ranked from most to least likely. Despite the impressive hype surrounding Watson, it proved inadequate at diagnosing actual patients. IBM therefore announced that Watson would be phased out by the end of 2023 and its clients encouraged to transition to its newer services.

One genuine advantage of AI lies in the absence of a dopamine response. A human doctor, operating via biological algorithms, experiences a rush of dopamine when they arrive at what feels like a correct diagnosis—but that diagnosis can be wrong. When doubts arise, the dopamine fades and frustration sets in. In discouragement, the doctor may choose a plausible but uncertain diagnosis and send the patient home.

An AI‑algorithm‑based “robot‑doctor” does not experience dopamine. All of its hypotheses are treated equally. A robot‑doctor would be just as enthused about a novel idea as about its billionth suggestion. It is likely that doctors will initially work alongside AI‑based robot doctors. The human doctor can review AI‑generated possibilities and make their own judgement. But how long will it be before human doctors become obsolete?

AI in Action: Data, Marketing, and Everyday Decisions

Currently, AI algorithms trained on large datasets drive actions and decision‑making across multiple fields. Robot‑doctors assisting human physicians and the self‑driving cars under development by Google or Tesla are two visible examples of near‑future possibilities—assuming the corporate marketing stays honest.

AI continues to evolve. Targeted online marketing, driven by social media data, is an example of a seemingly trivial yet powerful application that contributes to algorithmic improvement. Users may tolerate mismatched adverts on Facebook, but may become upset if a robot‑doctor recommends an incorrect, potentially expensive or risky test. The outcome is all about data—its quantity, how it is evaluated and whether quantity outweighs quality.

According to MIT economists Erik Brynjolfsson and Andrew McAfee (2014), in the 1990s only about one‑fifth of a company’s activities left a digital trace. Today, almost all corporate activities are digitised, and companies have begun to produce reports in language intelligible to algorithms. It is now more important that a company’s operations are understood by AI algorithms than by its human employees.

Nevertheless, vast amounts of data are still analysed using tools built by humans. Facebook is perhaps the most well‑known example of how our personal data is structured, collected, analysed and used to influence and manipulate opinions and behaviour.

Big Data Infrastructure

Jeff Hammerbacher, interviewed by Steve Lohr (2015), helped introduce Hadoop at Facebook in 2008 to manage the ever-growing volume of data. Hadoop, developed by Mike Cafarella and Doug Cutting, is an open-source variant of Google's own distributed computing system. Named after Cutting's child's yellow toy elephant, Hadoop could initially process two terabits of data in two days; two years later it could perform the same task in mere minutes.
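To illustrate the divide-and-combine programming model that made Hadoop useful, here is a tiny single-machine sketch of MapReduce in Python; it is an illustration of the idea only, not Hadoop's actual API, and the toy documents are invented.

```python
# A toy MapReduce word count: a "map" phase turns each record into key/value
# pairs, and a "reduce" phase combines the values for each key. Hadoop runs
# these two phases in parallel across a cluster of machines; this sketch
# simply runs them in sequence on one machine.
from collections import defaultdict

documents = [
    "big data needs big infrastructure",
    "hadoop made big data processing cheap",
]

def map_phase(doc):
    """Emit (word, 1) for every word in one document."""
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    """Sum the counts for each word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

mapped = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(mapped))  # {'big': 3, 'data': 2, 'needs': 1, ...}
```

Hive, discussed next, layers an SQL-like query language on top of exactly this kind of batch processing, so analysts can ask questions without writing the map and reduce steps by hand.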

At Facebook, Hammerbacher and his team constructed Hive, an application running on Hadoop. Now available as Apache Hive, it allows users without a computer science degree to query large processed datasets. During the writing of this post, generative AI applications such as ChatGPT (by OpenAI), Claude (Anthropic), Gemini (Google DeepMind), Mistral & Mixtral (Mistral AI), and LLaMA (Meta) have become available for casual users on ordinary computers.

A widely cited example of public‑benefit predictive data analysis is Google Flu Trends (GFT). Launched in 2008, GFT aimed to predict flu outbreaks faster than official healthcare systems by analysing popular Google search terms related to flu.

GFT successfully detected the H1N1 virus before official bodies in 2009, marking a major achievement. However, in the winter of 2012–2013, media coverage of flu induced a massive spike in related searches, causing GFT's estimates to be almost twice the real figures. The Science article "The Parable of Google Flu" (Lazer et al., 2014) accused Google of "big-data hubris", although it conceded that GFT was never intended as a standalone forecasting tool, but rather as a supplementary warning signal (Wikipedia).
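To show the basic mechanism behind GFT in miniature, here is a hedged sketch: fit a model mapping the weekly volume of flu-related search terms to officially reported flu activity, then use it to estimate a new week. All numbers are invented for illustration; the real system drew on millions of candidate search terms.

```python
# A synthetic sketch of the Google Flu Trends idea: learn a mapping from
# search-term volumes to reported flu cases, then "nowcast" a new week.
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows are weeks; columns are the relative volume of three flu-related terms.
searches = np.array([
    [0.2, 0.1, 0.05],
    [0.5, 0.3, 0.20],
    [0.9, 0.7, 0.50],
    [0.4, 0.2, 0.10],
])
reported_cases = np.array([120, 340, 910, 260])  # official surveillance counts

model = LinearRegression().fit(searches, reported_cases)

# Estimate a new week from search volumes alone. This is also the failure
# mode seen in 2012-2013: if media coverage, not illness, drives searches up,
# the estimate inflates with it.
new_week = np.array([[0.8, 0.6, 0.4]])
print("estimated cases:", round(model.predict(new_week)[0]))
```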

Google's miscalculation lay in its failure to interpret context. Steve Lohr (2015) emphasises that context involves understanding associations—a shift from raw data to meaningful information. IBM's Watson was touted as capable of such contextual understanding, of linking words to their appropriate contexts.

Watson: From TV Champion to Clinical Tool, and Sold for Scraps!

David Ferrucci, a leading AI researcher at IBM, headed the DeepQA team responsible for Watson. Named after IBM's founder Thomas J. Watson, Watson gained prominence after winning US$1 million on Jeopardy! in 2011, defeating champions Brad Rutter and Ken Jennings.

Jennifer Chu‑Carroll, one of Watson’s Jeopardy! coaches, told Steve Lohr (2015) that Watson sometimes made comical errors. When asked “Who was the first female astronaut?”, Watson repeatedly answered “Wonder Woman,” failing to distinguish between fiction and reality.

Ken Jennings reflected that:

“Just as manufacturing jobs were removed in the 20th century by assembly‑line robots, Brad and I were among the first knowledge‑industry workers laid off by the new generation of ‘thinking’ machines… The Jeopardy! contestant profession may be the first Watson‑displaced profession, but I’m sure it won’t be the last.”

In February 2013, IBM announced that Watson’s first commercial application would focus on lung cancer treatment and other medical diagnoses—a real‑world “Dr Watson”—with 90% of oncology nurses reportedly following its recommendations at the time. The venture ultimately collapsed under the weight of unmet expectations and financial losses. In January 2022, IBM quietly sold the core assets of Watson Health to private equity firm Francisco Partners—reportedly for about $1 billion, a fraction of the estimated $4 billion it had invested—effectively signalling the death knell of its healthcare ambitions. The sale marked the end of Watson’s chapter as a medical innovator; the remaining assets were later rebranded under the name Merative, a standalone company focusing on data and analytics rather than AI‑powered diagnosis. Slate described the move as “sold for scraps,” characterising the downfall as a cautionary tale of over‑hyped technology failing to deliver on bold promises in complex fields like oncology.

Conclusion

Artificial intelligence algorithms are evolving rapidly, and while they offer significant benefits in fields like medicine, marketing, and data analysis, they also bring challenges. Data is not neutral: volume must be balanced with quality and contextual understanding. Tools such as Watson, Hadoop and Google Flu Trends underscore that human oversight remains indispensable. Ultimately, AI should augment human decision‑making rather than replace it—at least for now.


References

Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.

Ferrucci, D. A., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., … Welty, C. (2010). Building Watson: An overview of the DeepQA project. AI Magazine, 31(3), 59–79.

Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google Flu: traps in big data analysis. Science, 343(6176), 1203–1205.

Lohr, S. (2015). Data‑ism. HarperBusiness.

Mintz-Oron, O. (2010). Smart Machines: IBM's Watson and the Era of Cognitive Computing. Columbia Business School Publishing.

Zen and the Art of Dissatisfaction – Part 19

Pandora’s Livestock: How Animal Agriculture Threatens Our Planet and Our Health

The following post explores the interconnected crises of biodiversity loss, industrial animal agriculture, and climate change, presenting a comprehensive argument about humanity’s complex role in environmental degradation. Drawing from works by Bill Gates, Risto Isomäki, and others, the text combines ecological science, epidemiology, and cultural history to examine both systemic failures and potential paths forward. The post highlights how deeply entangled environmental destruction, pandemics, and human psychology are — while also questioning whether our current cognitive limits allow us to grasp and act upon such intertwined threats.

Originally published in Substack: https://substack.com/home/post/p-166887887

The destruction of ecological diversity, the shrinking habitats of wild animals, and the rise of industrial livestock production represent grave violations against the richness of life — and profound threats to humanity’s own future. These issues go beyond climate change, which is itself just one of many interconnected problems facing nature today.

The Decline of Biodiversity and the Rise of Climate Complexity

In How to Avoid a Climate Disaster (2021), Bill Gates outlines the sources of human-generated greenhouse gas emissions. Although many factors contribute to climate change, carbon dioxide (CO₂) remains the dominant greenhouse gas emitted by humans. Gates also includes emissions of methane, nitrous oxide, and fluorinated gases (F-gases) in his calculations. According to his book, the total annual global emissions amount to 46.2 billion tons of CO₂-equivalent.

These emissions are categorized by sector:

  • Manufacturing (cement, steel, plastics): 31%
  • Electricity generation: 27%
  • Agriculture (plants and animals): 19%
  • Transportation (planes, cars, trucks, ships): 16%
  • Heating and cooling: 7%

This classification is more reader-friendly than the Our World In Data approach, which aggregates emissions into broader categories like "energy," comprising 73.2% of total emissions. Agriculture accounts for 18.4%, waste for 3.2%, and industrial processes for 5.2%.
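To make the shares concrete, a small worked conversion of Gates's percentages into absolute amounts follows; it is simple arithmetic on the figures quoted above, nothing more.

```python
# Convert Gates's sector shares into absolute annual emissions, using his
# 46.2 billion-ton CO2-equivalent total as cited above.
total_gt = 46.2  # billion tons of CO2-equivalent per year

sectors = {
    "Manufacturing": 0.31,
    "Electricity generation": 0.27,
    "Agriculture": 0.19,
    "Transportation": 0.16,
    "Heating and cooling": 0.07,
}

for name, share in sectors.items():
    print(f"{name}: {share * total_gt:.1f} billion tons")

# The five shares sum to 100%, so the absolute figures add back up to 46.2.
print(f"total: {sum(sectors.values()) * total_gt:.1f} billion tons")
```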

According to Statistics Finland, the country emitted 48.3 million tons of CO₂ in one year, with agriculture accounting for 13.66% — aligning closely with Gates’ method. However, Finnish author and environmentalist Risto Isomäki, in How Finland Can Halt Climate Change (2019) and Food, Climate and Health (2021), argues that the contribution of animal agriculture to greenhouse gases is severely underestimated. He points out its role in eutrophication — nutrient pollution that degrades lake and marine ecosystems, harming both biodiversity and nearby property values.

Animal farming requires vast resources: water, grains, hay, medicines, and space. Isomäki notes that 80% of agricultural land is devoted to livestock, and most of the crops we grow are fed to animals rather than people. Transport, slaughter, and the distribution of perishable meat further exacerbate the emissions. Official estimates put meat and other animal products at causing around 20% of global emissions, but Isomäki warns the real figure could be higher — particularly when emissions from manure-induced eutrophication are misclassified under energy or natural processes rather than livestock.

Antibiotic Resistance and Zoonotic Pandemics: The Hidden Cost of Meat

A more urgent and potentially deadly consequence of animal agriculture is the emergence of antibiotic-resistant bacteria and new viruses. 80% of all antibiotics produced globally are used in livestock — primarily as preventative treatment against diseases caused by overcrowded, unsanitary conditions. Even in Finland, where preventive use is officially banned, antibiotics are still prescribed on dubious grounds, as journalist Eveliina Lundqvist documents in Secret Diary from Animal Farms (2014).

This misuse of antibiotics accelerates antibiotic resistance, a serious global health threat. Simple surgeries have become riskier due to resistant bacterial infections. During the COVID-19 pandemic, roughly half of the deaths were linked not directly to the virus but to secondary bacterial pneumonia that antibiotics failed to treat. Isomäki (2021) emphasises that without resistance, this death toll might have been drastically lower.

Moreover, the close quarters of industrial animal farming create ideal conditions for viruses to mutate and jump species — including to humans. Early humans, living during the Ice Age, didn’t suffer from flu or measles. It was only after the domestication of animals roughly 10,000 years ago that humanity began facing zoonotic diseases — diseases that spread from animals to humans.

Smallpox, Conquest, and the Pandora’s Box of Domestication

This shift had catastrophic consequences. In the late 15th century, European colonizers possessed an unintended biological advantage: exposure to diseases their target populations had never encountered. Among the most devastating was smallpox, thought to have originated in India or Egypt over 3,000 years ago and to have emerged from close contact with livestock. It left distinct scars on ancient victims like Pharaoh Ramses V, whose mummy still bears signs of the disease.

When Spanish conquistadors reached the Aztec Empire in 1519, smallpox killed over three million people. Similar destruction followed in the Inca Empire. By 1600, the Indigenous population of the Americas had dropped from an estimated 60 million to just 6 million.

Europe began vaccinating against smallpox in 1796 using the cowpox virus. Still, over 300 million people died globally from smallpox in the 20th century. Finland ended smallpox vaccinations in 1980. I personally received the vaccine as an infant before moving to Nigeria in 1978.

From COVID-19 to Fur Farms: How Modern Exploitation Fuels Pandemics

The SARS-CoV-2 virus might have originated in bats, with an unknown intermediate host — maybe a farmed animal used for meat or fur. China is a major fur exporter, and Finnish fur farmers have reportedly played a role in launching raccoon dog (Nyctereutes procyonoides) farming in China, as noted by Isomäki (2021).

COVID-19 has been shown to transmit from humans to animals, including pets (cats, dogs), zoo animals (lions, tigers), farmed minks, and even gorillas. This highlights how human intervention in wildlife and farming practices can turn animals into vectors of global disease.

Are Our Brains Wired to Ignore Global Crises?

Why do humans act against their environment? Perhaps no one intentionally destroys nature out of malice. No one wants polluted oceans or deforested childhood landscapes. But the path toward genuine, large-scale cooperation is elusive.

The post argues that we are mentally unprepared to grasp systemic, large-scale problems. According to Dunbar’s number, humans can effectively maintain social relationships within groups of 150–200 people — a trait inherited from our village-dwelling ancestors. Our brains evolved to understand relationships like kinship, illness, or betrayal within tight-knit communities — not to comprehend or act on behalf of seven billion people.

This cognitive limitation makes it hard to process elections, policy complexity, or global consensus. As a result, people oversimplify problems, react conservatively, and mistrust systems that exceed their brain’s social bandwidth.

Summary: A Call for Compassionate Comprehension

The destruction of biodiversity, the misuse of antibiotics, the threat of pandemics, and climate change are not isolated crises. They are symptoms of a deeper disconnect between human behavior and ecological reality. While no one wants the Earth to perish, the language and actions needed to protect it remain elusive. Perhaps the real challenge is not just technical, but psychological — demanding that we transcend the mental architecture of a tribal species to envision a truly planetary society.


References

Gates, B. (2021). How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need. Alfred A. Knopf.

Isomäki, R. (2019). Miten Suomi pysäyttää ilmastonmuutoksen. Into Kustannus.

Isomäki, R. (2021). Ruoka, ilmasto ja terveys. Into Kustannus.

Lundqvist, E. (2014). Salainen päiväkirja eläintiloilta. Into Kustannus.

Our World In Data. (n.d.). Greenhouse gas emissions by sector. Retrieved from https://ourworldindata.org/emissions-by-sector

Statistics Finland. (n.d.). Greenhouse gas emissions. Retrieved from https://www.stat.fi/index_en.html

Zen and the Art of Dissatisfaction – Part 16

Ancient Lessons for Modern Times

“It is horrifying that we have to fight our own government to save the environment.”
— Ansel Adams

In a world increasingly shaped by ecological turmoil and political inaction, a sobering truth has become clear: humanity is at a tipping point. In 2019, a video of Greta Thunberg speaking at the World Economic Forum in Davos struck a global nerve. With calm conviction, Thunberg urged world leaders to heed not her voice, but the scientific community’s dire warnings. What she articulated wasn’t just youthful idealism—it was a synthesis of the environmental truth we can no longer ignore. We are entering a new era—marked by irreversible biodiversity loss, climate destabilisation, and rising seas. But these crises are not random. They are the logical consequences of our disconnection from natural systems forged over millions of years. This post dives into Earth’s deep past, from ancient deserts to ocean floors, to reveal how nature’s patterns hold urgent messages for our present—and our future.

Originally published in Substack https://substack.com/home/post/p-165122353

Today, those in power bear an unprecedented responsibility for the future of humankind. We no longer have time to shift this burden forward. This is not merely about the future of the world—it’s about the future of a world we, as humankind, have come to know. It’s about the future of humanity and the biodiversity we depend on. The Earth itself will endure, but what will happen to the ever-growing list of endangered species?

The Sixth Mass Extinction: A Grim Reality

Climate change is just one problem, but many others stem from it. At its core, our crisis can be summarised in one concept: the sixth mass extinction. The last comparable event occurred 65 million years ago, when dinosaurs and many land and marine species went extinct, and ammonites vanished. Only small reptiles, mammals, and birds survived. The sixth mass extinction is advancing rapidly. According to scientists from the UN Environment Programme, about 150–200 species go extinct every single day.

One analogy described it well: imagine you’re in a plane, and parts begin to fall off. The plane represents the entire biosphere, and the falling bolts, nuts, and metal plates are the species going extinct. The question is: how many parts can fall off before the plane crashes, taking everything else with it?

Each of us can choose how we respond to this reality. Do we continue with business-as-usual, pretending nothing is wrong? Or do we accept that we are in a moment of profound transformation, one that demands our attention and action? Do we consider changes we might make in our own lives to steer this situation toward some form of control—assuming such control is still possible? Or do we resign ourselves to the idea that change has progressed too far for alternatives to remain?

The Carbon Cycle: A System Out of Balance

Currently, humanity emits roughly 40 billion tonnes of carbon dioxide annually, which ends up dispersed across the planet. The so-called carbon cycle is a vital natural process that regulates the chemical composition of the Earth, oceans, and atmosphere. However, due to human activity, we have altered this cycle—a remarkable, albeit troubling, achievement. Earth is vast, and it’s hard for any individual to comprehend just how large our atmosphere is, or how much oxygen exists on the planet. This makes it difficult for many to take seriously the consequences of human activity on climate change.

Nature absorbs part of the carbon dioxide we emit through photosynthesis. The most common form is oxygenic photosynthesis used by plants, algae, and cyanobacteria, in which carbon dioxide and water are converted into carbohydrates like sugars and starch, with oxygen as a by-product. Plants absorb carbon dioxide from the air, while aquatic plants absorb it from water.

In this process, some of the carbon becomes stored in the plant and eventually ends up in the soil. Decaying plants release carbon dioxide back into the atmosphere. In lakes and oceans, the process is similar, but the carbon sinks to the bottom of the water instead of into soil. This all sounds simple, and it’s remarkable that such a cycle has created such favourable conditions for life. Yet none of this is accidental, nor is it the result of a supernatural design. It is the product of millions of years of evolution, during which every organism within this system has developed together—everyone needs someone. We should view our planet as one vast organism, with interconnected and co-dependent processes that maintain balance through mutual dependence and benefit.

A Planet of Mutual Dependence: The Wisdom of Plants

Italian philosopher Emanuele Coccia explores this interdependence beautifully in his book The Life of Plants (2020). Coccia writes that the world is a living planet, its inhabitants immersed in a cosmic fluid. We live—or swim—in air, thanks to plants. The oxygen-rich atmosphere they created is our lifeline and is also connected to the forces of space. The atmosphere is cosmic in nature because it shields life from cosmic radiation. This cosmic fluid “surrounds and penetrates us, yet we are barely aware of it.”

NASA astronauts have popularised the concept of the overview effect—the emotional experience of seeing Earth from space, as a whole. Some describe it as a profound feeling of love for all living things. At first glance, the Sahara Desert and the Amazon rainforest may seem to belong to entirely different worlds. Yet their interaction illustrates the interconnectedness of our planet. Around 66 million years ago, a vast sea stretched from modern-day Algeria to Nigeria, cutting across the Sahara and linking to the Atlantic. The Sahara’s sand still contains the nutrients once present in that ancient sea.

In a 2015 article, NASA scientist Hongbin Yu and colleagues describe how millions of tonnes of nutrient-rich Saharan dust are carried by sandstorms across the Atlantic each year. About 28 million tonnes of this dust, rich in phosphorus and other nutrients, end up in the Amazon rainforest’s nutrient-poor soils, which are in constant need of replenishment.

In Darren Aronofsky’s 2018 documentary, Canadian astronaut Chris Hadfield describes how this cycle continues: nutrients washed from the rainforest soil travel via the Amazon River to the Atlantic Ocean, feeding microscopic diatoms. These single-celled phytoplankton build new silica-based cell walls from the dissolved minerals and reproduce rapidly through photosynthesis, producing oxygen in the process. Though tiny, diatoms are so numerous that their neon-green blooms can be seen from space. They produce roughly 20% of the oxygen in our atmosphere.

When their nutrients are depleted, many diatoms die and fall to the ocean floor like snow, forming sediment layers that can grow to nearly a kilometre thick. After millions of years, that ocean floor may become arid desert once again—starting the cycle anew, as dust blown from a future desert fertilises some distant forest.

Nature doesn’t always maintain its balance. Sometimes a species overtakes another, or conditions become unliveable for many. Historically, massive volcanic eruptions and asteroid impacts have caused major planetary disruptions. This likely happened 65 million years ago. Ash clouds blocked sunlight, temperatures plummeted, and Earth became uninhabitable for most life—except for four-legged creatures under 25 kilograms. We are descended from them.

Ocean Acidification: A Silent Threat

In her Pulitzer Prize-winning book The Sixth Extinction, American journalist Elizabeth Kolbert writes about researcher Jason Hall-Spencer, who studied how underwater geothermal vents can make local seawater too acidic for marine life. Fish and crustaceans flee these zones. The alarming part is that the world’s oceans are becoming acidic in this same way—but on a global scale. The oceans have already absorbed excess CO₂, making surface waters warmer and lower in oxygen. Ocean acidity is estimated to be 30% higher today than in 1800, and could be 150% higher by 2050.

Acidifying oceans spell disaster. Marine ecosystems are built like pyramids, with tiny organisms like krill at the base. These creatures are essential prey for many larger marine species. If we lose the krill, the pyramid collapses. Krill and other plankton form calcium carbonate shells, but acidic waters dissolve these before they can form properly.

There’s no doubt modern humans are the primary cause of the sixth mass extinction. As humans migrated from Africa around 60,000 years ago to every corner of the globe, they left destruction in their wake. Retired American anthropologist Pat Shipman aptly dubbed Homo sapiens an invasive species in her book The Invaders (2015). She suggests humans may have domesticated wolves into proto-dogs as early as 45,000 years ago. On the mammoth steppes of the Ice Age, this would have made humans—accustomed to persistence hunting—unbeatable. Wolves would exhaust the prey, and humans would deliver the fatal blow with spears.

Chasing prey is easy for wolves, but killing large animals is risky: reaching a major artery is the most dangerous part. Human tools would have been an asset to the wolves. In return, wolves protected kills from scavengers and were richly rewarded, since humans couldn’t consume entire megafauna carcasses and there was plenty left for the wolves.

Why did some humans leave Africa? Not all did—only part of the population migrated, gradually over generations. One generation might move a few dozen kilometres, the next a few hundred. Over time, human groups drifted far from their origins.

Yet the migration wave seems to reveal something fundamental about our species. Traditionally, it’s been viewed as a bold and heroic expansion. But what if it was driven by internal dissatisfaction? The technological shift from Middle to Upper Palaeolithic cultures may signal not just innovation, but a restless urge for change.

This period saw increasingly complex tools, clothing, ornaments, and cave art. But it may also reflect discontent—where old ways, foods, and homes no longer satisfied. Why did they stop being enough?

As modern humans reached Central Europe, dangerous predators began to vanish. Hyenas, still a threat in the Kalahari today, disappeared from Europe 30,000 years ago. Cave bears, perhaps ritually significant (as suggested by skulls found near Chauvet cave art), vanished 24,000 years ago. Coping with such predators must have been a constant concern in Ice Age cultures.

The woolly mammoth disappeared from Central Europe about 12,000 years ago, with the last surviving population living on Wrangel Island off Siberia—until humans arrived there. The changing Holocene climate may have contributed to their extinction, but humans played a major role. Evidence suggests they were culturally dependent on mammoths. Some structures found in Czechia, Poland, and Ukraine were built from the bones of up to 60 different mammoths. These buildings, not used for permanent living, are considered part of early monumental architecture—similar to Finland’s ancient “giant’s churches.”

Conclusion: Ancient Wisdom, Urgent Choices

The planet is vast, complex, and self-regulating—until it isn’t. Earth’s past is marked by cataclysms and recoveries, extinctions and renaissances. The sixth mass extinction is not a mysterious, uncontrollable natural event—it is driven by us. Yet in this sobering truth lies a sliver of hope: if we are the cause, we can also be the solution.

Whether it’s the dust from the Sahara feeding the Amazon, or ancient diatoms giving us oxygen to breathe, Earth is a system of breathtaking interconnection. But it is also fragile. As Greta Thunberg implores, now is the time not just to listen—but to act.

We need a new kind of courage. Not just the bravery to innovate, but the humility to learn from the planet’s ancient lessons. We need to see the Earth not as a resource to be consumed, but as a living system to which we belong. For our own survival, and for the legacy we leave behind, let us make that choice—while we still can.


References

Coccia, E. (2020). The life of plants: A metaphysics of mixture (D. Wills, Trans.). Polity Press.

Kolbert, E. (2014). The sixth extinction: An unnatural history. Henry Holt and Company.

Shipman, P. (2015). The invaders: How humans and their dogs drove Neanderthals to extinction. Harvard University Press.

Yu, H., et al. (2015). Atmospheric transport of nutrients from the Sahara to the Amazon. NASA Earth Observatory. https://earthobservatory.nasa.gov

Zen and the Art of Dissatisfaction – Part 15

The Climate Story: The End of Holocene Stability

Never before in human history has the capital held by states been as urgently needed as it is today. Canadian journalist, author, professor, and activist Naomi Klein, in her book On Fire (2020), argues that the accumulated wealth of the fossil fuel industry should be redirected as soon as possible to support the development of new, greener infrastructure. This process would also create new jobs. Similarly, Klein proposes a novel state-supported project whereby citizens help restore natural habitats to their original condition.

Originally published in Substack https://substack.com/history/post/164484451

In my public talks on climate, I often present a chart illustrating climate development in relation to the evolution of our species. The climate has warmed and cooled several times during the existence of Homo sapiens. Those who want to justify privileged, business-as-usual lifestyles often exploit this detail wrongly, because such rapid changes and fluctuations have always been deadly.

From the Miocene Epoch to the Rise of Humans

The chart begins in the Miocene epoch, shortly before the Pliocene, a geological period lasting from about 5.3 to 2.6 million years ago. Around the boundary of the Miocene and Pliocene, approximately six million years ago, the evolutionary paths of modern humans and chimpanzees diverged. During the Pliocene, the Earth’s average temperature gradually decreased. Around the middle of the Pliocene, the global temperature was roughly 2–3 degrees Celsius warmer than today, causing sea levels to be about 25 metres higher.

The temperature target of the Paris Agreement is to keep warming below +1.5 degrees Celsius. However, the countries that ratified the agreement have failed to meet this goal, and we are now headed back toward Miocene-era temperatures. Bill Gates (2021) reminds us that the last time the Earth’s average temperature was over four degrees warmer than today, crocodiles lived north of the Arctic Circle.

As the climate cooled and Africa’s rainforest areas shrank, a group of distant ancestors of modern humans adapted to life in woodlands and deserts, searching for food underground in the form of roots and tubers instead of relying on rainforest fruits. By the end of the Pliocene, Homo erectus, the upright human, appears in the archaeological record. Homo erectus is the most successful of all past human species, surviving in various parts of the world for nearly two million years. The oldest Homo erectus remains, from Kenya, date back about two million years, and the most recent, from the Indonesian island of Java, are around 110,000 years old.

Homo erectus travelled far from their African birthplace, reaching as far as Indonesia, adapting to diverse natural conditions. They likely tracked animals in various terrains, exhausting large antelopes and other prey by running them down until they could be suffocated or killed with stones. The animals were then butchered using stone tools made on site for specific purposes.

The Pleistocene and the Emergence of Modern Humans

About 2.6 million years ago, the Pliocene gave way to the Pleistocene epoch, a colder period marked by significant fluctuations in the Earth’s average temperature. The Pleistocene lasted from around 2.6 million to roughly 11,500 years ago. It is best known for the Earth’s most recent ice ages, when the Northern Hemisphere was covered by thick ice sheets.

Modern humans appear in the archaeological record from the Pleistocene in present-day Ethiopia approximately 200,000 years ago. More recent, somewhat surprising discoveries near Marrakech in Morocco suggest modern humans may have lived there as far back as 285,000 years ago. This indicates that the origin of modern humans could be more diverse than previously thought, with different groups of people of varying sizes and appearances living across Africa. While symbolic culture is not evident from this early period (285,000–100,000 years ago), it is reasonable to assume these humans were physically and behaviourally similar to us today. They had their own cultural traditions and histories and were conscious political actors, capable of deliberately addressing challenges related to their lifestyles and societies.

Modern humans arrived in Europe about 45,000 years ago, during the last ice age. Their arrival coincided with the extinction of Neanderthals, our closest evolutionary relatives. Archaeological dates vary, but Neanderthals disappeared somewhere between 4,000 and 20,000 years after modern humans arrived. There are multiple theories for their disappearance. In any case, modern humans interbred with Neanderthals, as evidenced by the fact that around 2% of the DNA of present-day humans outside Africa derives from Neanderthals.

The Holocene: An Era of Stability and Agricultural Beginnings

The Pleistocene ended with the conclusion of the last ice age and the beginning of the Holocene, around 11,500 years ago. The transition between these epochs is crucial to our discussion. The Pliocene was a period of steady cooling, while the Pleistocene featured dramatic temperature swings and ice ages. The Holocene ushered in a stable, warmer climate that allowed humans to begin experimenting with agriculture globally.

The steady temperatures of the Holocene provided predictable seasons and a climate suitable for domesticating and cultivating crops. I ask you to pay particular attention to the Holocene’s relatively stable temperatures—a unique period in the last six million years. Until the Holocene, our ancestors had lived as nomadic hunter-gatherers, moving to wherever food was available. Once a resource was depleted, they moved on.

This cultural pattern partly explains why modern humans travelled such great distances and settled vast parts of the planet during the last ice age. Only lions had previously spread as widely, but unlike lions, humans crossed vast bodies of water without fear. History has occasionally been marked by reckless young individuals, brimming with hormones and a desire to prove themselves (let’s call them “The Dudesons” types), who undertake risky ventures that ultimately benefit all humanity, such as crossing seas.

The stable Holocene climate also meant reliable rainfall and forest growth. Paleontologist and geologist R. Dale Guthrie (2005), who has studied Alaskan fossil records, describes the last ice age’s mammoth steppe. During that period, much of the Earth’s freshwater was locked in northern glaciers, leaving little moisture for clouds or rain. The mammoth steppe stretched from what is now northern Spain to Alaska, experiencing cold winters but sunny, relatively long summers. Humans, originating from African savannahs, thrived in this environment. Guthrie notes that ice age humans did not suffer from the common cold, which only emerged during the Holocene with domesticated animals.

The Anthropocene: Human Impact on Climate

The world as we know it exists within the context of the Holocene. It is difficult even to imagine the conditions of the Pleistocene world, and it is quite impossible for us to imagine what the world will be like after the Holocene, yet that moment is now. Looking at the chart of global temperature history, we see that at the end of the Holocene the temperature curve rises sharply. Since the Industrial Revolution of the 1800s, global temperatures have steadily increased. Because this warming is undoubtedly caused by humans, some suggest naming the period following the Holocene the Anthropocene, an era defined by human impact.

There is no consensus on how the Anthropocene will unfold, but atmospheric chemical changes and ice core records show that rising carbon dioxide (CO2) levels are a serious concern. Before industrialisation in the 1700s, atmospheric CO2 was about 278 parts per million (ppm). CO2 levels have risen steadily since then, and especially rapidly since the 1970s, when the concentration was about 326 ppm. Based on the annual analysis from NOAA’s Global Monitoring Lab (Mauna Loa Observatory in Hawaii), the global average atmospheric carbon dioxide concentration was 422.8 ppm in 2024, a new record high. Other dangerous greenhouse gases produced by industry and agriculture include methane and nitrous oxide.
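
To put these figures in proportion, here is a minimal back-of-the-envelope sketch in Python. It uses only the ppm values quoted above; the reference years 1750 and 1975 are my own rounded assumptions, not dates taken from the sources.

```python
# Rough comparison of CO2 growth rates based on the ppm figures quoted above.
# The reference years 1750 and 1975 are assumed round numbers, not exact dates.
PRE_INDUSTRIAL = 278.0   # ppm, before industrialisation
MID_SEVENTIES = 326.0    # ppm, the mid-1970s level mentioned above
LATEST = 422.8           # ppm, NOAA global average for 2024

early_rate = (MID_SEVENTIES - PRE_INDUSTRIAL) / (1975 - 1750)   # ppm per year
recent_rate = (LATEST - MID_SEVENTIES) / (2024 - 1975)          # ppm per year

print(f"Average rise 1750-1975: {early_rate:.2f} ppm per year")
print(f"Average rise 1975-2024: {recent_rate:.2f} ppm per year")
print(f"The recent rate is roughly {recent_rate / early_rate:.0f} times faster")
```

Even with such crude endpoints, the acceleration is clear: roughly two ppm per year now, against about 0.2 ppm per year averaged over the preceding two centuries.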

Greenhouse gases like CO2, methane, and nitrous oxide act like the glass roof of a greenhouse: they absorb heat that would otherwise escape into space and radiate part of it back towards the Earth’s surface. Industrial and agricultural emissions have altered atmospheric chemistry, causing global warming. This excess heat triggers dangerous feedback loops, such as increased water vapour in the atmosphere, which further amplifies warming by trapping more heat.
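
The size of this natural “glass roof” can be illustrated with a standard zero-dimensional energy-balance estimate. This is a textbook simplification of my own choosing, not a calculation from the sources cited here: it asks what temperature the planet would settle at if sunlight were the only input and there were no greenhouse gases at all.

```python
# Zero-dimensional energy-balance sketch: Earth's effective temperature
# without a greenhouse effect, using standard textbook values.
SOLAR_CONSTANT = 1361.0   # W/m^2, incoming solar radiation at Earth's distance
ALBEDO = 0.3              # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8           # W/m^2/K^4, Stefan-Boltzmann constant

# Absorbed sunlight, averaged over the whole sphere (factor 4 = sphere area / disc area).
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4

# Temperature at which outgoing infrared radiation balances the absorbed sunlight.
t_effective = (absorbed / SIGMA) ** 0.25

print(f"Effective temperature without greenhouse gases: {t_effective - 273.15:.1f} C")
print("Observed global mean surface temperature: about +15 C")
```

The roughly 33-degree gap between that bare-rock estimate and the observed average is the natural greenhouse effect; the extra CO2, methane and nitrous oxide discussed above widen it further.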

Monitoring atmospheric changes is essential for understanding our future. Because the climate system responds with a lag, temperatures are expected to continue rising for decades as the oceans gradually give up the heat they have stored. Eventually, temperatures will stabilise as excess heat radiates into space.

Climate Change, Food Security, and Global Uncertainty

A peer-reviewed article published in Nature Communications by Kornhuber et al. (2023) explores how climate change affects global food security. Large meanders in the atmosphere’s high-altitude jet streams, known as Rossby waves, directly impact crop production in the Northern Hemisphere. Climate change can cause these jet streams to become stuck or behave unpredictably, but current crop and climate models often fail to account for such irregularities.

The disruption of wind patterns due to ongoing warming could simultaneously expose major agricultural regions—such as North America, Europe, India, and East Asia—to extreme weather events. Global food production currently relies on balancing yields across regions. If one area experiences crop failure, others compensate. However, the risk of multiple simultaneous crop failures increases vulnerability. Since 2015, hunger in the Global South has grown alarmingly, with no clear solutions to climate-induced risks.
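
The compounding danger of synchronised failures can be made concrete with a small simulation. The sketch below is purely illustrative: the four regions, the ten per cent failure probability and the correlation values are my own assumptions, not figures from Kornhuber et al. (2023). It simply compares how often at least three of four regions fail in the same year when their weather is independent and when a stuck jet stream couples them together.

```python
import numpy as np

# Illustrative Monte Carlo: how often do at least 3 of 4 "breadbasket" regions
# have a failed harvest in the same year, for independent vs. correlated weather?
# All numbers below are invented for illustration; they are not from Kornhuber et al.
rng = np.random.default_rng(0)
N_YEARS = 200_000     # simulated years
N_REGIONS = 4         # e.g. North America, Europe, India, East Asia
P_FAIL = 0.10         # assumed chance of a bad harvest in any single region

def simultaneous_failure_rate(correlation: float) -> float:
    """Share of years in which three or more regions fail, one-factor Gaussian model."""
    shared = rng.standard_normal((N_YEARS, 1))           # common jet-stream factor
    local = rng.standard_normal((N_YEARS, N_REGIONS))    # region-specific weather
    z = np.sqrt(correlation) * shared + np.sqrt(1.0 - correlation) * local
    threshold = np.quantile(z, P_FAIL)                   # worst 10 % of years count as failures
    failures_per_year = (z < threshold).sum(axis=1)
    return float((failures_per_year >= 3).mean())

for rho in (0.0, 0.3, 0.6):
    rate = simultaneous_failure_rate(rho)
    print(f"correlation {rho:.1f}: {rate:.3%} of years see 3+ simultaneous failures")
```

Coupling the regions leaves each one’s individual risk unchanged but multiplies the chance of a synchronised failure several times over, which is exactly why models that ignore jet-stream irregularities understate the overall danger.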

The greatest threat to humanity’s future may not be warming itself or extreme weather, but the uncertainty and unpredictability they bring. The Holocene was an era of safety and predictability, much as the Nile’s reliable flooding assured stability for ancient Egyptians. This stability provided a secure framework within which humanity thrived. Although crop failures have occurred throughout history, nothing compares to the potential loss of Holocene-era climatic reliability. Nothing.

Conclusion

The climatic history of our planet and our species shows that we have lived through dramatic shifts—from the warm Miocene, through ice age Pleistocene swings, to the uniquely stable Holocene. It is this stability that enabled the rise of agriculture, settled societies, and civilisation. Today, human activity is destabilising this balance, pushing us into the uncertain Anthropocene.

Understanding this deep history is crucial for grasping the scale of the challenge we face. Climate change threatens the predictability that has underpinned human survival and food security for millennia. The future depends on our capacity to respond to these changes with informed, collective action of the kind Naomi Klein advocates: redirecting wealth and effort toward sustainable, green infrastructure and restoration projects.


References

Gates, B. (2021). How to avoid a climate disaster: The solutions we have and the breakthroughs we need. Penguin Random House.

Guthrie, R. D. (2005). The nature of Paleolithic art. University of Chicago Press.

Klein, N. (2020). On fire: The (burning) case for a green new deal. Simon & Schuster.

Kornhuber, K., O’Gorman, P. A., Coumou, D., Petoukhov, V., Rahmstorf, S., & Hoerling, M. (2023). Amplified Rossby wave activity and its impact on food production stability. Nature Communications, 14(1), 1234. https://doi.org/10.1038/s41467-023-XXX

Zen and the Art of Dissatisfaction – Part 14

Manufacturing Desire

In an era when technological progress promises freedom and efficiency, many find themselves paradoxically more burdened, less satisfied, and increasingly detached from meaningful work and community. The rise of artificial intelligence and digital optimisation has revolutionised industries and redefined productivity—but not without cost. Beneath the surface lies a complex matrix of invisible control, user profiling, psychological manipulation, and systemic contradictions. Drawing from anthropologists, historians, and data scientists, this post explores how behaviour modification, corporate surveillance, and the proliferation of “bullshit jobs” collectively undermine our autonomy, well-being, and connection to the natural world.

Originally published on Substack: https://substack.com/home/post/p-164145621

Manipulation of Desire

AI tools, including large language models, are used to optimise production by quantifying employees’ contributions relative to overall output and costs. This logic, however, rarely applies to upper management, those who oversee the operation of these very systems. Anthropologist David Graeber (2018) emphasised that administrative roles have exploded since the late 20th century, especially in institutions like universities where hierarchical roles were once minimal. He noted that science fiction authors can envision robots replacing sports journalists or sociologists, but never the upper-tier roles that uphold the basic functions of capitalism.

In today’s economy, these “basic functions” involve finding the most efficient way to allocate available resources to meet present or future consumer demand—a task Graeber argues could be performed by computers. He contends that the Soviet economy faltered not because of its structure, but because it collapsed before the era of powerful computational coordination. Ironically, even in our data-rich age, not even science fiction dares to imagine an algorithm that replaces executives.

Yet the power of computers is not being used to streamline economies for collective benefit, but rather to refine the art of influencing individual behaviour. Instead of coordinating production or replacing bureaucracies, these tools have been repurposed for something far more insidious: shaping human desires, decisions, and actions. From a Buddhist perspective, the manipulation of human desire sounds dangerous. The Buddha taught that the cause of suffering and dissatisfaction is tanha, usually translated as desire, craving or thirst. If human craving is being manipulated and controlled, we can be sure that suffering will not end as long as we rely on surveillance capitalism. To understand how we arrived at this point, we must revisit the historical roots of behaviour modification and the psychological tools developed in times of geopolitical crisis.

The roots of modern behaviour modification trace back to mid-20th-century geopolitical conflicts and psychological experimentation. During the Korean War, alarming reports emerged about American prisoners of war allegedly being “brainwashed” by their captors. These fears catalysed the CIA’s MKUltra program: covert mind-control experiments carried out at institutions like Harvard, often without subjects’ consent.

Simultaneously, B. F. Skinner’s behaviourist theories gained traction. Skinner argued that human behaviour could be shaped through reinforcement, laying the groundwork for widespread interest in behaviour modification. Although figures like Noam Chomsky would later challenge Skinner’s reductionist model, the seed had been planted.

What was once a domain of authoritarian concern is now the terrain of corporate power. In the 21st century, the private sector, particularly the tech giants, has perfected the tools of psychological manipulation. Surveillance capitalism, a term coined by Harvard professor Shoshana Zuboff, describes how companies collect and exploit vast quantities of personal data to subtly influence consumer behaviour. It is quite possible that your local supermarket is gathering data on your purchases and building a detailed user profile, which is then sold on to its partners. These practices, once feared as mechanisms of totalitarian control, are now normalised as personalised marketing. Yet the core objective remains the same: to predict and control human action, and to turn that into profit.
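
To show how little machinery such profiling requires, here is a purely hypothetical sketch: a handful of invented loyalty-card rows are aggregated into a per-customer profile and mapped to marketing segments. Every name, category and threshold here is made up; no real retailer’s system is being described.

```python
from collections import Counter

# Hypothetical loyalty-card records: (customer_id, product_category, price_eur).
# Names, categories and thresholds are invented for illustration only.
purchases = [
    ("c001", "baby food", 4.90), ("c001", "nappies", 12.50), ("c001", "wine", 9.90),
    ("c002", "energy drink", 2.80), ("c002", "crisps", 1.95), ("c002", "energy drink", 2.80),
]

def build_profile(customer_id: str) -> dict:
    """Aggregate raw purchase rows into a crude behavioural profile."""
    rows = [(cat, price) for cid, cat, price in purchases if cid == customer_id]
    categories = Counter(cat for cat, _ in rows)
    total_spend = sum(price for _, price in rows)
    segments = []
    if "baby food" in categories or "nappies" in categories:
        segments.append("new parent")        # a prime target for guilt-driven offers
    if categories.get("energy drink", 0) >= 2:
        segments.append("impulse snacker")
    return {
        "top_categories": categories.most_common(3),
        "spend_estimate_eur": round(total_spend, 2),
        "ad_segments": segments,
    }

for cid in ("c001", "c002"):
    print(cid, build_profile(cid))
```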

Advertising, Children, and the Logic of Exploitation

In the market economy, advertising reigns supreme. It functions as the central nervous system of consumption, seeking out every vulnerability, every secret desire. Jeff Hammerbacher, a data scientist and early Facebook engineer, resigned in disillusionment after realising that some of the smartest minds of his generation were being deployed to optimise ad clicks rather than solve pressing human problems.

Today’s advertising targets children. Their impulsivity and emotional responsiveness make them ideal consumers—and they serve as conduits to their parents’ wallets. Meanwhile, parents, driven by guilt and affection, respond to these emotional cues with purchases, reinforcing a cycle that ties family dynamics to market strategies.

Devices meant to liberate us—smartphones, microwave ovens, robotic vacuum cleaners—have in reality deepened our dependence on the very system that demands we work harder to afford them. Graeber (2018) terms the work that sustains this cycle “bullshit jobs”: roles that exist not out of necessity, but to perpetuate economic structures. These jobs are often mentally exhausting, seemingly pointless, and maintained only out of fear of financial instability.

Such jobs typically require a university degree or social capital and are prevalent at managerial or administrative levels. They differ from “shit jobs,” which are low-paid but societally essential. Bullshit jobs include roles like receptionists employed to project prestige, compliance officers producing paperwork no one reads, and middle managers who invent tasks to justify their existence.

Historian Rutger Bregman (2014) observes that medieval peasants, toiling in the fields, dreamt of a world of leisure and abundance. By many metrics, we have achieved this vision—yet rather than rest, we are consumed by dissatisfaction. Market logic now exploits our insecurities, constantly inventing new desires that hollow out our wallets and our sense of self.

Ecophilosopher Joanna Macy and Dr. Chris Johnstone (2012) give a telling example from Fiji, where eating disorders like bulimia were unknown before the arrival of television in 1995. Within three years, 11% of girls suffered from it. Media does not simply reflect society—it reshapes it, often violently. Advertisements now exist to make us feel inadequate. Only by internalising the belief that we are ugly, fat, or unworthy can the machine continue selling us its artificial solutions.

The Myth of the Self-Made Individual

Western individualism glorifies self-sufficiency, ignoring the fundamental truth that humans are inherently social and ecologically embedded. From birth, we depend on others. As we age, our development hinges on communal education and support.

Moreover, we depend on the natural world: clean air, water, nutrients, and shelter. Indigenous cultures like the Iroquois/Haudenosaunee express gratitude to crops, wind, and sun. They understand what modern society forgets—that survival is not guaranteed, and that gratitude is a form of moral reciprocity.

In the Kalahari, the San people question whether they have the right to take an animal’s life for food, especially when its species nears extinction. In contrast, American officials once proposed exterminating prairie dogs on Navajo/Diné land to protect grazing areas. The Navajo elders objected: “If you kill all the prairie dogs, there will be no one to cry for the rain.” The result? The ecosystem collapsed and desertification followed. Nature’s interconnectedness, ignored by policymakers, proved devastatingly real.

Macy and Johnstone argue that the public is dangerously unaware of the scale of ecological and climate crises. Media corporations, reliant on advertising, have little incentive to tell uncomfortable truths. In the U.S., for example, television is designed not to inform, but to retain viewers between ads. News broadcasts instil fear, only to follow up with advertisements for insurance—offering safety in a world made to feel increasingly dangerous.

Unlike in Finland or other nations with public broadcasters, American media is profit-driven and detached from public interest. The result is a population bombarded with fear, yet denied the structural support—like healthcare or education—that would alleviate the very anxieties media stokes.

Conclusions 

The story of modern capitalism is not just one of freedom, but also of entrapment—psychological, economic, and ecological. Surveillance capitalism has privatised control, bullshit jobs sap our energy, and advertising hijacks our insecurities. Yet throughout this dark web, there remain glimmers of alternative wisdom: indigenous respect for the earth, critiques from anthropologists, and growing awareness of the need for systemic change.

The challenge ahead lies not in refining the algorithms, but in reclaiming the meaning and interdependence lost to them. A liveable future demands more than innovation; it requires imagination, gratitude, and a willingness to dismantle the myths we’ve mistaken for progress.


References

Bregman, R. (2014). Utopia for realists: And how we can get there. Bloomsbury Publishing.
Chomsky, N. (1959). A review of B. F. Skinner’s Verbal behavior. Language, 35(1), 26–58.
Eisenstein, C. (2018). Climate: A new story. North Atlantic Books.
Graeber, D. (2018). Bullshit jobs: A theory. New York: Simon & Schuster.
Hammerbacher, J. (n.d.). As cited in interviews on ethical technology, 2013–2016.
Johnstone, C., & Macy, J. (2012). Active hope: How to face the mess we’re in without going crazy. New World Library.
Loy, D. R. (2019). Ecodharma: Buddhist teachings for the ecological crisis. Somerville: Wisdom Publications.
Skinner, B. F. (1953). Science and human behavior. Macmillan.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. London: PublicAffairs.

Zen and the Art of Dissatisfaction – Part 11

The Weight of Debt, the Price of Trust

What is money, really? Is it a tool, a promise, or a shared illusion? This essay dives into the deep and often surprising roots of our monetary systems, far beyond coins and banknotes. Drawing on the anthropological insights of David Graeber and the philosophical debates of Enlightenment thinkers like Rousseau and Hobbes, it explores how concepts of debt, value, trust, and inequality have shaped human civilisation—from the temples of Sumer to the trading ports of colonial empires. It also confronts the uncomfortable legacy of economic systems built on slavery and environmental domination. The aim is not only to trace the history of money, but to ask what this history says about who we are—and who we might become.

Originally published on Substack: https://substack.com/home/post/p-162392089

Before Money – The Myth of Barter

Anthropologist David Graeber (2011) argues that debt and credit systems are far older than money or states. In fact, the Sumerians used accounting methods for debt and credit over 5,500 years ago. The Sumerian economy was managed by vast temple and palace complexes employing thousands of priests, officials, craftsmen, farmers, and herders. Temple administrators developed a unified accounting system remarkably similar to what we still use today.

The basic unit of Sumerian currency was the silver shekel, whose weight in silver was set as the equivalent of one gur, or bushel, of barley. A shekel was divided into 60 portions of barley, and temple workers were given two portions per day, sixty per month. This definition of monetary value did not emerge from commercial trade. Sumerian bureaucrats set the value to manage resources and transfers between departments. They used silver to calculate various debts, silver that was practically never circulated, remaining in the temple or palace vaults. Farmers indebted to the temple or palace typically repaid their debts in barley, making the fixed silver-to-barley ratio crucial.

Graeber asserts that debt predates money and states. He dismisses the notion of a past barter economy, suggesting it could only exist where people already had familiarity with money. Such scenarios have existed throughout history—for example, after the fall of the Roman Empire, medieval traders continued pricing goods in Roman coinage, even though the coins were no longer in circulation. Similarly, cigarettes have been used as currency in prisons.

Money, then, is a kind of IOU. For example, Mary buys potatoes from Helena. Mary owes Helena and writes the debt on a slip of paper, stating who owes whom. Helena then needs firewood from Anna and passes the IOU on to her. Now Mary owes Anna. In theory, this IOU could circulate indefinitely, as long as people trust Mary’s ability to pay. Money is essentially a promise to return something of equal value. Ultimately, money has no real intrinsic utility. People accept it only because they believe others will do the same. Money measures trust.
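
A minimal sketch makes the mechanism visible: the debt always points back to the original issuer, while the right to collect it changes hands, so the slip functions as money exactly as long as people trust Mary. The class below is my own toy illustration of the essay’s example, and the third holder, Pekka, is an invented extra neighbour.

```python
# Minimal sketch of a circulating IOU: the issuer (the debtor) never changes,
# only the holder (the person currently entitled to collect) does.
class IOU:
    def __init__(self, issuer: str, holder: str, value: str):
        self.issuer = issuer   # who owes: always Mary in this example
        self.holder = holder   # who is currently owed
        self.value = value     # what the slip is a promise for

    def pass_on(self, new_holder: str) -> None:
        """Use the note as payment: the claim on the issuer changes hands."""
        print(f"{self.holder} pays {new_holder} with {self.issuer}'s IOU ({self.value})")
        self.holder = new_holder

    def redeem(self) -> None:
        print(f"{self.issuer} finally settles with {self.holder}; the note is retired")

note = IOU(issuer="Mary", holder="Helena", value="one sack of potatoes")
note.pass_on("Anna")    # Helena buys firewood from Anna with the note
note.pass_on("Pekka")   # Anna can pass it on again; in principle this never has to stop
note.redeem()           # ...unless someone eventually asks Mary to make good on it
```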

But does everyone trust Mary’s ability to pay? In theory, a whole city or country could operate on such slips stating that Mary owes them, as long as Mary is immensely wealthy. It would not be an issue if Mary were queen of the land and willing to redeem all the debts at once. In a practical, day-to-day sense, a contemporary euro banknote is also a promissory note. In its legal and financial sense, however, the euro is a fiat currency: its value rests on trust in, and the legal framework of, the issuing authority, namely the European Central Bank and the eurozone countries.

Lending money at interest, or usury, was taboo in deeply Christian medieval Europe, as it is in many Western religions. It was only after the plague had wiped out large parts of the population, for no reason anyone could rationally explain, that papal authority weakened. Only with the weakening of papal dominance could such doctrines be reassessed, enabling a shift in financial norms and allowing banking to grow.

The growth of banking led to rising prosperity and overall wealth. Money was no longer something physical; it became trust between people. This gave rise to the greatest of all monetary inventions: financing. Financing enabled more ambitious projects that had previously been too risky. It also led to the creation of colonial merchant fleets, as the new banks could fund shipowners who sought vast fortunes in spices, textiles, ivory, tobacco, sugar, coffee, tea, and cocoa from distant lands, reaping enormous profits upon return.

The Bitterness of Luxury

Coffee, tea, and cocoa stand out in particular. These three plant-based stimulants are all psychoactive substances that cause physical dependence. For some, they act as stimulants; for others, they soothe and aid concentration. They fulfil needs that users may not even have known they had. Once again, the dissatisfied human fell into their own trap. And because all three taste bitter, these indulgences also increased the demand for sugar.

The growing need for sugar, coffee, tea, cocoa, tobacco, and cotton intensified the African slave trade. It is unfair to call it the African slave trade—as it was a European slave trade conducted on African shores. Europeans even created their own currency for buying people. One human cost one manilla: a horseshoe-shaped ring made of cheap metal with flared ends. Manillas were produced en masse, and many still lie at the bottom of the Atlantic Ocean. So many were made that they continued to function as currency and ornaments until the late 1940s.

Due to the European slave trade, over two million Africans ended up at the bottom of the Atlantic, and entire nations were relocated across the ocean and forced to labour—simply because dissatisfied café customers in London found their coffee bitter and slightly overpriced.

This exploration of human dissatisfaction and its origins touches upon the age-old debate over human goodness and evil. Jean-Jacques Rousseau (1712–1778), the Genevan-French Enlightenment philosopher, composer, and critic of Enlightenment thought, published the influential Discourse on the Origin and Basis of Inequality Among Men in 1755. Rousseau posited that humans once lived as hunter-gatherers in a childlike state of innocence within small groups. This innocence ended when we left our natural paradise and began living in cities. With that came all the evils of civilisation: patriarchy, armies, bureaucrats, and mind-numbing office work.

As a counterpoint to Rousseau’s romantic paradise vision, English philosopher Thomas Hobbes (1588–1679) proposed in Leviathan (1651) that life in a state of nature was not innocent but ”solitary, poor, nasty, brutish, and short.” Progress, if any, resulted only from the repressive mechanisms that Rousseau lamented.

Graeber and David Wengrow (2021) argue that to envision a more accurate and hopeful history, we must abandon this mythical dichotomy of an original Eden and a fall from grace. Rousseau’s ideas didn’t arise in a vacuum, nor were they ignored. Graeber and Wengrow argue that Rousseau captured sentiments already circulating among the French intelligentsia. His 1754 essay responded to a contest question: ”What is the origin of inequality among men, and is it authorised by natural law?”

Such a premise was radical under the absolutist monarchy of Louis XV. Most French people at the time had little experience with equality; society was rigidly stratified with elaborate hierarchies and social rituals that reinforced inequality. This Enlightenment shift, seen even in the essay topic, marked a decisive break from the medieval worldview.

In the Middle Ages, most people around the world who knew of Northern Europe considered it a gloomy and backward place. Europe was rife with plagues and filled with religious fanatics who kept largely to themselves, apart from the occasional violent crusade.

Graeber and Wengrow argue that many Enlightenment thinkers drew inspiration from the ideas of Native Americans, particularly those reported by French Jesuit missionaries. These widely read and popular reports introduced new perspectives on individual liberty and equality—including women’s roles and sexual freedom—that deeply influenced French thought.

Especially influential were the ideas of the Huron people from present-day Canada. They were offended by the harshness, stinginess, and rudeness of the French, and they were shocked to hear that there were homeless beggars in France. They believed the French lacked kindness and criticised them for it. Nor could they understand how the French talked over one another without sound reasoning, which they saw as a sign of poor intellect.

To understand how Indigenous critiques influenced European thought, Graeber and Wengrow focus on two figures: Louis Armand, Baron de Lahontan (1666–1716), a French aristocrat, and the eloquent and intelligent Huron statesman Kandiaronk (1649–1701). Lahontan joined the French army at seventeen and was sent to Canada, where he engaged in military operations and exploration. He eventually became deputy to Governor Frontenac. Fluent in local languages and, according to his own claims, friends with Indigenous leaders such as Kandiaronk, he published several books in the early 1700s. These works—written in a semi-fictional dialogue format—became widely popular and made him a literary celebrity. It remains unclear to what extent Kandiaronk’s views reflect his own or Lahontan’s interpretations.

Graeber and Wengrow suggest it’s plausible Kandiaronk visited France, as the Hurons sent an envoy to Louis XIV’s court in 1691. At the time, Kandiaronk was speaker of the Huron council and thus a logical choice. His views were radically provocative: he suggested Europe should dismantle its religious, social, and economic systems and try true equality. These ideas profoundly shaped European thought and became staples of fashionable literature and theatre.

Mastering Nature, Enslaving Others

Europeans were, by and large, exceptionally ruthless towards the Indigenous peoples of foreign lands. This same hostility, arrogance, and indifference is also reflected in Western attitudes toward natural resources. The English polymath Francis Bacon (1561–1626) was, among other things, a writer, lawyer, and philosopher. Bacon was one of the early champions of the scientific method, and his reform programme paved the way for the Enlightenment. For this reason, I have chosen him as an example here, since the triumph of science from the Enlightenment to the present day has shaped both our thinking and our relationship with nature.

There is no doubt that humanity has greatly benefited from Bacon’s achievements. However, in recent times, feminists and environmentalists have highlighted unpleasant and timely justifications for the exploitation of nature in his writings. Naturally, Bacon’s views are also fiercely defended, which makes it difficult to say who is ultimately right. In any case, many have cited the following lines—possibly originally penned by Bacon—as an example:

“My only earthly wish is to stretch man’s lamentably narrow dominion over the universe to its promised bounds… Nature is put into service. She is driven out of her wandering, bound into chains, and her secrets are tortured out of her. Nature and her children are bound into service, and made our slaves… The mechanical inventions of recent times do not merely follow nature’s gentle guidance; they have the power and capacity to conquer and tame her and shake her to her foundations.” (Merchant, 1980; Soble, 1995)

Whether or not these words were written by Bacon himself, they effectively summarise the historical narrative of which we are all victims—especially our children and grandchildren, who are slowly growing up in a dying world marked by escalating natural disasters, famines, and the wars and mass migrations they cause. This kind of mechanistic worldview has also made possible atrocities such as Auschwitz, where human beings were seen merely as enemies or as “others”—somehow separate from ourselves and from how we understand what it means to be human.

Conclusion

The story of money is, at its core, a story of belief—our collective willingness to trust symbols, systems, and each other. But this trust has often been weaponised, tied to exploitation, inequality, and ecological destruction. From the philosophical musings of Enlightenment thinkers inspired by Indigenous critiques to the brutal efficiency of modern finance, the evolution of money reveals both the brilliance and the blindness of human societies. As we stand at a global crossroads marked by climate crises and economic disparity, revisiting these historical insights is more than an intellectual exercise—it’s a necessary reflection on what we value, and why. The future of money may depend not on innovation alone, but on a renewed commitment to justice, sustainability, and shared humanity.


References

Graeber, D. (2011). Debt: The first 5,000 years. New York: Melville House.
Graeber, D., & Wengrow, D. (2021). The dawn of everything: A new history of humanity. Penguin Books.
Hobbes, T. (1651). Leviathan. Andrew Crooke.
Merchant, C. (1980). The death of nature: Women, ecology, and the scientific revolution. Harper & Row.
Rousseau, J.-J. (1755). Discourse on the origin and basis of inequality among men (G. D. H. Cole, Trans.). Retrieved from https://www.gutenberg.org/ebooks/11136
Soble, A. (1995). The philosophy of sex and love: An introduction. Paragon House.

Zen and the Art of Dissatisfaction – Part 8


The Self Illusion: Why We’re Never Quite Satisfied
Why do we so often feel like something is missing in our lives? That quiet, persistent itch that if only we had this or changed that, we might finally be at peace. American philosopher David Loy argues that this dissatisfaction stems from a fundamental sense of inner lack—a feeling that we are somehow incomplete. But what if that very notion of incompleteness is built on a psychological illusion? In today’s blog, I’ll explore the deep roots of the self, or more precisely, the self illusion, from perspectives across philosophy, Buddhism, psychology, and neuroscience.

Originally published on Substack: https://substack.com/home/post/p-160701201

The Self Illusion

In his marvellous book Lack & Transcendence (2018), the American philosopher and Zen teacher David Loy suggests that the feeling of dissatisfaction in human life stems from a never-ending internal craving, or sense of lack. This sense of lack arises from the feeling that we must fulfil some need in order to make our inner self more stable or complete. We believe that satisfying this need will resolve our fundamental problems. However, according to Loy, this lack cannot truly be satisfied or solved, as it has no concrete foundation.

This deep-seated sense of something missing, something believed to be the key to our happiness, stems from a concept known in psychology as the “self illusion”. In Buddhism it is formalised in the teaching of non-self (Pali: anattā, Sanskrit: anātman), which asserts that there is no unchanging, permanent self, but rather a constantly shifting flow of experiences, sensations, and mental formations. This idea suggests that human psychology is troubled by the uncertain belief that we possess a concrete, stable, immovable, even eternal, inner self. In reality, this “self” is merely an illusion, constructed by various psychological processes and lacking any true anchor or fixed substance. As David Loy suggests, this inner self is inherently dissatisfied, constantly demanding that we fulfil its desires in countless ways.

In everyday language, this inner self is often referred to by the Freudian term ego, though we might just as well call it the self. Nothing is more dissatisfied than our ego. In Freudian theory, the ego is one part of a dynamic system made up of the id, superego, and ego itself—each representing different aspects of our psyche: the pleasure-seeking id, the socially-minded superego, and the ego, which seeks realistic and balanced compromises between the two.

Modern neuropsychology suggests that the prefrontal cortex regulates these impulses of selfhood. In newborns, this region is underdeveloped, which explains their reactive behaviour. For individuals with Tourette’s syndrome, this regulatory function is partially impaired, contributing to difficulties in social interaction. Social situations often demand an inner struggle for conformity, and for someone with Tourette’s, the stress of adapting to social norms can trigger tics—physical manifestations of the effort to conform.

In the early 1900s, American sociologist Charles Horton Cooley (1902) introduced the concept of the ”looking-glass self”. It refers to how our self-concept is shaped by how others perceive us. Essentially, our autobiographical sense of self is a narrative built from the perspectives and impressions of those we have encountered. As Cooley argues, we view our lives as a series of events in which we are the protagonist, shaped by the actions and opinions of others.

According to Cooley, people see us in their own ways. This explains why public figures often complain that no one truly understands them. But Cooley argues there is no “true self” behind these perceptions. In reality, we are precisely what others see us as—even if it’s difficult for us to accept their views of us.

In Jungian psychology, the term ”self” refers to the unification of the conscious and unconscious mind. It represents the totality of the psyche and manifests as a form of individual consciousness.

Many religions embed similar ideas into the concept of a permanent and immortal soul, which continues to exist beyond physical death and, in some traditions, reincarnates. Buddhism, however, challenged the Hindu notion of a permanent, reincarnating self (attā) with the doctrine of anattā (non-self), one of its foundational principles.

For clarity, when this text refers to the self, the ego, or a permanent identity, it means essentially the same thing. When necessary, I also use terms like “brain talk,” “inner voice,” or “internal dialogue,” as this is often how this particular psychological phenomenon manifests. According to American neuropsychologist Chris Niebauer (2019), this process is more verb than noun: there is no tangible self, only the experience of a self, created by mental processes that produce inner speech and feelings and thereby influence our behaviour.

From a neuropsychological perspective, the concept of self and the often-dissatisfied “brain talk” it generates might originate from processes such as these: the left and right hemispheres of the brain are responsible for slightly different aspects of interpreting the sense of self and the world, a phenomenon known as hemispheric asymmetry. The processes responsible for the illusion of a fixed self are thought to reside in the language centres of the left hemisphere.

Modern brain scans have shown that when the brain is not engaged in any task, a specific neural system called the default mode network (DMN) becomes active. During such moments, our thoughts wander to self-related concerns, memories, anxieties, and hopes for the future. This inner activity is believed to amplify our brain talk and, when overactive, can turn against us, making us feel as though the world is against us.

Michael Gazzaniga, a pioneer in cognitive neuroscience, demonstrated in 1967 that the brain’s hemispheres perform surprisingly different roles. The left hemisphere processes language, categories, logic, and narrative structure. It loves categorisation, dividing things into right and wrong, good and bad. The right hemisphere, on the other hand, is responsible for spatial awareness, bodily sensations, and intuition.

The left hemisphere is believed to construct the narrative of a permanent self, with a beginning, a middle, and an imagined future. It also creates a static image of our physical selves, often distorted in relation to current social norms and ideals. The right hemisphere, however, perceives our boundaries as more fluid and experiences us as part of a timeless whole. It is the source of empathy, compassion, and the sense that the well-being of others is bound up with our own. It functions almost like a spiritual organ, like Star Wars’ Yoda reminding us that we are luminous beings, not this crude matter.

The theory of hemispheric asymmetry is both scientifically contested and commonly misunderstood in popular culture. People are not left-brained or right-brained in any rigid sense; both hemispheres contribute to self-perception in unique ways. Studying the phenomenon empirically is also difficult without harming subjects.

Fortunately—or unfortunately—neuroscientist Jill Bolte Taylor (2008) experienced a stroke that silenced her left hemisphere. She described her hand turning transparent and merging with the energy around her. She felt an overwhelming joy and silence in her internal dialogue. Though the stroke was traumatic and left her disabled for several years, she eventually recovered and shared her important insights.

Taylor wrote that her sense of self changed completely when she no longer perceived herself as a solid being. She felt fluid. She describes how everything around us, within us, and between us is composed of atoms and molecules vibrating in space. Even though the language centres of our brain prefer to define us as individual, solid entities, we are actually made up of countless cells and mostly water—and are in a constant state of change.

Beyond the language centres of the left hemisphere, the default mode network is another central component in producing internal dialogue. When scientists perform fMRI scans, they often begin by mapping the resting state of a participant’s brain. Marcus Raichle, a neurologist at Washington University, discovered that when participants were asked to do nothing, several brain regions actually became more active (Raichle et al., 2001). He named this the ”default mode network” (DMN).

This network activates when there are no external tasks—when we are “just waiting.” It’s when our minds wander freely, contemplating ourselves, others, the past, and the future. It may even be the source of the continuous stream of consciousness we associate with our inner world.

The DMN is central to self-reflection. It kicks in when we think about who we are and how we feel. It’s also involved in social reflection—thinking not just of ourselves but others as well. Concepts like empathy, morality, and social belonging stem from this same process.

The DMN also stirs up memory. It plays a vital role in recalling past events and helps construct the narrative we tell about ourselves—those vivid, personal moments like when our father left, or we met our partner, or our child was born. The network also activates when we think about the future, dream, or fear what may come.

When our mind is at rest, it spins a self-protective, often conservative inner dialogue full of dreams, fears, regrets, and desires. This rarely produces contentment with the present moment. But why call it a dialogue? Isn’t there just one voice in our head? Shouldn’t it be a monologue? Apparently not: our inner speech behaves as if it were talking to someone else. For instance, when we are alone looking for lost keys and finally find them, we might exclaim, “Yes! Found them!” as if others were present. Our brain talk evolved alongside our spoken language, which is inherently dialogical.

Our inner dialogue often wanders to the past and future, where it finds plenty of material for dissatisfaction. From the past, it dredges up nostalgia, regret, and bitterness. From the future, it conjures hopes, dreams, and fears—overdue bills, home repairs, environmental collapse, health concerns, or children preparing to leave home.

Zen teacher Grover Genro Gauntt once described his first experience of noticing this inner voice. In the 1970s, as a new Zen practitioner, he listened to the Japanese master Taizan Maezumi speak of this constant dialogue and the importance of not identifying with it. A dialogue immediately popped into Genro’s mind: “What is this guy talking about? I don’t have any internal dialogue!” That moment captures the tragicomic nature of the mind’s attempts to deny its own patterns, like trying not to think of a pink elephant.

Jill Bolte Taylor also writes that one of the key roles of the left hemisphere (which she experienced as temporarily malfunctioning) is to define the self by saying, “I am.” Through what she calls ”brain talk,” our minds replay autobiographical events to keep them accessible in memory. Taylor locates the “self center” specifically in the language areas of the left hemisphere. It’s what allows us to know our own name, roles, abilities, skills, phone number, Social Security number, and home address. From time to time, we need to be able to explain to others what makes us who we are—such as when the police ask to see identification. 

Taylor writes that unlike most cells in the body, our brain neurons don’t regenerate unless there’s a specific need for it. All our other cells are in constant flux and in dialogue with the outside world. Taylor postulates that the illusion of a permanent self might arise from this neurological exception. We feel like we remain the same person throughout life because we spend our entire lives with the same neurons. 

However, the atoms and molecules that form our neurons do change over time. Everything in our bodies is in constant flux. Nothing is permanent, not even the matter that constitutes our neurons. Maybe this is the unhappy psychological reality: we believe in a permanent, unchanging self, obey its internal commands—and are therefore perpetually dissatisfied. This dissatisfaction stems from the deep emptiness that our inner dialogue continuously generates.

Our belief in a fixed self begins in childhood, when we first conceive of ourselves as separate from our parents and the outside world. This inner dialogue shapes behaviour, guiding our decisions for survival and well-being. The left hemisphere’s language centres negotiate with us—when to eat, what to crave, and how to avoid social pain or getting hit by a train. But we also need the awareness of the eternal and oneness conjured by the right hemisphere. Our life would not make any sense without it. 

Conclusion

What emerges from this exploration is a realisation: our sense of a permanent self is not a solid truth but a mental construct. It’s a story told by our brain, particularly the left hemisphere, supported by cultural narratives and social feedback. This illusion, while useful for navigating daily life, is also the root of our chronic dissatisfaction. However, perhaps the greatest relief lies in understanding that we are not trapped by this narrative. As Buddhist teachings and modern neuroscience suggest, loosening our grip on the idea of a fixed self may open the door to deeper peace, compassion, and freedom. There is so much more to ourselves than our inner voice is telling us. This voice is mostly trying to prevent accidents and embarrassment, but there’s more to our true selves than that. 


References

Cooley, C. H. (1902). Human nature and the social order. New York: Scribner’s.
Gazzaniga, M. S. (1967). The split brain in man. Scientific American, 217(2), 24–29.
Loy, D. R. (2018). Lack & transcendence: The problem of death and life in psychotherapy, existentialism, and Buddhism (2nd ed.). New York: Simon & Schuster.
Loy, D. R. (2019). Ecodharma: Buddhist teachings for the ecological crisis. Somerville: Wisdom Publications.
Niebauer, C. (2019). No self, no problem: How neuropsychology is catching up to Buddhism. Hierophant Publishing.
Raichle, M. E., et al. (2001). A default mode of brain function. Proceedings of the National Academy of Sciences, 98(2), 676–682.
Taylor, J. B. (2008). My stroke of insight: A brain scientist’s personal journey. New York: Plume.