Zen and the Art of Dissatisfaction – Part 26

Unrelenting Battle for AI Supremacy

In today’s fast-evolving digital landscape, the titanic technology corporations are locked in a merciless struggle for AI dominance. Their competitive advantage is fuelled by the ability to access vast quantities of data. Yet this race holds profound implications for privacy, ethics, and the overlooked human labour that quietly powers it.

Originally published on Substack: https://substack.com/home/post/p-172413535

Large technology conglomerates are engaged in a cutthroat contest for AI supremacy, shaped in large part by access to data. Chinese rivals may be narrowing the gap, since in China the free flow of data reigns supreme. In Western nations, by contrast, personal data remains, at least for now, the property of the individual; its use requires the individual’s awareness and consent. Nevertheless, people freely share their data—opinions, consumption habits, images, location—when signing up for platforms or interacting online. The more freely companies can exploit this user data, the faster their AI systems learn. Machine learning is often applauded because it promises better services and more accurately targeted advertisements.

Hidden Human Labour

Yet behind these learning systems are human workers—micro‑workers—who prepare, label, and moderate the data from which AI algorithms learn. Often subcontracted by the tech giants, they are paid meagrely, exposed to humanity’s darkest content, and bound to secrecy about what they see. In principle, anyone can post almost anything on social media. Platforms may block or “lock” content that violates their policies—only for the original poster to appeal, rerouting the content to micro‑workers for review.

These shadow workers toil from home, performing tasks such as flagging forbidden sexual content and violence, or categorising products for companies like Walmart and Amazon. For example, they may have to judge whether two similar items are the same product, or retag products into different categories. Despite the rise of advanced AI, these micro‑tasks remain foundational—and are paid by the cent.

The relentless gathering of data is crucial for deep‑learning AI systems. In the United States, the tension between user privacy and corporate surveillance remains unresolved—a tension brought to a head by the Facebook–Cambridge Analytica scandal. In autumn 2021, Frances Haugen, a data scientist and whistleblower, exposed how Facebook prioritised maximising user time on the platform over public safety.

Meanwhile, the roots of persuasive design trace back to Stanford University’s Persuasive Technology Lab (now the Behavior Design Lab), founded by B. J. Fogg, where concepts to hook and retain users—regardless of the consequences—were born. At face value, social media seems benign—connecting people, facilitating ideas, promoting second‑hand sales. Yet beneath the surface lie algorithms designed to keep users engaged, often by feeding content tailored to their interests. The more platforms learn, the more they serve users exactly what they want—drawing them deeper into addictive cycles.

A widely cited PNAS study found that algorithms—based on just a few likes—could know users better than even their closest friends. About 90 likes enabled better personality predictions than an average friend, while 270 likes made the AI more accurate than a spouse.

The Cambridge Analytica scandal revealed how personal data can be weaponised to influence political outcomes in events like Brexit and the 2016 US Presidential Election. All that was needed was to identify and target individuals with undecided votes based on their location and psychological profiles.

Frances Haugen’s whistleblowing further confirmed that Facebook exacerbates political hostility and amplifies authoritarian messaging, especially in countries such as Brazil, Hungary, the Philippines, India, Sri Lanka, Myanmar, and the USA.

As critics note, these platforms were never designed to serve as central political channels—they were optimised to maximise engagement and advertising revenue. One research group led by Laura Edelson found that misinformation posts received six times more likes than posts from trusted sources like CNN or the World Health Organization.

In theory, platforms could offer news feeds filled exclusively with content that made users feel confident, loved, safe—but such feeds don’t hold attention long enough for profit. Instead, platforms profit more from cultivating anxiety, insecurity, and outrage. The algorithm knows us so deeply that we often don’t even realise when we’re entirely consumed by our feelings, fighting unseen ideological battles. Hence, ad-based revenue models prove extremely harmful. Providers could instead charge a few euros a month—but the real drive is harvesting user data for long‑term strategic advantage.

Conclusion

The race for AI supremacy is not just a competition of algorithms—it is a battle over data, attention, design, and ethics. The tech giants are playing on our sense of dissatisfaction, and we have few psychological tools to resist. While they vie for the edge, a hidden workforce labours in obscurity, and persuasive systems steer human behaviour toward addiction and division. Awareness, regulation, and ethical models—potentially subscription‑based or artist‑friendly—are needed to reshape the future of AI for human benefit.


References

Wikipedia. (n.d.). B. J. Fogg. Retrieved from https://en.wikipedia.org/wiki/B._J._Fogg
Wikipedia. (n.d.). Stanford Behavior Design Lab. Retrieved from https://en.wikipedia.org/wiki/Stanford_Behavior_Design_Lab
Wikipedia. (n.d.). Captology. Retrieved from https://en.wikipedia.org/wiki/Captology
Wikipedia. (n.d.). Frances Haugen. Retrieved from https://en.wikipedia.org/wiki/Frances_Haugen
Wikipedia. (n.d.). 2021 Facebook leak. Retrieved from https://en.wikipedia.org/wiki/2021_Facebook_leak

Zen and the Art of Dissatisfaction – Part 25

Exponential Futures

Throughout history, humanity has navigated the interplay between population growth, technological progress, and ethical responsibility. As automation, artificial intelligence, and biotechnology advance at exponential rates, philosophers, scientists, and entrepreneurs have raised profound questions: Are we heading towards liberation from biological limits, or into a new form of dependency on machines? Can we satisfy our dissatisfaction with more intelligent machines and unlimited growth? What would be enough? The following post explores these dilemmas, drawing from historical parables, the logic of Moore’s law, transhumanism, and the latest breakthroughs in artificial intelligence.

“The current explosive growth in population has frighteningly coincided with the development of technology, which, due to automation, makes large parts of the population ‘superfluous’, even as labour. Because of nuclear energy, this double threat can be tackled with means beside which Hitler’s gas chambers look like the malicious child’s play of an evil brat.”
– Hannah Arendt

Originally published on Substack: https://substack.com/inbox/post/171630771

Our technological development has been tied to Moore’s law. Named after Gordon Moore, co-founder of Intel, one of the world’s largest semiconductor manufacturers, the law states that the number of transistors on a microchip doubles roughly every 18–24 months. As a result, chips become more powerful while their price falls. Moore’s 1965 prediction has remained remarkably accurate, as innovation has kept the process alive long past the point when the laws of physics should have slowed it down. This type of growth is called exponential: development starts slowly, then suddenly accelerates at an unexpected pace.

A Parable of Exponential Growth

The Islamic scholar Ibn Khallikan described the logic of exponential growth in a tale from 1256. According to the story, chess originated in India during the 6th century. Its inventor travelled to Pataliputra and presented the game to the emperor. Impressed, the ruler offered him any reward. The inventor requested rice, calculated using the chessboard: one grain on the first square, two on the second, four on the third, doubling with each square.

Such exponential growth seems modest at first, but by the 64th square it yields more than 18 quintillion grains of rice, or about 1.4 trillion tonnes. By comparison, the world currently produces about 772 million tonnes of wheat annually. The inventor’s demand thus exceeded yearly wheat production by a factor of roughly 1,800. The crucial lesson lies not in the quantity but in the speed at which exponential processes accelerate.
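To make the parable concrete, here is a minimal Python sketch of the arithmetic above. The grain mass is an assumption (not from the source), chosen at roughly 75 mg so that the total matches the ~1.4 trillion tonne figure:

```python
# Rice on the chessboard: one grain on square 1, doubling on every square after.
total_grains = 2 ** 64 - 1                 # geometric series: 1 + 2 + ... + 2^63

GRAIN_MASS_KG = 75e-6                      # assumed ~75 mg per grain (illustrative)
total_tonnes = total_grains * GRAIN_MASS_KG / 1000

print(f"{total_grains:,} grains")                          # 18,446,744,073,709,551,615
print(f"about {total_tonnes / 1e12:.1f} trillion tonnes")  # about 1.4
```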

The central question remains: at what stage of the chessboard are we today in terms of microchip development? According to Moore’s law, we are heading towards an increasingly technological future. Futurists such as Ray Kurzweil, a director of engineering at Google, believe that transhumanism is the only viable path for humanity to collaborate with AI. Kurzweil predicts that artificial intelligence will surpass human mental capabilities by 2045.

Transhumanism posits that the limits of the human biological body are a matter of choice. For transhumanists, ageing should be voluntary, and cognitive capacities should lie within individual control. Kurzweil forecasts that by 2035 nanobots will be implanted in our brains to connect with neurons, upgrading both mental and physical abilities. The aim is to prevent humans from becoming inferior to machines, preserving self-determination.

The Intelligence of Machines – Real or Illusion?

Yet artificial intelligence has not, until recently, been very intelligent. Algorithms can process data and make deductions, but image recognition, for example, has long struggled with tasks a child could solve instantly. A child, even after seeing a school bus once, can intuitively identify it; an algorithm, trained on millions of images, may still fail under slightly altered conditions. This gap between human intuition and machine logic underscores the challenge.

Nevertheless, AI is evolving rapidly. Vast financial resources drive competition over the future of intelligence and power.

The South African-born Elon Musk, co-founder of Neuralink, has already demonstrated an implant that allows a monkey named Pager to play video games using only thought. Musk suggests such implants could treat depression, Alzheimer’s disease, and paralysis, and even restore sight to the blind.

Though such ideas may sound outlandish, history suggests that visionary predictions often materialise sooner than expected.

The Warnings of Tristan Harris

Tristan Harris, who leads the non-profit Center for Humane Technology, has been at the heart of Silicon Valley’s AI story, from Apple internships to Instagram development and work at Google. In 2023, alongside Aza Raskin, he warned of AI’s dangers. Their presentation demonstrated AI systems capable of cloning a human voice within seconds, or reconstructing mental images from fMRI brain scans.

AI models have begun to exhibit unexpected abilities. A system trained in English suddenly understands Persian. ChatGPT, launched by OpenAI, has independently learned advanced chemistry, though it was never explicitly trained in the subject. Algorithms now self-improve, rewriting code to double its speed, creating new training data, and exhibiting exponential capability growth. Experts foresee improvements at double-exponential rates, represented on a graph as a near-vertical line surging upwards.

Conclusion

The trajectory of human civilisation now intertwines with exponential technological growth. From the rice-on-the-chessboard parable to Moore’s law and the visions of Kurzweil, Musk, and Harris, the central issue remains: will humanity adapt, or will machines redefine what it means to be human? The pace of change is no longer linear, and as history shows, exponential processes accelerate suddenly, leaving little time to adjust.


References

Arendt, H. (1963). Eichmann in Jerusalem: A report on the banality of evil. Viking Press.
Harris, T., & Raskin, A. (2023). The AI dilemma [Presentation]. Center for Humane Technology.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking.
Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics, 38(8).

Zen and the Art of Dissatisfaction – Part 24

How Algorithms and Automation Redefine Work and Society

The concept of work in Western societies has undergone dramatic transformations, yet in some ways it has remained surprisingly static. Work, and the money earned from it, also remains one of the leading causes of dissatisfaction: there is usually too much work, and the compensation never seems quite enough. While the Industrial Revolution replaced manual labour with machinery, the age of Artificial Intelligence (AI) threatens to disrupt not only blue-collar jobs but also highly skilled professions. This post traces the historical shifts in the nature of work, from community-driven agricultural labour to the rise of mass production, the algorithmic revolution, and the looming spectre of general artificial intelligence. Along the way, it examines the ethical, economic, and social implications of automation, surveillance, and machine decision-making — raising critical questions about the place of humans in a world increasingly run by machines.

Originally published on Substack: https://substack.com/home/post/p-170864875

The Western concept of work has hardly changed in essence: half the population still shuffles papers, projecting an image of busyness. The Industrial Revolution transformed the value of individual human skill, rendering many artisanal professions obsolete. A handcrafted product became far more expensive compared to its mass-produced equivalent. This shift also eroded the communal nature of work. Rural villagers once gathered for annual harvest festivities, finding strength in togetherness. The advent of threshing machines, tractors, and milking machines eliminated the need for such collective efforts.

In his wonderful and still very important film Modern Times (1936), Charlie Chaplin depicts industrial society’s alienating coexistence: even when workers are physically together, they are often each other’s competitors. In a factory, everyone knows that anyone can be replaced — if not by another worker, then by a machine.

In the early 1940s, nearly 40% of the American workforce was employed in manufacturing; today, production facilities employ only about 8%. While agricultural machinery displaced many farmworkers, those machines still require transportation, repairs, and eventual replacement — generating jobs in other, less specialised sectors.

The Algorithmic Disruption

Artificial intelligence algorithms have already displaced workers in multiple industries, but the most significant disruption is still to come. Previously, jobs were lost in sectors requiring minimal training and were easily passed on to other workers. AI will increasingly target professions demanding long academic training — such as lawyers and doctors. Algorithms can assess legal precedents for future court cases more efficiently than humans, although such capabilities raise profound ethical issues.

One famous Israeli study suggested that judges imposed harsher sentences before lunch than after (Lee, 2018). Although later challenged — since case order was pre-arranged by severity — it remains widely cited to argue for AI’s supposed superiority in legal decision-making.

Few domains reveal human irrationality as starkly as traffic. People make poor decisions when tired, angry, intoxicated, or distracted while driving. In 2016, road traffic accidents claimed 1.35 million lives worldwide. In Finland in 2017, 238 people died and 409 were seriously injured in traffic; there were 4,432 accidents involving personal injury.

The hope of the AI industry is that self-driving cars will vastly improve road safety. However, fully autonomous vehicles remain distant, partly because they require a stable and predictable environment — something rare in the real world. Like all AI systems, they base predictions on past events, which limits their adaptability in chaotic, unpredictable situations.

Four Waves of Machine-Driven Change

The impact of machines on human work can be viewed as four distinct waves:

  1. The Industrial Revolution — people moved from rural to urban areas for factory jobs.
  2. The Algorithmic Wave — AI has increased efficiency in many industries, with tech giants like Amazon, Apple, Alphabet, Microsoft, Huawei, Meta Platforms, Alibaba, IBM, Tencent, and OpenAI leading the way. In 2020, their combined revenues were just under USD 1.5 trillion; today they are approaching USD 2 trillion, with the leader, Amazon, taking in roughly USD 630 billion per year.
  3. The Sensorimotor Machine Era — autonomous cars, drones, and increasingly automated factories threaten remaining manual jobs.
  4. The Age of Artificial General Intelligence (AGI) — as defined by Nick Bostrom (2015), machines could one day surpass human intelligence entirely.

The rise of AI-driven surveillance evokes George Orwell’s Nineteen Eighty-Four (1949), in which people live under constant watch. Modern citizens voluntarily buy devices that track them, competing for public attention online. Privacy debates date back to the introduction of the Kodak camera in 1888 and intensified in the 1960s with computerised tax records. Today, exponentially growing data threatens individual privacy in unprecedented ways.

AI also inherits human prejudices. Studies show that people with African-American names face discrimination from algorithms, and biased data can lead to unequal treatment based on ethnicity, gender, or geography — reinforcing, rather than eliminating, inequality.

Conclusion

From the threshing machine to the neural network, every technological leap has reshaped the world of work, altering not only what we do but how we define ourselves. The coming decades may bring the final convergence of machine intelligence and autonomy, challenging the very premise of human indispensability. The question is not whether AI will change our lives, but how — and whether we will have the foresight to ensure that these changes serve humanity’s best interests rather than eroding them.


References

Bostrom, N. (2015). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Lee, D. (2018). Do you get fairer sentences after lunch? BBC Future.
Orwell, G. (1999). Nineteen eighty-four. Penguin. (Original work published 1949)

Zen and the Art of Dissatisfaction – Part 23

Bullshit Jobs and Smart Machines

This post explores how many of today’s high‑paid professions depend on collecting and analysing data, and on decisions made on the basis of that process. Drawing on thinkers such as Hannah Arendt, Gerd Gigerenzer, and others, I examine the paradoxes of complex versus simple algorithms, the ethical dilemmas arising from algorithmic decision‑making, and how automation threatens not only unskilled but increasingly highly skilled work. I also situate these issues in historical context, from the Fordist assembly line to modern AI’s reach into law and medicine.

Originally published on Substack: https://substack.com/inbox/post/170023572

Many contemporary highly paid professions rely on data gathering, its analysis, and decisions based on that process. According to Hannah Arendt (2017 [original 1958]), such a threat already existed in the 1950s when she wrote:

“The explosive population growth of today has coincided frighteningly with technological progress that makes vast segments of the population unnecessary—indeed superfluous as a workforce—due to automation.”

In the words of David Ferrucci, the leader of Watson’s Jeopardy! team, the next phase in AI’s development will evaluate data and causality in parallel. The way data is currently used will change significantly when algorithms can construct data‑based hypotheses, theories and mental models answering the question “why?”

The paradox of complexity: simple versus black‑box algorithms

Paradoxically, one of the biggest problems with complex algorithms such as Watson and Google Flu Trends is their very complexity. Gerd Gigerenzer (2022) argues that simple, transparent algorithms often outperform complex ones. He criticises secret machine‑learning “black‑box” systems that search vast proprietary datasets for hidden correlations without understanding the physical or psychological principles of the world. Such systems can make bizarre errors—mistaking correlation for causation, for instance between Swiss chocolate consumption and the number of Nobel Prize winners, between drowning deaths in American pools and the number of films starring Nicolas Cage, or between the age of Miss America and the number of murders committed with steam, hot vapours, and hot objects.

Gigerenzer advocates open, simple algorithms—for example, The Keys to the White House, a 1981 model developed by historian Allan Lichtman and geophysicist Vladimir Keilis‑Borok, which has correctly predicted every US presidential election since 1984, with the single exception of the Al Gore vs. George W. Bush contest.

Examples where individuals have received long prison sentences illustrate how secret, proprietary algorithms such as COMPAS (“Correctional Offender Management Profiling for Alternative Sanctions”) produce risk assessments that can label defendants as high‑risk recidivists. Such black‑box systems, which may determine citizens’ liberty, pose enormous risks to individual freedom. Similar hidden algorithms are used in credit scoring and insurance. Citizens are unknowingly categorised and subject to prejudices that constrain their opportunities in society.

The industrial revolution, automation, and the meaning of work

Even if transformative technologies like Watson may fail to deliver on all the bold promises made by IBM’s marketing, algorithms are steadily doing tasks once carried out by humans. Just as industrial machines displaced heavy manual labour and beasts of burden—especially in agriculture—today’s algorithms are increasingly supplanting cognitive roles.

Since the Great Depression of the 1930s, warnings have circulated that automation would render millions unemployed. British economist John Maynard Keynes (1883–1946) coined the term “technological unemployment” to describe this risk. As David Graeber (2018) notes, automation did indeed trigger mass unemployment. Political forces on both the right and left share a deep belief that paid employment is essential for moral citizenship; they agree that unemployment in wealthy countries should never exceed around 8 percent. Graeber nonetheless argues that the real need for work has collapsed since the Great Depression—and that much contemporary work consists of “bullshit jobs”. If 37–40 percent of jobs are such meaningless roles, then once the work that exists only to support them is counted, 50–60 percent of the population is effectively unemployed.

Karl Marx warned of industrial alienation, in which people are uprooted from their villages and placed into factories or mines to do simple, repetitive work requiring no skill, knowledge, or training—work that makes them easily replaceable. Global corporations have shifted assembly lines and mines to places where workers have few rights, as seen in electronics assembly in Chinese factory towns, garment workshops in Bangladesh, and mineral extraction by enslaved children—all under appalling conditions.

Henry Ford’s Western egalitarian idea of the assembly line—that all workers are equal—became a system in which anybody can be replaced. Charlie Chaplin’s 1936 film Modern Times, inspired by his 1931 meeting with Mahatma Gandhi, highlights our dependence on machines. Gandhi argued that Britain had enslaved Indians through its machines; he pursued non‑violent resistance and self‑sufficiency to show that Indians needed neither British machines nor Britain itself.

From industrial jobs to algorithmic threat to professional work

At its origin in Ford’s factory in 1913, the Model T moved through 45 fixed stations and was completed in 93 minutes, borrowing the idea from Chicago slaughterhouses where carcasses moved past stationary cutters. Though manufacturing employed nearly 40 percent of the American workforce in the 1940s, it employs only about 8 percent today; automation meanwhile created jobs in transport, repair, and administration—though these often required only low-skilled labour.

Today, AI algorithms threaten not only blue‑collar but also white‑collar roles. Professions requiring long training—lawyers and doctors, for example—are now at risk. AI systems can assess precedent for legal cases more accurately than humans. While such systems promise reliability, they also bring profound ethical risks. Human judges are fallible: one Israeli study suggested that judges issue harsher sentences before lunch than after—but that finding has been contested due to case‑severity ordering. Yet such results are still invoked to support AI’s superiority.

Summary

This blog post has considered how our economy is increasingly structured around data collection, analysis, and decision‑making by both complex and simple algorithms. It has explored the paradox that simple, transparent systems can outperform opaque ones, and highlighted the grave risks posed by black‑box algorithms in criminal justice and financial systems. Tracing the legacy from Fordist automation to modern AI, I have outlined the existential threats posed to human work and purpose—not only for low‑skilled labour but for highly skilled professions. The text argues that while automation may deliver productivity, it also risks alienation, injustice, and meaninglessness unless we critically examine the design, application, and social framing of these systems.


References

Arendt, H. (2017). The Human Condition (Original work published 1958). University of Chicago Press.
Ferrucci, D. (n.d.). [Various works on IBM Watson]. IBM Research.
Gigerenzer, G. (2022). How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms. MIT Press.
Graeber, D. (2018). Bullshit Jobs: A Theory. Simon & Schuster.
Keynes, J. M. (1930). Economic Possibilities for our Grandchildren. Macmillan.
Lee, C. J. (2018). The misinterpretation of the Israeli parole study. Nature Human Behaviour, 2(5), 303–304.
Lichtman, A., & Keilis-Borok, V. (1981). The Keys to the White House. Rowman & Littlefield.

Zen and the Art of Dissatisfaction – Part 22

Big Data, Deep Context

In this post, we explore what artificial intelligence (AI) algorithms—or rather, large language models—are, how they learn, and their growing impact on sectors such as medicine, marketing, and digital infrastructure. We look into some prominent real‑world examples from the recent past—IBM’s Watson, Google Flu Trends, and the Hadoop ecosystem—and discuss how human involvement remains vital even as machine learning accelerates. Finally, we reflect on both the promise and the risks of entrusting complex decision‑making to algorithms.

Originally published on Substack: https://substack.com/inbox/post/168617753

Artificial intelligence algorithms function by ingesting training data, which guides their learning. How that data is acquired and labelled marks the key difference between various types of AI algorithms. Once trained, an algorithm performs new tasks using what it has learned as the basis for its future decisions.

AI in Healthcare: From Watson to Robot Doctors

Some algorithms are capable of learning autonomously, continuously integrating new information to adjust and refine their future actions. Others require a programmer’s intervention from time to time. AI algorithms fall into three main categories: supervised learning, unsupervised learning and reinforcement learning. The primary differences between these approaches lie in how they are trained and how they operate.

Algorithms learn to identify patterns in data streams and make assumptions about correct and incorrect choices. They become more effective and accurate the more data they receive—a process known as deep learning, based on artificial neural networks that distinguish between right and wrong answers, enabling them to draw better and faster conclusions. Deep learning is widely used in speech, image and text recognition and processing.
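As a toy illustration of the supervised case described above, the sketch below “trains” on labelled examples and classifies a new point by its nearest neighbour. The data and labels are invented for the example; real deep-learning systems replace the distance lookup with a neural network trained on vastly more data.

```python
# Supervised learning in miniature: labelled training data, then prediction.
training_data = [
    ((1.0, 1.0), "spam"),
    ((0.9, 1.2), "spam"),
    ((5.0, 4.8), "not spam"),
    ((5.2, 5.1), "not spam"),
]

def predict(point):
    """Return the label of the closest training example (1-nearest neighbour)."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda example: sq_distance(example[0], point))
    return label

print(predict((1.1, 0.8)))   # -> spam
```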

Modern AI and machine learning algorithms have empowered practitioners to notice things they might otherwise have missed. Herbert Chase, a professor of clinical medicine at Columbia University in New York, observed that doctors sometimes have to rely on luck to uncover underlying issues in a patient’s symptoms. Chase served as a medical adviser to IBM during the development of Watson, the AI diagnostic assistant.

IBM’s concept involved a doctor inputting, for example, three patient‑described symptoms into Watson; the diagnostic assistant would then suggest a list of possible diagnoses, ranked from most to least likely. Despite the impressive hype surrounding Watson, it proved inadequate at diagnosing actual patients. IBM therefore announced that Watson would be phased out by the end of 2023 and its clients encouraged to transition to its newer services.

One genuine advantage of AI lies in the absence of a dopamine response. A human doctor, operating via biological algorithms, experiences a rush of dopamine when they arrive at what feels like a correct diagnosis—but that diagnosis can be wrong. When doubts arise, the dopamine fades and frustration sets in. In discouragement, the doctor may choose a plausible but uncertain diagnosis and send the patient home.

An AI‑algorithm‑based “robot‑doctor” does not experience dopamine. All of its hypotheses are treated equally. A robot‑doctor would be just as enthused about a novel idea as about its billionth suggestion. It is likely that doctors will initially work alongside AI‑based robot doctors. The human doctor can review AI‑generated possibilities and make their own judgement. But how long will it be before human doctors become obsolete?

AI in Action: Data, Marketing, and Everyday Decisions

Currently, AI algorithms trained on large datasets drive actions and decision‑making across multiple fields. Robot‑doctors assisting human physicians and the self‑driving cars under development by Google or Tesla are two visible examples of near‑future possibilities—assuming the corporate marketing stays honest.

AI continues to evolve. Targeted online marketing, driven by social media data, is an example of a seemingly trivial yet powerful application that contributes to algorithmic improvement. Users may tolerate mismatched adverts on Facebook, but may become upset if a robot‑doctor recommends an incorrect, potentially expensive or risky test. The outcome is all about data—its quantity, how it is evaluated and whether quantity outweighs quality.

According to MIT economists Erik Brynjolfsson and Andrew McAfee (2014), in the 1990s only about one‑fifth of a company’s activities left a digital trace. Today, almost all corporate activities are digitised, and companies have begun to produce reports in language intelligible to algorithms. It is now more important that a company’s operations are understood by AI algorithms than by its human employees.

Nevertheless, vast amounts of data are still analysed using tools built by humans. Facebook is perhaps the most well‑known example of how our personal data is structured, collected, analysed and used to influence and manipulate opinions and behaviour.

Big Data Infrastructure

As Jeff Hammerbacher recounted in a 2015 interview with Steve Lohr, he helped introduce Hadoop at Facebook in 2008 to manage the ever‑growing volume of data. Hadoop, developed by Mike Cafarella and Doug Cutting, is an open‑source variant of Google’s own distributed computing system. Named after the yellow toy elephant belonging to Cutting’s child, Hadoop could initially process two terabits of data in two days; two years later it could perform the same task in mere minutes.
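The data-flow idea Hadoop popularised can be sketched in a few lines: a map step emits key/value pairs, the framework groups them by key, and a reduce step aggregates each group. This single-machine toy (a word count, the classic example) only illustrates the model, not Hadoop’s distributed machinery:

```python
# MapReduce in miniature: map emits (key, value) pairs, reduce aggregates them.
from collections import defaultdict

def map_phase(record):
    for word in record.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    return key, sum(values)

records = ["data is the new oil", "the new data"]
groups = defaultdict(list)
for record in records:                 # in Hadoop, this runs across many machines
    for key, value in map_phase(record):
        groups[key].append(value)

print([reduce_phase(key, values) for key, values in groups.items()])
```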

At Facebook, Hammerbacher and his team constructed Hive, an application running on Hadoop. Now available as Apache Hive, it allows users without a computer science degree to query large processed datasets. During the writing of this post, generative AI applications such as ChatGPT (by OpenAI), Claude (Anthropic), Gemini (Google DeepMind), Mistral & Mixtral (Mistral AI), and LLaMA (Meta) have become available for casual users on ordinary computers.

A widely cited example of public‑benefit predictive data analysis is Google Flu Trends (GFT). Launched in 2008, GFT aimed to predict flu outbreaks faster than official healthcare systems by analysing popular Google search terms related to flu.

GFT successfully detected the H1N1 virus before official bodies in 2009, marking a major achievement. However, in the winter of 2012–2013, media coverage of flu induced a massive spike in related searches, causing GFT’s estimates to be almost twice the real figures. The Science article “The Parable of Google Flu” (Lazer et al., 2014) accused Google of “big‑data hubris”, although it conceded that GFT was never intended as a standalone forecasting tool, but rather as a supplementary warning signal.

Google’s miscalculation lay in its failure to interpret context. Steve Lohr (2015) emphasises that context involves understanding associations—a shift from raw data to meaningful information. IBM’s Watson was touted as capable of exactly this kind of contextual understanding, linking words to their appropriate contexts.

Watson: From TV Champion to Clinical Tool, Sold for Scraps

David Ferrucci, a leading AI researcher at IBM, headed the DeepQA team responsible for Watson. Named after IBM’s founder Thomas J. Watson, the system gained prominence after winning $1 million on Jeopardy! in 2011, defeating champions Brad Rutter and Ken Jennings.

Jennifer Chu‑Carroll, one of Watson’s Jeopardy! coaches, told Steve Lohr (2015) that Watson sometimes made comical errors. When asked “Who was the first female astronaut?”, Watson repeatedly answered “Wonder Woman,” failing to distinguish between fiction and reality.

Ken Jennings reflected that:

“Just as manufacturing jobs were removed in the 20th century by assembly‑line robots, Brad and I were among the first knowledge‑industry workers laid off by the new generation of ‘thinking’ machines… The Jeopardy! contestant profession may be the first Watson‑displaced profession, but I’m sure it won’t be the last.”

In February 2013, IBM announced that Watson’s first commercial application would focus on lung cancer treatment and other medical diagnoses—a real‑world “Dr Watson”—with 90% of oncology nurses reportedly following its recommendations at the time. The venture ultimately collapsed under the weight of unmet expectations and financial losses. In January 2022, IBM quietly sold the core assets of Watson Health to the private equity firm Francisco Partners—reportedly for about $1 billion, a fraction of the estimated $4 billion it had invested—effectively ending its healthcare ambitions. The sale closed Watson’s chapter as a medical innovator; the remaining assets were later rebranded as Merative, a standalone company focusing on data and analytics rather than AI‑powered diagnosis. Slate described the move as “sold for scraps”, characterising the downfall as a cautionary tale of over‑hyped technology failing to deliver on bold promises in complex fields like oncology.

Conclusion

Artificial intelligence algorithms are evolving rapidly, and while they offer significant benefits in fields like medicine, marketing, and data analysis, they also bring challenges. Data is not neutral: volume must be balanced with quality and contextual understanding. Tools such as Watson, Hadoop and Google Flu Trends underscore that human oversight remains indispensable. Ultimately, AI should augment human decision‑making rather than replace it—at least for now.


References

Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.
Ferrucci, D. A., Brown, E., Chu‑Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., … Welty, C. (2010). Building Watson: An overview of the DeepQA project. AI Magazine, 31(3), 59–79.
Kelly, J. E., III, & Hamm, S. (2013). Smart Machines: IBM’s Watson and the Era of Cognitive Computing. Columbia Business School Publishing.
Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google Flu: Traps in big data analysis. Science, 343(6176), 1203–1205.
Lohr, S. (2015). Data-ism. HarperBusiness.

Zen and the Art of Dissatisfaction – Part 21

Data: The Oil of the Digital Age

Data applications rely fundamentally on data—its extraction, collection, storage, interpretation, and monetisation—making them arguably the most significant feature of our contemporary world. Often referred to as “the new oil”, data is, from the capitalist perspective, a valuable resource capable of sustaining economic growth even after conventional natural reserves have been exhausted. This new form of capitalism has been termed surveillance capitalism (Zuboff, 2019).

Originally published on Substack: https://substack.com/@mikkoijas

Data matters more than opinions. For developers of data applications, the key goal is that we browse online, click “like,” follow links, spend time on their platforms, and accept cookies. What we think or do does not matter; what matters is the digital behavioural surplus, a trace we leave and our consent to tracking. That footprint has become immensely valuable—companies are willing to pay for it, and sometimes break laws to get it.

Cookies and Consumer Privacy in Europe

European legislation like the General Data Protection Regulation (GDPR) ensures some personal protection, but we still leave traces even if we refuse to share personal data. Websites are legally obligated to request our cookie consent, making privacy violations more visible. Rejecting cookies and clearing them out later becomes a time-consuming and frustrating chore.

In stark contrast, China’s data laws are much more relaxed, granting companies broader operational freedom. The more data a company gathers, the more fine-tuned its predictive algorithms can be. It is much like environmental regulation: European firms are barred from drilling for oil in protected areas, which reduces profit but protects nature, while Chinese firms, unrestrained by such limits, may harm ecosystems while driving profits. In the data realm, restrictive laws narrow the available datasets. Because Chinese firms can harvest data freely, they may gain a major competitive edge that could help them lead the global AI market.

Data for Good: Jeff Hammerbacher’s Vision

American data scientist Jeff Hammerbacher is one of the field’s most influential figures. As journalist Steve Lohr (2015) reports, Hammerbacher started on Wall Street and later helped build Facebook’s data infrastructure. Today, he directs data collection and interpretation toward improving human lives—an ethos he sees as fundamental to the data industry. According to Hammerbacher, we must understand the current data landscape to predict the future. Practically, this means equipping everything we care about with sensors that collect data. His current focus? Transforming medicine by centring it on data. Data science is one of the most promising fields, where evidence trumps intuition.

Hammerbacher has been particularly interested in mental health and how data can improve psychological wellbeing. His close friend and former classmate, Steven Snyder, tragically died by suicide after struggling with bipolar disorder. This event, combined with Hammerbacher’s own breakdown at age 27—after being diagnosed with bipolar disorder and generalised anxiety disorder—led him to rethink his life. He notes that mental illness is a major cause of workforce dropout and ranks third among causes of early death. Researchers are now collecting neurobiological data from those with mental health conditions. Hammerbacher calls this “one of the most necessary and challenging data problems of our time.”

Pharmaceuticals haven’t solved the issue. Selective serotonin reuptake inhibitors (SSRIs), introduced in the 1980s, have failed to deliver a breakthrough for mood disorders. Mood disorders remain a leading cause of death: roughly 90% of suicides involve untreated or poorly treated mood disorders, and about 50% of Western populations are affected at some point. The greater challenge lies in defining mental wellness—should people simply adapt to lives that feel unfit?

“Bullshit Jobs” and Social Systems

Investigative anthropologist David Graeber (2018) reported that 37–40% of Western workers view their jobs as “bullshit”—work they see as socially pointless. Thus, the problem isn’t merely psychological; our entire social structure normalises employment that values output over wellbeing.

Data should guide smarter decisions. Yet as our world digitises, data accumulates faster than our ability to interpret it. As Steve Lohr (2015) notes, a 20-bed intensive care unit can generate around 160,000 data points per second—a torrent demanding constant vigilance. Still, this data deluge offers positive outcomes: continuous patient monitoring enables proactive, personalised care.

Data-driven forecasting is set to reshape society, concentrating power and wealth. Not long ago, anyone could found a company; now a single corporation could dominate an entire sector with superior data. A case in point is the partnership between McKesson and IBM. In 2009, Kaan Katircioglu (IBM researcher) sought data for predictive modelling. He found it at McKesson—clean datasets recording medication inventory, prices, and logistics. IBM used this to build a predictive model, enabling McKesson to optimise its warehouse near Memphis and improve delivery accuracy from 90% to 99%.

At present, data-mining algorithms behave as clever tools. An algorithm is simply a set of steps for solving problems—think cooking recipes or coffee machine programming. Even novices can produce impressive outcomes by following a good set of instructions.
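In that spirit, here is a deliberately mundane sketch: a hypothetical coffee-machine “program” written as explicit steps, to underline that an algorithm is nothing more mysterious than a precise recipe.

```python
# An algorithm as a recipe: a hypothetical coffee-machine program.
def brew_coffee(cups):
    steps = ["fill tank with water"]
    steps.append(f"grind {7 * cups} g of beans")          # assume ~7 g per cup
    steps.append("heat water to 93 degrees Celsius")
    steps.extend(f"brew cup {n}" for n in range(1, cups + 1))
    steps.append("switch off heater")
    return steps

for step in brew_coffee(2):
    print(step)
```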

Historian Yuval Noah Harari (2015) provocatively suggests we are ourselves algorithms. Unlike machines, our algorithms run through emotions, perceptions, and thoughts—biological processes shaped by evolution, environment, and culture.

Summary

Personal data is the new source of extraction and exploitation—vital for technological progress yet governed by uneven regulations that determine competitive advantage. Pioneers like Jeff Hammerbacher highlight its potential for social good, especially in mental health, while revealing our complex psychology. We collect data abundantly, yet face the challenge of interpreting it effectively. Predictive systems can drive efficiency, but they can also foster monopolies. Ultimately, whether data serves or subsumes us depends on navigating its ethical, legal, and societal implications.


References

Graeber, D. (2018). Bullshit Jobs: A Theory. New York: Simon & Schuster.
Hammerbacher, J. (n.d.). [Interview in Lohr 2015].
Harari, Y. N. (2015). Homo Deus: A History of Tomorrow. New York: Harper.
Lohr, S. (2015). Data-ism: The Revolution Transforming Decision Making, Consumer Behavior, and Almost Everything Else. New York: Harper Business.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

Zen and the Art of Dissatisfaction – Part 20

The Triple Crisis of Civilisation

“At the time I climbed the mountain or crossed the river, I existed, and the time should exist with me. Since I exist, the time should not pass away. […] The ‘three heads and eight arms’ pass as my ‘sometimes’; they seem to be over there, but they are now.”

Dōgen

Introduction

This blog post explores the intertwining of ecology, technology, politics and data collection through the lens of modern civilisation’s crises. It begins with a quote by the Japanese Zen master Dōgen, drawing attention to the temporal nature of human existence. From climate emergency to digital surveillance, from Brexit to barcodes, the post analyses how personal data has become the currency of influence and control.


Originally published on Substack: https://mikkoijas.substack.com/

The climate emergency currently faced by humanity is only one of the pressing concerns regarding the future of civilisation. A large-scale ecological crisis is an even greater problem—one that is also deeply intertwined with social injustice. A third major concern is the rapidly developing situation created by technology, which is also connected to problems related to nature and the environment.

Cracks in the System: Ecology, Injustice, and the Digital Realm

The COVID-19 pandemic revealed new dimensions of human interaction. We are dependent on technology-enabled applications to stay connected to the world through computers and smart devices. At the same time, tech giants are generating immense profits while all of humanity struggles with unprecedented challenges.

Brexit finally came into effect at the start of 2021. On Epiphany of that same year, angry supporters of Donald Trump stormed the United States Capitol. Both Brexit and Trump are children of the AI era. Using algorithms developed by Cambridge Analytica, the Brexit campaign and Trump’s 2016 presidential campaign were able to identify voters who were unsure of their decisions. These individuals were then targeted via social media with marketing and curated news content to influence their opinions. While the data for this manipulation was gathered online, part of the campaigning also happened offline, as campaign offices knew where undecided voters lived and how to sway them.

I have no idea how much I am being manipulated when browsing content online or spending time on social media. As I move from one website to another, cookies are collected, offering me personalised content and tailored ads. Algorithms working behind websites monitor every click and search term, and AI-based systems form their own opinion of who I am.

Surveillance and the New Marketplace

In a 2013 study, a statistical analysis algorithm examined the likes of 58,000 Facebook users. The algorithm inferred users’ sexual orientation with 88% accuracy, skin colour with 95% accuracy, and political orientation with 85% accuracy. It also predicted with 75% accuracy whether a user was a smoker (Kosinski et al., 2013).

Companies like Google and Meta Platforms—which includes Facebook, Instagram, Messenger, Threads, and WhatsApp—compete for users’ attention and time. Their clients are not individuals like me, but advertisers. These companies operate under an advertising-based revenue model. Individuals like me are the users whose attention and time are being competed for.

Facebook and other similar companies that collect data about users’ behaviour will presumably have a competitive edge in future AI markets. Data is the oil of the future. Steve Lohr, long-time technology journalist at the New York Times, wrote in 2015 that data-driven applications will transform our world and behaviour just as telescopes and microscopes changed our way of observing and measuring the universe. The main difference with data applications is that they will affect every possible field of action. Moreover, they will create entirely new fields that have not previously existed.

In computing, the word “data” refers to numbers, letters, or images as such, without specific meaning. A data point is an individual unit of information; generally, any single fact can be considered a data point. In a statistical or analytical context, a data point is derived from a measurement or a study, and in everyday usage “data point” often serves as the singular of “data”.

From Likes to Lives: How Behaviour Becomes Prediction

Decisions and interpretations are created from data points through a variety of processes and methods that turn individual points into information applicable to some purpose. This process is known as data analysis; its aim is to derive interesting, comprehensible, high-level information and models from collected data, from which useful conclusions can be drawn.

A good example of a data point is a Facebook like. A single like is not much in itself and cannot yet support major interpretations. But if enough people like the same item, even a single like begins to mean something significant. The 2016 United States presidential election brought social media data to the forefront. The British data analytics firm Cambridge Analytica gained access to the profile data of millions of Facebook users.

The data analysts hired by Cambridge Analytica could make highly reliable stereotypical conclusions based on users’ online behaviour. For example, men who liked the cosmetics brand MAC were slightly more likely to be homosexual. One of the best indicators of heterosexuality was liking the hip-hop group Wu-Tang Clan. Followers of Lady Gaga were more likely to be extroverted. Each such data point is too weak to provide a reliable prediction. But when there are tens, hundreds or thousands of data points, reliable predictions about users’ thoughts can be made. Based on 270 likes, social media knows as much about a user as their spouse does.
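A back-of-envelope model shows why hundreds of weak signals add up. If each like only nudges the odds slightly, a naive-Bayes-style combination of a few hundred likes still reaches near certainty; the likelihood ratio below is invented purely for illustration.

```python
# Combining weak evidence: each "like" multiplies the odds by a small factor.
import math

def combined_probability(likelihood_ratios, prior=0.5):
    """Naive-Bayes-style log-odds accumulation over independent signals."""
    log_odds = math.log(prior / (1 - prior))
    for ratio in likelihood_ratios:
        log_odds += math.log(ratio)
    return 1 / (1 + math.exp(-log_odds))

weak_signals = [1.1] * 270   # 270 likes, each just 10% more likely given the trait
print(f"{combined_probability(weak_signals):.4f}")   # ~1.0000: near certainty
```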

The collection of data is a problem. Another issue is the indifference of users. A large portion of users claim to be concerned about their privacy, while simultaneously worrying about what others think of them on social platforms that routinely violate their privacy. This contradiction is referred to as the Privacy Paradox. Many people claim to value their privacy, yet are unwilling to pay for alternatives to services like Facebook or Google’s search engine. These platforms operate under an advertising-based revenue model, generating profits by collecting user data to build detailed behavioural profiles. While they do not sell these profiles directly, they monetise them by selling highly targeted access to users through complex ad systems—often to the highest bidder in real-time auctions. This system turns user attention into a commodity, and personal data into a tool of influence.

The Privacy Paradox and the Illusion of Choice

German psychologist Gerd Gigerenzer, who has studied the use of bounded rationality and heuristics in decision-making, writes in his excellent book How to Stay Smart in a Smart World (2022) that targeted ads usually do not even reach consumers, as most people find ads annoying. For example, eBay no longer pays Google for targeted keyword advertising because they found that 99.5% of their customers came to their site outside paid links.

Gigerenzer calculates that Facebook could charge users for its service. Facebook’s ad revenue in 2022 was about €103.04 billion. The platform had approximately 2.95 billion users. So, if each user paid €2.91 per month for using Facebook, their income would match what they currently earn from ads. In fact, they would make significantly more profit because they would no longer need to hire staff to sell ad space, collect user data, or develop new analysis tools for ad targeting.
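The arithmetic is easy to check with the figures given above:

```python
# Gigerenzer's subscription arithmetic, using the figures from the text.
ad_revenue_eur = 103.04e9    # Facebook ad revenue, 2022
users = 2.95e9               # approximate number of users

per_user_per_month = ad_revenue_eur / users / 12
print(f"EUR {per_user_per_month:.2f} per user per month")   # EUR 2.91
```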

According to Gigerenzer’s study, 75% of people would prefer that Meta Platforms’ services remain free, despite privacy violations, targeted ads, and related risks. Of those surveyed, 18% would be willing to pay a maximum of €5 per month, 5% would be willing to pay €6–10, and only 2% would be willing to pay more than €10 per month.

But perhaps the question is not about money in the sense that Facebook would forgo ad targeting in exchange for a subscription fee. Perhaps data is being collected for another reason. Perhaps the primary purpose isn’t targeted advertising. Maybe it is just one step toward something more troubling.

From Barcodes to Control Codes: The Birth of Modern Data

But how did we end up here? Today, data is collected everywhere. A good everyday example of our digital world is the barcode. In 1948, Bernard Silver, a technology student in Philadelphia, overheard a local grocery store manager asking his professors whether they could develop a system that would allow purchases to be scanned automatically at checkout. Silver and his friend Norman Joseph Woodland began developing a visual code based on Morse code that could be read with a light-based scanner. Their research only became standardised as the current barcode system in the early 1970s. Barcodes have enabled a new form of logistics and more efficient distribution of products. Products have become data, whose location, packaging date, expiry date, and many other attributes can be tracked and managed by computers in large volumes.
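To give a flavour of how barcode data stays machine-checkable: in the modern EAN-13 standard, the thirteenth digit is a checksum computed from the first twelve, letting a scanner detect misreads. A minimal sketch (the example number is arbitrary):

```python
# EAN-13 check digit: weight the first 12 digits alternately by 1 and 3.
def ean13_check_digit(digits12):
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

print(ean13_check_digit("400638133393"))   # -> 1, so the full code is 4006381333931
```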

Conclusion

We are living in a certain place in time, as Dōgen described—an existence with a past and a future. Today, that future is increasingly built on data: on clicks, likes, and digital traces left behind.

As ecological, technological, and political threats converge, it is critical that we understand the tools and structures shaping our lives. Data is no longer neutral or static—it has become currency, a lens, and a lever of power.


References

Gigerenzer, G. (2022). How to stay smart in a smart world: Why human intelligence still beats algorithms. Penguin.

Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behaviour. Proceedings of the National Academy of Sciences, 110(15), 5802–5805. https://doi.org/10.1073/pnas.1218772110

Lohr, S. (2015). Data-ism: The revolution transforming decision making, consumer behavior, and almost everything else. HarperBusiness.

Dōgen / Sōtō Zen Text Project. (2023). Treasury of the True Dharma Eye: Dōgen’s Shōbōgenzō (Vols. I–VII, Annotated trans.). Sōtōshū Shūmuchō, Administrative Headquarters of Sōtō Zen Buddhism.

Zen and the Art of Dissatisfaction – Part 19

Pandora’s Livestock: How Animal Agriculture Threatens Our Planet and Our Health

The following post explores the interconnected crises of biodiversity loss, industrial animal agriculture, and climate change, presenting a comprehensive argument about humanity’s complex role in environmental degradation. Drawing from works by Bill Gates, Risto Isomäki, and others, the text combines ecological science, epidemiology, and cultural history to examine both systemic failures and potential paths forward. The post highlights how deeply entangled environmental destruction, pandemics, and human psychology are — while also questioning whether our current cognitive limits allow us to grasp and act upon such intertwined threats.

Originally published on Substack: https://substack.com/home/post/p-166887887

The destruction of ecological diversity, the shrinking habitats of wild animals, and the rise of industrial livestock production represent grave violations against the richness of life — and profound threats to humanity’s own future. These issues go beyond climate change, which is itself just one of many interconnected problems facing nature today.

The Decline of Biodiversity and the Rise of Climate Complexity

In How to Avoid a Climate Disaster (2021), Bill Gates outlines the sources of human-generated greenhouse gas emissions. Although many factors contribute to climate change, carbon dioxide (CO₂) remains the dominant greenhouse gas emitted by humans. Gates also includes emissions of methane, nitrous oxide, and fluorinated gases (F-gases) in his calculations. According to his book, the total annual global emissions amount to 46.2 billion tons of CO₂-equivalent.

These emissions are categorized by sector:

  • Manufacturing (cement, steel, plastics): 31%
  • Electricity generation: 27%
  • Agriculture (plants and animals): 19%
  • Transportation (planes, cars, trucks, ships): 16%
  • Heating and cooling: 7%

This classification is more reader-friendly than the Our World in Data approach, which aggregates emissions into broader categories like “energy”, comprising 73.2% of total emissions. In that scheme, agriculture accounts for 18.4%, waste for 3.2%, and industrial processes for 5.2%.
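For a sense of scale, Gates’ percentages can be converted into absolute tonnages of his 46.2-billion-tonne total:

```python
# Gates's sector shares converted into absolute annual emissions (text figures).
TOTAL_GT_CO2E = 46.2   # billion tonnes of CO2-equivalent per year
sectors = {
    "Manufacturing": 0.31, "Electricity": 0.27, "Agriculture": 0.19,
    "Transportation": 0.16, "Heating and cooling": 0.07,
}
for sector, share in sectors.items():
    print(f"{sector}: {TOTAL_GT_CO2E * share:.1f} Gt CO2e")   # Agriculture: 8.8
```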

According to Statistics Finland, the country emitted 48.3 million tons of CO₂ in one year, with agriculture accounting for 13.66% — aligning closely with Gates’ method. However, Finnish author and environmentalist Risto Isomäki, in How Finland Can Halt Climate Change (2019) and Food, Climate and Health (2021), argues that the contribution of animal agriculture to greenhouse gases is severely underestimated. He points out its role in eutrophication — nutrient pollution that degrades lake and marine ecosystems, harming both biodiversity and nearby property values.

Animal farming requires vast resources: water, grains, hay, medicines, and space. Isomäki notes that 80% of agricultural land is devoted to livestock, and most of the crops we grow are fed to animals rather than people. Transport, slaughter, and the distribution of perishable meat add further emissions. Official estimates attribute around 20% of global emissions to meat and other animal products, but Isomäki warns the real figure could be higher — particularly when emissions from manure-induced eutrophication are misclassified under energy or natural processes rather than livestock.

Antibiotic Resistance and Zoonotic Pandemics: The Hidden Cost of Meat

A more urgent and potentially deadly consequence of animal agriculture is the emergence of antibiotic-resistant bacteria and new viruses. Roughly 80% of all antibiotics produced globally are used on livestock — primarily as preventive treatment against diseases caused by overcrowded, unsanitary conditions. Even in Finland, where preventive use is officially banned, antibiotics are still prescribed on dubious grounds, as journalist Eveliina Lundqvist documents in Secret Diary from Animal Farms (2014).

This misuse of antibiotics accelerates antibiotic resistance, a serious global health threat. Simple surgeries have become riskier because of resistant bacterial infections. Isomäki (2021) argues that during the COVID-19 pandemic, roughly half of the deaths were linked not directly to the virus but to secondary bacterial pneumonia that antibiotics failed to treat; without resistance, he emphasises, the death toll might have been drastically lower.

Moreover, the close quarters of industrial animal farming create ideal conditions for viruses to mutate and jump species — including to humans. Early humans, living during the Ice Age, didn’t suffer from flu or measles. It was only after the domestication of animals roughly 10,000 years ago that humanity began facing zoonotic diseases — diseases that spread from animals to humans.

Smallpox, Conquest, and the Pandora’s Box of Domestication

This shift had catastrophic consequences. In the late 15th century, European colonizers possessed an unintended biological advantage: exposure to diseases their target populations had never encountered. Among the most devastating was smallpox, thought to have originated in India or Egypt over 3,000 years ago. Emerging from close contact between humans and livestock, it left distinct scars on ancient victims such as Pharaoh Ramses V, whose mummy still bears signs of the disease.

When Spanish conquistadors reached the Aztec Empire in 1519, smallpox killed over three million people. Similar destruction followed in the Inca Empire. By 1600, the Indigenous population of the Americas had dropped from an estimated 60 million to just 6 million.

Europe began vaccinating against smallpox in 1796 using the cowpox virus. Still, over 300 million people died globally from smallpox in the 20th century. Finland ended smallpox vaccinations in 1980. I personally received the vaccine as an infant before moving to Nigeria in 1978.

From COVID-19 to Fur Farms: How Modern Exploitation Fuels Pandemics

The SARS-CoV-2 virus might have originated in bats, with an unknown intermediate host — maybe a farmed animal used for meat or fur. China is a major fur exporter, and Finnish fur farmers have reportedly played a role in launching raccoon dog (Nyctereutes procyonoides) farming in China, as noted by Isomäki (2021).

COVID-19 has been shown to transmit from humans to animals, including pets (cats, dogs), zoo animals (lions, tigers), farmed minks, and even gorillas. This highlights how human intervention in wildlife and farming practices can turn animals into vectors of global disease.

Are Our Brains Wired to Ignore Global Crises?

Why do humans act against their environment? Perhaps no one intentionally destroys nature out of malice. No one wants polluted oceans or deforested childhood landscapes. But the path toward genuine, large-scale cooperation is elusive.

The post argues that we are mentally unprepared to grasp systemic, large-scale problems. According to Dunbar’s number, humans can effectively maintain social relationships within groups of 150–200 people — a trait inherited from our village-dwelling ancestors. Our brains evolved to understand relationships like kinship, illness, or betrayal within tight-knit communities — not to comprehend or act on behalf of seven billion people.

This cognitive limitation makes it hard to process elections, policy complexity, or global consensus. As a result, people oversimplify problems, react conservatively, and mistrust systems that exceed their brain’s social bandwidth.

Summary: A Call for Compassionate Comprehension

The destruction of biodiversity, the misuse of antibiotics, the threat of pandemics, and climate change are not isolated crises. They are symptoms of a deeper disconnect between human behavior and ecological reality. While no one wants the Earth to perish, the language and actions needed to protect it remain elusive. Perhaps the real challenge is not just technical, but psychological — demanding that we transcend the mental architecture of a tribal species to envision a truly planetary society.


References

Gates, B. (2021). How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need. Alfred A. Knopf.

Isomäki, R. (2019). Miten Suomi pysäyttää ilmastonmuutoksen [How Finland Can Halt Climate Change]. Into Kustannus.

Isomäki, R. (2021). Ruoka, ilmasto ja terveys [Food, Climate and Health]. Into Kustannus.

Lundqvist, E. (2014). Salainen päiväkirja eläintiloilta [Secret Diary from Animal Farms]. Into Kustannus.

Our World In Data. (n.d.). Greenhouse gas emissions by sector. Retrieved from https://ourworldindata.org/emissions-by-sector

Statistics Finland. (n.d.). Greenhouse gas emissions. Retrieved from https://www.stat.fi/index_en.html

Zen and the Art of Dissatisfaction – Part 18

Humanity’s Legacy of Extinction and Exploitation

For centuries, human societies—whether ancient hunter-gatherers or modern industrial empires—have played a central role in the extinction of Earth’s largest animals. Although we often romanticise early humans as living in harmony with nature, archaeological and ecological evidence tells a different story. This blog post explores the global impact of Homo sapiens on megafauna, marine ecosystems, and keystone species across continents and millennia, from prehistoric Africa to industrial Japan. It also highlights the ongoing environmental and ethical consequences of our actions.

Originally published in Substack: https://substack.com/@mikkoijas

Humans have consistently driven megafauna to extinction wherever they have migrated. While we may associate the last remaining hunter-gatherers in Africa, Australia, or the Americas with sustainable living, historical patterns suggest otherwise. Wherever Homo sapiens arrived, they rapidly exterminated dangerous predators, large herbivores, and flightless birds.

The Human Legacy of Megafauna Extinction

One striking exception is Africa, where large land mammals have coexisted with humans far longer. This prolonged co-evolution allowed these animals to adapt to human presence. In other parts of the world, some megafauna managed to survive alongside humans — such as various species of bears, moose, deer, and the American bison. Europe’s own bison species, the wisent, nearly went extinct in the 20th century but was saved by zoos.

Even so, ancient hunter-gatherers eventually reached a balance with their prey. Among the San people of the Kalahari, for instance, there’s a known reluctance to hunt declining species. This balance was disrupted by European settlers, leaving San communities today unable to practice their traditions freely.

In North America, indigenous peoples coexisted with the American bison until European settlers deliberately disrupted the balance. Settlers intentionally slaughtered bison to deprive native populations of their primary resource. In the 1700s, 25–30 million bison roamed the plains. By the late 1880s, systematic hunting — sometimes abetted by the U.S. Army — had reduced the population to a few hundred individuals.

Human impact has extended deep into marine ecosystems. Although coastal communities have fished for thousands of years, their practices rarely led to ecological collapse. According to Curtis Marean, a professor of archaeology at Arizona State University, early Homo sapiens may have survived an extreme ice age (c. 195,000–123,000 years ago) by turning to coastal diets. Marean’s work at Pinnacle Point near Mossel Bay has shown that ancient humans relied on seafood like shellfish and marine mammals. This dietary shift played a crucial role in the survival of early humans during a population bottleneck when their numbers dropped to a few hundred individuals.

Nearby Blombos Cave, studied by archaeologists like Christopher Henshilwood, has yielded the earliest evidence of symbolic thought and advanced tools, including beads and bone-tipped spears.

Although early coastal communities scavenged stranded whales, they did not hunt them at scale. The Romans may have conducted the first industrial-scale whale hunts, particularly in the waters around Gibraltar, as suggested by findings from Ana Rodrigues’s research team (2018). Later, the Basques became renowned whale hunters, operating from the 1000s to the 1500s across the North Atlantic. By the early 1900s, the North Atlantic right whale population had dropped to about 100. Recent estimates suggest only 336 remain today.

Tuna, Greed, and the Cold Economics of Extinction

Whales are not the only marine giants hunted to the brink. Species like the bluefin tuna have faced similar pressure. In the western Atlantic, tuna catches jumped from 1,000 tonnes in 1960 to 18,000 tonnes by 1964 — only to collapse by 80% within the same decade. In the Mediterranean, overfishing continued longer but reached catastrophic levels by 1998, leading the IUCN to classify the species as endangered.

The surge in demand came from Japan, where raw tuna is essential for sushi and sashimi. In particular, the fatty underbelly known as otoro became a luxury delicacy in the 1960s. Meanwhile, in the West, tuna was mostly used for cat food.

Today, approximately 80% of all bluefin tuna caught globally is shipped to Japan. The Japanese conglomerate Mitsubishi controls about 40% of the global market, freezing and stockpiling tuna whose market value rises as the species grows scarcer. Ironically, the Fukushima nuclear disaster compromised these stores when the electricity failed, ruining thousands of tonnes of frozen fish.

From an ecological viewpoint, Mitsubishi’s actions are deeply unethical. From an economic lens, however, they are brutally rational—rarity increases value. As stocks dwindle, prices rise, and shareholders benefit. The more endangered tuna become, the more lucrative they are.

All signs suggest that the oceans are under enormous pressure due to climate change. Seas are warming, acidifying, and absorbing unprecedented levels of carbon dioxide from human activity. In addition, they are polluted and eutrophicated by agriculture and industry.

The Baltic Sea, for example, is the most polluted marine area in the world—thanks in part to the impacts of livestock farming. The same agricultural runoff pollutes Finland’s lakes and rivers.

Ocean ecosystems are remarkably sensitive. A 2°C rise may seem minor — until we compare it to the human body. If your body temperature rose by two degrees and stayed there, you would be seriously, perhaps fatally, ill. The sea is no different.

In her book On Fire (2020), journalist Naomi Klein reflects on the 2010 Deepwater Horizon oil spill in the Gulf of Mexico. The rig, owned and operated by Transocean and leased to BP, caused the largest marine oil spill in history. Witnesses described the ocean as if it were bleeding. Klein recalls being struck by how the oil’s swirling patterns resembled prehistoric cave paintings — one shape even resembled a bird gasping for air, its eyes staring skyward.

Conclusion

From mammoths and bison to whales and tuna, humanity has left a trail of extinction and ecosystem collapse in its wake. Whether through hunting, pollution, or industrial overreach, our actions have irreversibly altered life on Earth. The myth of ancient ecological harmony dissolves under the weight of archaeological evidence and ecological reality. If we are to prevent the next wave of mass extinctions, we must confront the past honestly and reshape our relationship with the natural world—before there is nothing left to save.


References

Henshilwood, C. S., d’Errico, F., Yates, R., et al. (2002). Emergence of modern human behaviour: Middle Stone Age engravings from South Africa. Science, 295(5558), 1278–1280. https://doi.org/10.1126/science.1067575

Hickman, M. (2009). Mitsubishi and the bluefin tuna trade. The Independent. Retrieved from https://www.independent.co.uk

IUCN. (1998). Bluefin tuna listed as endangered. International Union for Conservation of Nature. https://www.iucn.org

Klein, N. (2020). On Fire: The Burning Case for a Green New Deal. Penguin Books.

Lindsay, J. (2011). Mitsubishi loses tons of tuna after Fukushima power failure. Environmental News Network. Retrieved from https://www.enn.com

Marean, C. W. (2010). When the Sea Saved Humanity. Scientific American, 303(2), 54–61.

Rodrigues, A. S. L., et al. (2018). Forgotten Mediterranean calving grounds of grey and North Atlantic right whales: Evidence from Roman archaeological records. Proceedings of the Royal Society B: Biological Sciences, 285(1882), 20180961. https://doi.org/10.1098/rspb.2018.0961

Zen and the Art of Dissatisfaction – Part 17

From Mammoth Graves to Aurochs Temples

The archaeological record offers profound insights into the lives, beliefs, and practices of our prehistoric ancestors. From elaborate burials in Russia to monumental structures in Finland, and from intricate cave paintings in France to the extinction of megafauna across continents, these remnants challenge modern perceptions of early human societies. This article delves into various significant prehistoric sites and phenomena, shedding light on the complexity and richness of early human culture.

Originally published in Substack: https://substack.com/@mikkoijas

At Sungir, on the territory of present-day Russia, Upper Palaeolithic humans left behind something truly extraordinary some 34,000 years ago: a grave in which two physically disabled children were buried together with precious treasures. The children of Sungir were adorned with beads carved from mammoth ivory — over 10,000 of them in total. The grave also contained 20 bracelets, 300 perforated fox teeth, 16 spears made from mammoth tusks, reindeer antlers, and other ornamental objects.

Unique Traces of Ancient Peoples and Lost Giants of the Ice Age

A common assumption holds that ancient hunter-gatherers were nomadic wanderers trailing game animals, leaving behind little of note. This, however, is a misconception. We know that hunter-gatherer cultures constructed massive monuments even here in Finland. The 4,500-year-old “Giant’s Church” (Kastelli) in Pattijoki is astonishing by any measure. The stone enclosure covers an area of about 2,200–2,300 square metres, with its walls rising on average 1–1.5 metres above the surrounding ground, and in some places nearly 2 metres.

Teotihuacán, located on the southern part of Mexico’s central plateau, is nowhere near as ancient, but it too was built without what we usually consider the prerequisites of urban civilisation. The city flourished in the early centuries CE, and what makes it special is the absence of advanced technology: its inhabitants did not use sophisticated metal tools, nor did they leave behind any administrative documents. The people who founded this city of around 100,000 inhabitants did not use draft animals or even the wheel in its construction. The city boasts two large pyramids, with the Pyramid of the Sun featuring sides roughly 215 metres long and a height of about 60 metres.

In the Dordogne region of southwestern France lies a particularly fascinating cave, Rouffignac. After entering, visitors board an electric train in a vast entrance hall and descend deep into the earth. The cave is, in places, so tall that the beam of a torch does not reach the ceiling. In other areas, it is so low that archaeologists had to crawl with their backs pressed against the ceiling to advance further in. After travelling about a kilometre and a half, the train stops, and the guide points to the cave wall. On the wall is an image of a woolly rhinoceros. A little later, the guide illuminates a beautiful depiction of two mammoths looking into each other’s eyes. Rhinoceroses and mammoths… in France! Like the mammoth, the woolly rhinoceros disappeared from France after the end of the Ice Age.

In 1991, French diver Henri Cosquer accidentally discovered a cave off the coast of Marseille whose entrance lies 37 metres below the surface of the Mediterranean; its upper chambers, protected by an air pocket, remained dry. Now named Cosquer Cave, its walls are adorned with paintings of seals, auks, and lions.

Before the rise of modern humans, the lion was the most widely distributed large land mammal, present wherever land routes allowed. When modern humans arrived in Central Europe, large prides of cave lions roamed the mammoth steppe. Such prides are vividly depicted on the walls of Chauvet Cave, dating to around 35,000 years ago. Cave lions, likely as dangerous to modern humans as cave bears, went extinct around the same time as the most beautiful cave paintings were being created at Lascaux.

The Lascaux cave paintings are especially famous for their massive ceiling frescoes depicting aurochs. The production of these paintings appears to have taken place on an almost industrial scale. The large ceiling artworks were executed from temporarily erected scaffolding, on which trained artists, working by the soft light of tallow lamps, painted anatomically precise depictions of wild animals that seem to float weightlessly, upside down.

The cave is often compared to the Sistine Chapel, and a visit to the replica of the Lascaux cave is a comparably moving experience. In the first chamber, known as the Hall of the Bulls, the aurochs painted on the ceiling seem dreamlike. The bulls, wild horses, and other animals appear to fly in weightless space. This is a considerable achievement, especially for paintings made without any live models. The prehistoric artists were highly skilled. At the rear of the cave is a rock featuring a depiction of a horse floating upside down. Even from this two-dimensional image, one can see that the animal has been rendered with flawless anatomical accuracy — an achievement that would be rare even among the finest animal illustrators in art history.

French archaeologist André Leroi-Gourhan (1911–1986) published several studies of the French cave paintings, the most famous of which entered public discourse in the 1960s once translated into English. Leroi-Gourhan’s great achievement was his detailed mapping of the caves and his precise counting of the depicted motifs. Aurochs appear 137 times in the 72 caves he studied. Yet aurochs were less common than horses, which appear 610 times, bison 510 times, woolly mammoths 205 times, and the easily recognisable ibex with its majestic horns 176 times (Leroi-Gourhan, 1967).

The aurochs held particular symbolic significance for Ice Age modern humans. South African archaeologist David Lewis-Williams, an expert on rock art, and his colleague David Pearce (2011) have proposed that the depiction of aurochs in European caves may have led to the first organised religions as modern humans settled into agricultural life. In southern Turkey, Çatalhöyük was, about 9,000 years ago, one of the first cities where people lived settled lives, farming the land and consuming domesticated animals. Lewis-Williams and Pearce suggest that the locals practised a form of religion centred on the aurochs.

At Çatalhöyük, there are rooms that appear to have been entered by crawling, with sculptures on the walls resembling the heads and horns of aurochs. According to Lewis-Williams and Pearce, at the core of this aurochs cult was a priesthood responsible for the domestication of sacrificial animals. We can only speculate: did humans settle down because of practical agricultural needs, or because of religious practices? These rooms might also simply be domestic spaces with decorative aurochs heads.

Ritual, Settlement, and the Mystery of Agriculture

Today, we know that the people of Çatalhöyük did not keep domesticated cattle, even though the aurochs had been domesticated a thousand years earlier in the Fertile Crescent. The inhabitants of Çatalhöyük continued to hunt wild aurochs but also farmed and raised sheep and goats.

Cities like Çatalhöyük—or even older archaeological sites in Turkey such as Göbekli Tepe—may have served as important religious gathering places, prompting the emergence of agricultural and pastoral lifestyles. But there is no certainty about which came first. Did people first settle and then begin farming? Hunter-gatherer societies may have gathered for seasonal ceremonies yet continued living in smaller, dispersed groups for parts of the year. Alternatively, such gatherings might have led to more permanent settlement—though other, likely very complex, factors were surely also involved.

Modern humans did not adopt farming universally, nor because it was the best option. Plants have been cultivated in different parts of the world for a very long time, yet some cultures abandoned agriculture and returned to hunting, fishing, and gathering. Large, complex societies have also been built in the Americas without agriculture. In these societies, the land and environment were sometimes altered to favour certain plants and animals, and rivers were dammed to enhance fishing.

The Fall of the Aurochs and the Great Auk’s Last Stand

The last aurochs lived in the Jaktorów Forest near Warsaw in Poland as late as 1627. The habitat of the aurochs gradually shrank everywhere, and its meat was especially prized. The largest aurochs were bigger than modern cattle. Later aurochs living in Denmark and Germany reached around 180 centimetres in height and weighed about 700 kilograms, but Ice Age aurochs were even larger. The aurochs immortalised on the ceiling of Lascaux Cave may have weighed up to 1,500 kilograms.

Aurochs, woolly mammoths, woolly rhinoceroses, lions, and cave bears have all disappeared from Europe. The great auk (the flightless Pinguinus impennis), depicted on the walls of Cosquer Cave, survived in places in great numbers until the 1800s, even though its use as game is evident from Stone Age excavations wherever it once lived.

Elizabeth Kolbert (2016) movingly recounts the story of the flightless great auk. Before human interference, the auk lived along the eastern Atlantic coast from Norway to Italy, and across the western Atlantic from Canada to Florida. Iceland’s first settlers dined on the easily caught bird. The auk was unafraid of humans and could be caught simply by walking up and tapping it with a stick. With the rise of cod fishing, European fishermen in the 1500s began visiting islands off Newfoundland in northeast Canada.

Funk Island, north of Newfoundland, was known for its auks. An estimated 100,000 auk pairs lived there, each pair producing a single egg — up to 100,000 eggs a season. Early European sailors easily filled their ships with these birds. People found many imaginative uses for the defenceless auk: its flesh served as fish bait, its feathers stuffed mattresses, and the oil rendered from its body was burned for fuel on the treeless, remote Atlantic islands. By the early 1800s, no auks remained on the North American coast. As Kolbert put it, the last American auk had been plucked, salted, and deep-fried.

Afterwards, the auks were confined to Geirfuglasker, an island off Iceland and their last significant habitat. A volcanic eruption destroyed the island in 1830, after which the remaining auks lived on the islet of Eldey. As they became rarer, wealthy European gentlemen competed for specimens and their eggs. The last two auks on Eldey were killed in 1844, when a dozen Icelanders rowed to the islet. There they found two auks and a single egg. Sigurður Ísleifsson, Ketill Ketilsson, and Jón Brandsson caught and strangled the birds. The last auk egg was broken during the struggle. The birds were sold to a private collector, and one of them is now part of the collection of the Natural History Museum of Los Angeles.

Giants Lost Across Continents

Large land-dwelling animals have also been forced out by humans outside of Europe. One of the best-known examples is Australia. Over 85 percent of Australian terrestrial species weighing more than 44 kilograms went extinct shortly after the arrival of modern humans around 50,000 years ago. Diprotodon, the largest known marsupial and a relative of the modern wombat, disappeared around 44,000 years ago. Diprotodon was about three metres long, two metres tall, and weighed up to three tonnes—a giant wombat. The same genus included Zygomaturus, weighing about 300–500 kilograms, which may have survived until about 35,000 years ago.

Around the same time, Palorchestes also vanished from Australia. This “ancient dancer” weighed about a tonne and occupied a niche similar to the ground sloths (Megalonychidae) of North and South America, which likewise went extinct after the Ice Age and the arrival of humans, although some individuals survived until the 1550s on the islands of Haiti and Cuba. The giant ground sloth Megatherium lived mainly in South and Central America and became extinct around 12,000 years ago with the arrival of modern humans. Megatherium measured about six metres in length and weighed four tonnes.

Almost all land animals in the Americas weighing over 44 kilograms disappeared after the arrival of humans—giant armadillos weighing around a tonne, giant beavers over 100 kilograms, woolly mammoths, and nearly tonne-sized, cold-adapted camel relatives. Around the same time, Smilodon, the 400-kilogram, lion-height but far more robust sabre-toothed cat, also became extinct in both North and South America.

Conclusion

The archaeological and paleontological records underscore the complexity, adaptability, and impact of early human societies. From constructing monumental architecture and creating intricate art to influencing the extinction of megafauna, our ancestors demonstrated remarkable ingenuity and left enduring legacies that continue to inform our understanding of human history.


References

Kolbert, E. (2016). The Sixth Extinction: An Unnatural History. Bloomsbury Publishing.

Leroi-Gourhan, A. (1967). The Dawn of European Art: An Introduction to Palaeolithic Cave Painting. Cambridge University Press.

Lewis-Williams, D., & Pearce, D. (2011). Inside the Neolithic Mind: Consciousness, Cosmos and the Realm of the Gods. Thames & Hudson.

Roberts, R. G., Flannery, T. F., Ayliffe, L. K., et al. (2001). New Ages for the Last Australian Megafauna: Continent-Wide Extinction About 46,000 Years Ago. Science, 292(5523), 1888–1892. https://doi.org/10.1126/science.1060264