Zen and the Art of Dissatisfaction – Part 23

Bullshit Jobs and Smart Machines

This post explores how many of today’s high‑paid professions depend on collecting and analysing data, and on decisions made on the basis of that process. Drawing on thinkers such as Hannah Arendt, Gerd Gigerenzer, and others, I examine the paradoxes of complex versus simple algorithms, the ethical dilemmas arising from algorithmic decision‑making, and how automation threatens not only unskilled but increasingly highly skilled work. I also situate these issues in historical context, from the Fordist assembly line to modern AI’s reach into law and medicine.

Originally published on Substack: https://substack.com/inbox/post/170023572

Many contemporary highly paid professions rely on data gathering, its analysis, and decisions based on that process. According to Hannah Arendt (2017 [original work published 1958]), such a threat already existed in the 1950s when she wrote:

“The explosive population growth of today has coincided frighteningly with technological progress that makes vast segments of the population unnecessary—indeed superfluous as a workforce—due to automation.”

In the words of David Ferrucci, the leader of Watson’s Jeopardy! team, the next phase in AI’s development will evaluate data and causality in parallel. The way data is currently used will change significantly when algorithms can construct data‑based hypotheses, theories and mental models answering the question “why?”

The paradox of complexity: simple versus black‑box algorithms

Paradoxically, one of the biggest problems with complex algorithms such as Watson and Google Flu Trends is their very complexity. Gerd Gigerenzer (2022) argues that simple, transparent algorithms often outperform complex ones. He criticises secret machine‑learning “black‑box” systems that search vast proprietary datasets for hidden correlations without any understanding of the physical or psychological principles of the world. Such systems can make bizarre errors by mistaking correlation for causation: Swiss chocolate consumption correlates with the number of Nobel Prize winners, and drowning deaths in American pools with the number of films starring Nicolas Cage. An even stronger spurious correlation links the age of Miss America to murder rates: in years when Miss America is aged twenty or younger, fewer murders are committed by steam, hot vapours, and hot objects. Gigerenzer advocates open, simple algorithms instead: for example, The Keys to the White House, developed in 1981 by historian Allan Lichtman and geophysicist Vladimir Keilis‑Borok, which has correctly predicted every US presidential election since 1984, with the single exception of the disputed Al Gore vs. George W. Bush contest.
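Gigerenzer’s preference for transparency is easy to make concrete. A Keys-style model is nothing but a count of yes/no conditions compared against a fixed threshold; anyone can read it, audit it, and argue with it. The sketch below is a simplified illustration, not Lichtman’s actual model: the key names and the threshold are invented stand-ins for his thirteen keys.

```python
# A transparent, Keys-style prediction rule: count true/false conditions
# against a fixed threshold. These key names are simplified stand-ins,
# not Lichtman's actual thirteen keys.

KEYS = [
    "no_serious_primary_contest",
    "candidate_is_sitting_president",
    "strong_short_term_economy",
    "strong_long_term_economy",
    "major_policy_change_achieved",
    "no_social_unrest",
    "no_major_scandal",
]

def incumbent_party_wins(answers: dict) -> bool:
    """The incumbent party holds the White House unless enough keys
    turn false. In the real model six or more false keys predict defeat;
    here the threshold is scaled to this shorter, illustrative list."""
    keys_against = sum(1 for key in KEYS if not answers.get(key, False))
    return keys_against < 4

# One false key is not enough to predict defeat:
example = {key: True for key in KEYS}
example["no_major_scandal"] = False
print(incumbent_party_wins(example))  # True
```

The entire model fits on a screen, which is exactly Gigerenzer’s point: its reasoning can be inspected and challenged in a way a black-box system’s cannot.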

Cases in which individuals have received long prison sentences illustrate how secret, proprietary algorithms such as COMPAS (“Correctional Offender Management Profiling for Alternative Sanctions”) produce risk assessments that can label defendants as high‑risk recidivists. Such black‑box systems, which may determine citizens’ liberty, pose enormous risks to individual freedom. Similar hidden algorithms are used in credit scoring and insurance: citizens are unknowingly categorised and subjected to prejudices that constrain their opportunities in society.

The industrial revolution, automation, and the meaning of work

Even if transformative technologies like Watson may fail to deliver on all the bold promises made by IBM’s marketing, algorithms are steadily doing tasks once carried out by humans. Just as industrial machines displaced heavy manual labour and beasts of burden—especially in agriculture—today’s algorithms are increasingly supplanting cognitive roles.

Since the Great Depression of the 1930s, warnings have circulated that automation would render millions unemployed. British economist John Maynard Keynes (1883–1946) coined the term “technological unemployment” to describe this risk. As David Graeber (2018) notes, automation did indeed destroy vast amounts of work. Political forces on both the right and left share a deep belief that paid employment is essential for moral citizenship, and they agree that unemployment in wealthy countries should never exceed around 8 percent. Graeber nonetheless argues that automation produced a collapse in the real need for work, and that the gap has been filled with “bullshit jobs”: if 37–40 percent of jobs are such meaningless roles, and much of the remaining work exists only to support them, then 50–60 percent of the population is in effect already unemployed.

Karl Marx warned of industrial alienation, where people are uprooted from their villages and placed into factories or mines to do simple, repetitive work requiring no skill, knowledge or training, which leaves them easily replaceable. Global corporations have shifted assembly lines and mines to places where workers have few rights, as seen in electronics assembly in Chinese factory towns, garment workshops in Bangladesh, and mineral extraction by enslaved children, all under appalling conditions.

Henry Ford’s Western, egalitarian idea of the assembly line, that all workers are equal, became a system in which anybody can be replaced. Charles Chaplin highlighted our dependence on machines in his 1936 film Modern Times, inspired by his 1931 encounter with Mahatma Gandhi. Gandhi argued that Britain had enslaved Indians through its machines; through non‑violent resistance and self‑sufficiency he sought to show that Indians needed neither British machines nor Britain itself.

From industrial jobs to algorithmic threat to professional work

When Ford’s moving assembly line began operating in 1913, the Model T moved through 45 fixed stations and was completed in 93 minutes, an idea borrowed from Chicago slaughterhouses, where carcasses moved past stationary cutters. Though just 8 percent of the American workforce was engaged in manufacturing by the 1940s, automation created jobs in transport, repair, and administration, albeit jobs that often required only low‑skilled labour.

Today, AI algorithms threaten not only blue‑collar but also white‑collar roles. Professions requiring long training, such as lawyers and doctors, are now at risk. AI systems can assess precedent for legal cases more accurately than humans. While such systems promise reliability, they also bring profound ethical risks. Human judges are fallible: one Israeli study suggested that judges hand down harsher sentences just before lunch than just after, though that finding has been contested on the grounds that cases were ordered by severity. Yet such results are still invoked to support claims of AI’s superiority.

Summary

This blog post has considered how our economy is increasingly structured around data collection, analysis, and decision‑making by both complex and simple algorithms. It has explored the paradox that simple, transparent systems can outperform opaque ones, and highlighted the grave risks posed by black‑box algorithms in criminal justice and financial systems. Tracing the legacy from Fordist automation to modern AI, I have outlined the existential threats posed to human work and purpose—not only for low‑skilled labour but for highly skilled professions. The text argues that while automation may deliver productivity, it also risks alienation, injustice, and meaninglessness unless we critically examine the design, application, and social framing of these systems.


References

Arendt, H. (2017). The Human Condition (Original work published 1958). University of Chicago Press.
Ferrucci, D. (n.d.). [Various works on IBM Watson]. IBM Research.
Gigerenzer, G. (2022). How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms. MIT Press.
Graeber, D. (2018). Bullshit Jobs: A Theory. Simon & Schuster.
Keynes, J. M. (1930). Economic Possibilities for our Grandchildren. Macmillan.
Lee, C. J. (2018). The misinterpretation of the Israeli parole study. Nature Human Behaviour, 2(5), 303–304.
Lichtman, A., & Keilis-Borok, V. (1981). The Keys to the White House. Rowman & Littlefield.

Zen and the Art of Dissatisfaction – Part 21

Data: The Oil of the Digital Age

Data applications rely fundamentally on data: its extraction, collection, storage, interpretation, and monetisation. This arguably makes them the most significant feature of our contemporary world. Often referred to as “the new oil,” data is, from a capitalist perspective, a valuable resource capable of sustaining economic growth even after conventional natural reserves have been exhausted. This new form of capitalism has been termed surveillance capitalism (Zuboff, 2019).

Originally published on Substack: https://substack.com/@mikkoijas

Data matters more than opinions. For developers of data applications, the key goal is that we browse online, click “like,” follow links, spend time on their platforms, and accept cookies. What we think or do does not matter; what matters is the digital behavioural surplus, a trace we leave and our consent to tracking. That footprint has become immensely valuable—companies are willing to pay for it, and sometimes break laws to get it.

Cookies and Consumer Privacy in Europe

European legislation like the General Data Protection Regulation (GDPR) ensures some personal protection, but we still leave traces even if we refuse to share personal data. Websites are legally obligated to request our cookie consent, making privacy violations more visible. Rejecting cookies and clearing them out later becomes a time-consuming and frustrating chore.

In stark contrast, China’s data laws are much more relaxed, granting companies broader operational freedom. The more data a company gathers, the more fine-tuned its predictive algorithms can be. It’s much like environmental regulation: European firms are restricted from drilling for oil in protected areas, which reduces profit but protects nature. Chinese firms, unrestrained by such limits, may harm ecosystems while driving profits. In the data realm, restrictive laws narrow the available datasets. Because Chinese firms can harvest freely, they may gain a major competitive edge that could help them lead the global AI market.

Data for Good: Jeff Hammerbacher’s Vision

American data scientist Jeff Hammerbacher is one of the field’s most influential figures. As journalist Steve Lohr (2015) reports, Hammerbacher started on Wall Street and later helped build Facebook’s data infrastructure. Today, he directs data collection and interpretation toward the goal of improving human lives, a guiding ethos across the data industry. According to Hammerbacher, we must understand the current data landscape to predict the future. Practically, this means equipping everything we care about with sensors that collect data. His current focus? Transforming medicine by centring it on data. Data science is one of the most promising fields, where evidence trumps intuition.

Hammerbacher has been particularly interested in mental health and how data can improve psychological wellbeing. His close friend and former classmate, Steven Snyder, tragically died by suicide after struggling with bipolar disorder. This event, combined with Hammerbacher’s own breakdown at age 27—after being diagnosed with bipolar disorder and generalised anxiety disorder—led him to rethink his life. He notes that mental illness is a major cause of workforce dropout and ranks third among causes of early death. Researchers are now collecting neurobiological data from those with mental health conditions. Hammerbacher calls this “one of the most necessary and challenging data problems of our time.”

Pharmaceuticals haven’t solved the issue. Selective serotonin reuptake inhibitors (SSRIs), introduced in the 1980s, have failed to deliver a breakthrough for mood disorders. These remain a leading cause of death: roughly 90% of suicides involve untreated or poorly treated mood disorders, and about 50% of Western populations are affected at some point. The greater challenge lies in defining mental wellness: should people simply adapt to lives that do not fit them?

“Bullshit Jobs” and Social Systems

Investigative anthropologist David Graeber (2018) reported that 37–40% of Western workers view their jobs as “bullshit”—work they see as socially pointless. Thus, the problem isn’t merely psychological; our entire social structure normalises employment that values output over wellbeing.

Data should guide smarter decisions. Yet as our world digitises, data accumulates faster than our ability to interpret it. As Steve Lohr (2015) notes, a 20-bed intensive care unit can generate around 160,000 data points per second—a torrent demanding constant vigilance. Still, this data deluge offers positive outcomes: continuous patient monitoring enables proactive, personalised care.
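The scale of Lohr’s figure is worth working out; a few lines of arithmetic show why interpretation, rather than collection, becomes the bottleneck.

```python
# Scale of the data deluge: 160,000 data points per second from one
# 20-bed intensive care unit, extrapolated to a day.

points_per_second = 160_000
seconds_per_day = 24 * 60 * 60   # 86,400

per_day = points_per_second * seconds_per_day
per_bed_per_day = per_day // 20

print(f"{per_day:,} points per day")          # 13,824,000,000
print(f"{per_bed_per_day:,} points per bed")  # 691,200,000
```

Nearly fourteen billion data points a day from a single ward: no human team can read that, which is why the value lies in algorithms that summarise and flag, not in the raw stream itself.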

Data-driven forecasting is set to reshape society, concentrating power and wealth. Not long ago, anyone could found a company; now a single corporation can dominate an entire sector with superior data. A case in point is the partnership between McKesson and IBM. In 2009, IBM researcher Kaan Katircioglu sought data for predictive modelling and found it at McKesson: clean datasets recording medication inventory, prices, and logistics. IBM used these to build a predictive model, enabling McKesson to optimise its warehouse near Memphis and improve delivery accuracy from 90% to 99%.
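IBM’s model for McKesson is proprietary, so the following is only a minimal sketch of the general idea behind such inventory optimisation, using the standard textbook reorder-point formula: estimate demand from recent history and hold enough safety stock to cover demand variability over the delivery lead time. The figures are invented for illustration.

```python
import statistics

def reorder_point(daily_demand_history, lead_time_days, safety_factor=1.65):
    """Classic reorder-point heuristic: expected demand over the lead time
    plus a safety buffer scaled by demand variability. A textbook formula,
    not IBM's proprietary model."""
    mean_demand = statistics.mean(daily_demand_history)
    demand_sd = statistics.stdev(daily_demand_history)
    expected = mean_demand * lead_time_days
    safety_stock = safety_factor * demand_sd * lead_time_days ** 0.5
    return expected + safety_stock

# Units shipped per day over the past week; restock takes 3 days:
history = [120, 135, 110, 150, 125, 140, 130]
print(round(reorder_point(history, lead_time_days=3)))  # reorder threshold
```

When stock falls below the returned level, a new order is triggered. The data-quality point from the McKesson story applies directly: the formula is trivial, but it is only as good as the cleanliness of the demand history fed into it.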

At present, data-mining algorithms behave as clever tools. An algorithm is simply a set of steps for solving problems—think cooking recipes or coffee machine programming. Even novices can produce impressive outcomes by following a good set of instructions.
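The recipe analogy can be made literal. Euclid’s algorithm for the greatest common divisor, one of the oldest algorithms known, is nothing more than two short steps repeated until done:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm as a two-step recipe.
    Step 1: if b is zero, a is the answer.
    Step 2: otherwise replace (a, b) with (b, a mod b) and repeat."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```

A novice who follows the steps gets exactly the same result as an expert, which is precisely what makes an algorithm an algorithm.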

Historian Yuval Noah Harari (2015) provocatively suggests we are ourselves algorithms. Unlike machines, our algorithms run on emotions, perceptions, and thoughts: biological processes shaped by evolution, environment, and culture.

Summary

Personal data is the new source of extraction and exploitation—vital for technological progress yet governed by uneven regulations that determine competitive advantage. Pioneers like Jeff Hammerbacher highlight its potential for social good, especially in mental health, while revealing our complex psychology. We collect data abundantly, yet face the challenge of interpreting it effectively. Predictive systems can drive efficiency, but they can also foster monopolies. Ultimately, whether data serves or subsumes us depends on navigating its ethical, legal, and societal implications.


References

Graeber, D. (2018). Bullshit Jobs: A Theory. New York: Simon & Schuster.
Hammerbacher, J. (n.d.). [Interview in Lohr 2015].
Harari, Y. N. (2015). Homo Deus: A History of Tomorrow. New York: Harper.
Lohr, S. (2015). Data-ism: The Revolution Transforming Decision Making, Consumer Behavior, and Almost Everything Else. New York: Harper Business.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.

Zen and the Art of Dissatisfaction – Part 14

Manufacturing Desire

In an era when technological progress promises freedom and efficiency, many find themselves paradoxically more burdened, less satisfied, and increasingly detached from meaningful work and community. The rise of artificial intelligence and digital optimisation has revolutionised industries and redefined productivity—but not without cost. Beneath the surface lies a complex matrix of invisible control, user profiling, psychological manipulation, and systemic contradictions. Drawing from anthropologists, historians, and data scientists, this post explores how behaviour modification, corporate surveillance, and the proliferation of “bullshit jobs” collectively undermine our autonomy, well-being, and connection to the natural world.

Originally published on Substack: https://substack.com/home/post/p-164145621

Manipulation of Desire

Large language models, or AI tools, are designed to optimise production by quantifying employees’ contributions relative to overall output and costs. This logic, however, rarely applies to upper management—those who oversee the operation of these very systems. Anthropologist David Graeber (2018) emphasised that administrative roles have exploded since the late 20th century, especially in institutions like universities where hierarchical roles were once minimal. He noted that science fiction authors can envision robots replacing sports journalists or sociologists, but never the upper-tier roles that uphold the basic functions of capitalism.

In today’s economy, these “basic functions” involve finding the most efficient way to allocate available resources to meet present or future consumer demand—a task Graeber argues could be performed by computers. He contends that the Soviet economy faltered not because of its structure, but because it collapsed before the era of powerful computational coordination. Ironically, even in our data-rich age, not even science fiction dares to imagine an algorithm that replaces executives.

Ironically, the power of computers is not being used to streamline economies for collective benefit, but rather to refine the art of influencing individual behaviour. Instead of coordinating production or replacing bureaucracies, these tools have been repurposed for something far more insidious: shaping human desires, decisions, and actions. From a Buddhist perspective, the manipulation of human desire sounds dangerous. The Buddha said that the cause of suffering and dissatisfaction is tanha, usually translated as desire, craving, or thirst. If human desire is manipulated and controlled, we can be sure that suffering will not end as long as we rely on surveillance capitalism. To understand how we arrived at this point, we must revisit the historical roots of behaviour modification and the psychological tools developed in times of geopolitical crisis.

The roots of modern behaviour modification trace back to mid-20th-century geopolitical conflicts and psychological experimentation. During the Korean War, alarming reports emerged about American prisoners of war allegedly being “brainwashed” by their captors. These fears catalysed the CIA’s MKUltra program: covert mind-control experiments carried out at institutions like Harvard, often without subjects’ consent.

Simultaneously, B. F. Skinner’s behaviourist theories gained traction. Skinner argued that human behaviour could be shaped through reinforcement, laying the groundwork for widespread interest in behaviour modification. Although figures like Noam Chomsky would later challenge Skinner’s reductionist model, the seed had been planted.

What was once a domain of authoritarian concern is now the terrain of corporate power. In the 21st century, the private sector, particularly the tech giants, has perfected the tools of psychological manipulation. Surveillance capitalism, a term coined by Harvard professor Shoshana Zuboff, describes how companies now collect and exploit vast quantities of personal data to subtly influence consumer behaviour. It is very possible that your local supermarket is gathering data on your purchases and building a detailed user profile, which in turn is sold to its partners. These practices, once feared as mechanisms of totalitarian control, are now normalised as personalised marketing. Yet the core objective remains the same: to predict and control human action, and to turn that prediction into profit.

Advertising, Children, and the Logic of Exploitation

In the market economy, advertising reigns supreme. It functions as the central nervous system of consumption, seeking out every vulnerability, every secret desire. Jeff Hammerbacher, a data scientist and early Facebook engineer, resigned in disillusionment after realising that some of the smartest minds of his generation were being deployed to optimise ad clicks rather than solve pressing human problems.

Today’s advertising targets children. Their impulsivity and emotional responsiveness make them ideal consumers—and they serve as conduits to their parents’ wallets. Meanwhile, parents, driven by guilt and affection, respond to these emotional cues with purchases, reinforcing a cycle that ties family dynamics to market strategies.

Devices meant to liberate us—smartphones, microwave ovens, robotic vacuum cleaners—have in reality deepened our dependence on the very system that demands we work harder to afford them. Graeber (2018) terms the work that sustains this cycle “bullshit jobs”: roles that exist not out of necessity, but to perpetuate economic structures. These jobs are often mentally exhausting, seemingly pointless, and maintained only out of fear of financial instability.

Such jobs typically require a university degree or social capital and are prevalent at managerial or administrative levels. They differ from “shit jobs,” which are low-paid but societally essential. Bullshit jobs include roles like receptionists employed to project prestige, compliance officers producing paperwork no one reads, and middle managers who invent tasks to justify their existence.

Historian Rutger Bregman (2014) observes that medieval peasants, toiling in the fields, dreamt of a world of leisure and abundance. By many metrics, we have achieved this vision—yet rather than rest, we are consumed by dissatisfaction. Market logic now exploits our insecurities, constantly inventing new desires that hollow out our wallets and our sense of self.

Ecophilosopher Joanna Macy and Dr. Chris Johnstone (2012) give a telling example from Fiji, where eating disorders like bulimia were unknown before the arrival of television in 1995. Within three years, 11% of girls suffered from it. Media does not simply reflect society—it reshapes it, often violently. Advertisements now exist to make us feel inadequate. Only by internalising the belief that we are ugly, fat, or unworthy can the machine continue selling us its artificial solutions.

The Myth of the Self-Made Individual

Western individualism glorifies self-sufficiency, ignoring the fundamental truth that humans are inherently social and ecologically embedded. From birth, we depend on others. As we age, our development hinges on communal education and support.

Moreover, we depend on the natural world: clean air, water, nutrients, and shelter. Indigenous cultures like the Iroquois/Haudenosaunee express gratitude to crops, wind, and sun. They understand what modern society forgets—that survival is not guaranteed, and that gratitude is a form of moral reciprocity.

In the Kalahari, the San people question whether they have the right to take an animal’s life for food, especially when its species nears extinction. In contrast, American officials once proposed exterminating prairie dogs on Navajo/Diné land to protect grazing areas. Navajo elders objected: “If you kill all the prairie dogs, there will be no one to cry for the rain.” The officials went ahead, the ecosystem collapsed, and desertification followed. Nature’s interconnectedness, ignored by policymakers, proved devastatingly real.

Macy and Johnstone argue that the public is dangerously unaware of the scale of ecological and climate crises. Media corporations, reliant on advertising, have little incentive to tell uncomfortable truths. In the U.S., for example, television is designed not to inform, but to retain viewers between ads. News broadcasts instil fear, only to follow up with advertisements for insurance—offering safety in a world made to feel increasingly dangerous.

Unlike in Finland or other nations with public broadcasters, American media is profit-driven and detached from public interest. The result is a population bombarded with fear, yet denied the structural support—like healthcare or education—that would alleviate the very anxieties media stokes.

Conclusions 

The story of modern capitalism is not just one of freedom, but also of entrapment: psychological, economic, and ecological. Surveillance capitalism has privatised control, bullshit jobs sap our energy, and advertising hijacks our insecurities. Yet throughout this tangled web, there remain glimmers of alternative wisdom: indigenous respect for the earth, critiques from anthropologists, and a growing awareness of the need for systemic change.

The challenge ahead lies not in refining the algorithms, but in reclaiming the meaning and interdependence lost to them. A liveable future demands more than innovation; it requires imagination, gratitude, and a willingness to dismantle the myths we’ve mistaken for progress.


References

Bregman, R. (2014). Utopia for Realists: And How We Can Get There. Bloomsbury Publishing.
Chomsky, N. (1959). A review of B. F. Skinner’s Verbal Behavior. Language, 35(1), 26–58.
Eisenstein, C. (2018). Climate: A New Story. North Atlantic Books.
Graeber, D. (2018). Bullshit Jobs: A Theory. New York: Simon & Schuster.
Hammerbacher, J. (n.d.). As cited in interviews on ethical technology, 2013–2016.
Johnstone, C., & Macy, J. (2012). Active Hope: How to Face the Mess We’re in Without Going Crazy. New World Library.
Loy, D. R. (2019). Ecodharma: Buddhist Teachings for the Ecological Crisis. New York: Wisdom Publications.
Skinner, B. F. (1953). Science and Human Behavior. Macmillan.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: PublicAffairs.