Zen and the Art of Dissatisfaction – Part 34

Navigating the Times of Crisis

In a rapidly changing world, where the climate crisis, technological advancements, and social inequality loom large, many may feel overwhelmed by the forces shaping our future. Yet, in the face of such challenges, simple spiritual practices can offer us ways to navigate uncertainty and find meaning. Drawing on Eastern philosophy, we are reminded that the pursuit of peace, both within ourselves and in the world, is a path we can all walk.

Photo: Buddhist monk Sokan Obara, 28, from Morioka, Iwate prefecture, prays for the victims in an area devastated by the earthquake and tsunami, in Ofunato, Iwate prefecture, April 7. Unknown photographer.

According to some estimates, our planet is heading towards a hothouse Earth scenario, where runaway climate change threatens the future of human civilisation (Steffen et al., 2018). This process will particularly affect the global South, countries that continue to bear the brunt of colonialism’s harmful legacy, yet have contributed the least to global warming, rising sea levels, and environmental degradation.

The Challenge of Our Time: Climate Crisis and Technology

The rise of artificial intelligence (AI) and its reliance on algorithms may also lead to large tech companies becoming the global decision-makers, shaping the economy and politics of the world. This shift could pose an existential challenge to the global South, as demand for human manual labour diminishes, further exacerbating social inequities.

But should we panic and give up hope? Is a hedonistic "live for today" attitude the only remaining solution?

Philosopher David Loy (2019) has spent decades exploring the answers Eastern philosophies may offer to help us navigate these challenges. One such concept is the bodhisattva ideal: in Sanskrit, a bodhisattva is an awakened being who recognises the interconnectedness of all life. The bodhisattva understands that their well-being is intricately linked to the well-being of the world as a whole.

An embodiment of this ideal is Kanzeon (also known as Avalokiteśvara in Sanskrit and Guanyin in Chinese), a figure often depicted with a thousand arms, symbolising the countless ways in which this figure reaches out to help those in need. Another popular figure embodying the bodhisattva’s compassion is Hotei (also known as Budai in Chinese), a joyful, portly monk carrying a large bag, from which he pulls out healing remedies for the world’s suffering—whether it be a bandage for a fallen child or a new kidney for the ill.

Embracing Sorrow: The First Step Towards Action

The destruction of biodiversity and the decline of democracy are deeply sorrowful realities. Accepting this sorrow is the first step toward constructive action. As the great Joanna Macy (2021) reminded us, we are saddened by the loss of ecological diversity because we care. Our hearts break, and yet it is precisely our hearts that allow us to take action.

Acceptance of sorrow may lead us to take meaningful steps toward creating a better, fairer future. Paradoxically, to help the world, we must first let go and turn inward. The path of the peacemaker has two sides: one must care for one's own well-being and strive to awaken to the oneness of life, but one must also acknowledge one's own responsibility within that oneness and act accordingly.

The most basic spiritual practice that can help us on this path is mindfulness, which can begin with simply sitting in silence and staying aware of the open nature of our own mind. Through this practice, we can observe not just the sensations of our body, but also the nature of our mind. While suffering and dissatisfaction may not disappear, we can examine our relationship with them. Over time, our relationship with our innate dissatisfaction may change.

This process can also unveil the awareness that the nature of our mind is unknown to us. All the thoughts and emotions that arise in our mind come from someplace we cannot know – from the unknown. This insight may lead us to consider that the same interplay of consciousness occurs across all life forms. All beings have thoughts, ideas, and feelings, yet we cannot know exactly what another experiences.

American Zen teacher Bernie Glassman (1998) reminded us that we need to let go of our preconceived notions and ideas and trust Not-Knowing. The next step on the peacemaker's path is listening, or Bearing Witness. We must pause for a moment and pay attention to what is happening around us, to what others are trying to communicate. Stopping to listen to others' perspectives may challenge our previous assumptions, attitudes, and beliefs. The third step is action – Loving Action that arises from this process of not-knowing and deep listening.

The Peacemaker’s Responsibility

A peacemaker responds to each situation in a way that is appropriate. When we realise we are interconnected with everything, we also feel a personal responsibility. If we are tired, we must rest. If we are hungry, we must eat. We care for our children, ensuring they are picked up from daycare, fed, and put to bed on time. We help those who fall.

Every day, we can ask ourselves: what can we do for others, since others are ourselves?

A peacemaker may also come to see that the systems in place often work for the benefit of a few and harm the oneness of life. They may feel compelled to influence these unjust systems, helping others realise, through their own example, that the current system damages life and its interconnectedness. The peacemaker does not demand change forcefully, nor do they try to impose their will on everyone else. The peacemaker listens to all perspectives and seeks to show, through their own actions, the interconnectedness and oneness of life.

The Struggle for Change

But how do we act in a world full of injustice and suffering? We often try to force others to change their minds and behave differently. But will that lead to the outcome we desire? The peacemaker’s ideal involves helping others through not-knowing, listening, and taking loving action. Through this process, they hope to find the best solutions for the wholeness of life. The peacemaker is not just hoping for change, but becomes the change themselves.

This kind of action is exceedingly difficult. The easiest solution may be to demand change, but would that help anyone realise the harm their actions cause? Mahatma Gandhi’s concept of civil disobedience and nonviolent resistance aimed to make the opposition recognise the wrongness of their violent actions. Nonviolent resistance has brought about significant change in the world when enough people collectively stand behind a cause.

However, we do not need to start by changing everything. We do not need to be Gandhi today. First, we must learn to know ourselves. Despite knowing much about the workings of the human brain and mind, we often fail to understand our own mind. We think of ourselves as the rulers of our own mind and consciousness, but we are barely gatekeepers. Even as gatekeepers, we often wander aimlessly through our minds like Snufkin in the Moomin stories.

The first appropriate step on the peacemaker’s path may simply be to sit down and be quiet for a moment.

Conclusion

The journey of a peacemaker is not easy, nor is it straightforward. It requires us to embrace sorrow, realise our interconnectedness, and take action in ways small and large. But ultimately, it is through open awareness of the nature of our mind, and through compassion, that we can navigate the complexities of a diverse world and contribute to a more peaceful and just future for all life.

References

Glassman, B. (1998). Bearing Witness: A Zen Master's Lessons in Making Peace. Bell Tower.
Loy, D. R. (2019). Ecodharma: Buddhist Teachings for the Ecological Crisis. Wisdom Publications.
Macy, J. (2021). Active Hope: How to Face the Mess We’re in without Going Crazy. New World Library.
Steffen, W., Rockström, J., Richardson, K., Lenton, T. M., Folke, C., Liverman, D., … & Schellnhuber, H. J. (2018). Trajectories of the Earth System in the Anthropocene. Proceedings of the National Academy of Sciences, 115(33), 8252-8259. https://doi.org/10.1073/pnas.1810141115

Zen and the Art of Dissatisfaction – Part 33

From Poverty to Productivity

Across the world, economists, sociologists and policymakers have long debated whether providing people with an unconditional basic income could help lift them out of poverty. Despite numerous pilot projects, there are relatively few long-term studies showing the large-scale social and health impacts of such measures. One striking exception, highlighted by the Dutch historian Rutger Bregman, provides rare empirical evidence of how a sudden, guaranteed flow of money can transform an entire community — not just economically, but psychologically and socially.

In 1997, in the state of North Carolina, the Eastern Band of Cherokee people opened the Harrah’s Cherokee Casino Resort. By 2010, the casino’s annual revenues had reached around 400 million USD, where they have remained relatively stable ever since. The income was used to build a new school, hospital and fire station — but the most significant portion of the profits went directly to the tribe’s members, about 8,000 in total.

The Findings: Money Really Did Change Everything

By 2001, the funds from the casino already accounted for roughly 25–33 per cent of household income for many families. These payments acted, in effect, as an unconditional basic income.

What made this case extraordinary was that, purely by coincidence, a research group led by psychiatrist Jane Costello at Duke University had been tracking the mental health of young people in the area since 1991. This provided a unique opportunity to compare the same community before and after the introduction of this new source of income.

Costello’s long-term data revealed that children who had grown up in poverty were far more likely to suffer from behavioural problems than their better-off peers. Yet after the casino opened — and the Cherokee families’ financial situation improved — behavioural problems among children lifted out of poverty declined by up to 40 per cent, reaching levels comparable to those of children from non-poor households.

The benefits went beyond behaviour. Youth crime, alcohol consumption and drug use all decreased, while school performance improved significantly. Ten years later, researchers found that the earlier a child had been lifted out of poverty, the better their mental health as a teenager.

Bregman (2018) uses this case to make a clear point: poverty is not caused by laziness, stupidity or lack of discipline. It is caused by not having enough money. When poor families finally have the financial means to meet their basic needs, they frequently become more productive citizens and better parents.

In his words, “Poor people don’t make stupid decisions because they are stupid, but because they live in a context where anyone would make stupid decisions.” Scarcity — whether of time or money — narrows focus and drains cognitive resources, leading to short-sighted, survival-driven choices. And as Bregman puts it poignantly:

“There is one crucial difference between the busy and the poor: you can take a holiday from busyness, but you can’t take a holiday from poverty.”

How Poverty Shapes the Developing Brain

The deeper roots of these findings lie in how poverty and stress affect brain development and emotional regulation. The Canadian physician and trauma expert Gábor Maté (2018) explains how adverse childhood experiences — tallied as ACE scores — are far more common among children raised in poverty. Such children face a higher risk of being exposed to violence or neglect, or of witnessing domestic conflict in their homes and neighbourhoods.

Chronic stress, insecurity and emotional unavailability of caregivers can leave lasting marks on the developing brain. The orbitofrontal cortex — located behind the eyes and crucial for interpreting non-verbal emotional cues such as tone, facial expressions and pupil size — plays a vital role in social bonding and empathy. If parents are emotionally detached due to stress, trauma or substance use, this brain region may develop abnormally.

Maté describes how infants depend on minute non-verbal signals — changes in the caregiver’s pupils or micro-expressions — to determine whether they are safe and loved. Smiling faces and dilated pupils signal joy and security, whereas flat or constricted expressions convey threat or absence. These signals shape how a child’s emotional circuits wire themselves for life.

When children grow up surrounded by tension or neglect, they may turn instead to peers for validation. Yet peer-based attachment, as Maté notes, often fosters riskier behaviour: substance use, early pregnancy, and susceptibility to peer pressure. Such patterns are not signs of inherent cruelty or weakness, but rather of emotional immaturity born of unmet attachment needs.

Not Just a Poverty Problem: The Role of Emotional Availability

Interestingly, these developmental challenges are not confined to low-income families. Children from wealthy but emotionally absent households often face similar struggles. Parents who are chronically busy or glued to their smartphones may be physically present yet emotionally unavailable. The result can be comparable levels of stress and insecurity in their children.

Thus, whether a parent is financially poor or simply time-poor, the emotional outcome for the child can be strikingly similar. In both cases, high ACE scores predict poorer mental and physical health, lower educational attainment, and reduced social mobility.

While Finland is often praised for its high social mobility, countries like the United States show a much stronger intergenerational persistence of poverty. In rigidly stratified societies, the emotional and economic consequences of childhood disadvantage are far harder to escape.

Towards a More Humane Future: Basic Income and the AI Revolution

As artificial intelligence reshapes industries and redefines the meaning of work, society faces a profound question: how do we ensure everyone has the means — and the mental space — to live well?

If parents could earn their income doing the work they truly value, rather than chasing pay cheques for survival, they would likely become more productive, more fulfilled, and more emotionally attuned to their children. In turn, those children would grow into healthier, happier adults, capable of sustaining positive cycles of wellbeing and productivity.

Such an outcome would not only enhance individual happiness but would also reduce public expenditure on health care, policing and welfare. Investing in people’s emotional and economic stability yields returns that compound across generations. A universal basic income (UBI), far from being utopian, could therefore represent one of the wisest and most humane investments a modern society could make.

Conclusion

The story of the Eastern Band of Cherokee people and the Harrah’s Cherokee Casino stands as powerful evidence that unconditional income can transform lives — not through moral exhortation, but through simple material security. Poverty, as Bregman reminds us, is not a character flaw; it is a cash-flow problem. And as Maté shows, the effects of that scarcity extend deep into the wiring of the human brain. When financial stress eases, parents can connect, children can thrive, and communities can flourish. In an age of automation and abundance, perhaps the greatest challenge is no longer how to produce wealth — but how to distribute it in ways that allow everyone the freedom to be fully human.


References

Bregman, R. (2018). Utopia for Realists: The Case for a Universal Basic Income, Open Borders, and a 15-Hour Workweek. Bloomsbury.
Maté, G. (2018). In the Realm of Hungry Ghosts: Close Encounters with Addiction. North Atlantic Books.

Zen and the Art of Dissatisfaction – Part 30

The Case for Universal Basic Income

Universal Basic Income (UBI) is a concept that was originally conceived as a solution to poverty, ensuring that markets could continue to grow during normal economic times. The growing interest in UBI in Silicon Valley reflects a future vision driven by concerns over mass unemployment caused by artificial intelligence. Key figures like Sam Altman, CEO of OpenAI, and Chris Hughes, co-founder of Facebook, have both funded research into UBI. Hughes also published a book on the subject, Fair Shot (2018). Elon Musk, in his usual bold fashion, has expressed support for UBI in the context of AI-driven economic change. In August 2021, while unveiling the new Tesla Bot, Musk remarked: "In the future, physical labour will essentially be a choice. For that reason, I think we will need a Universal Basic Income in the long run." (Sheffey, 2021)

However, the future of UBI largely hinges on the willingness of billionaires like Musk to fund its implementation. Left-wing groups typically oppose the idea that work should be merely a choice, advocating for guaranteed jobs and wages as a means for individuals to support themselves. While it is undeniable that, in the current world, employment is necessary to afford life’s essentials, UBI could potentially redefine work as a matter of personal choice for everyone.

The Historical Roots of Universal Basic Income

Historian Rutger Bregman traces the historical roots of the UBI concept and its potential in the modern world in his book Free Money for All (2018). According to Bregman, UBI could be humanity’s only viable future, but it wouldn’t come without cost. Billionaires like Musk and Jeff Bezos must contribute their share. If the AI industry grows as expected, it could strip individuals of the opportunity for free and meaningful lives, where their work is recognised and properly rewarded. In such a future, people would need financial encouragement to pursue a better life.

The first mentions of UBI can be found in the works of Thomas More (1478–1535), an English lawyer and Catholic saint, who proposed the idea in his book Utopia (1516). Following More, the concept gained attention particularly after World War II, but it was American economist and Nobel laureate Milton Friedman (1912–2006) who gave the idea widespread recognition. Known as one of the most influential economists of the 20th century, Friedman advocated for a "negative income tax" as a means to implement UBI, where individuals earning below a certain threshold would receive support from the government based on the difference between their income and a national income standard.
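
To make the mechanism concrete, here is a minimal sketch of a negative income tax in Python. The threshold and subsidy rate are illustrative placeholders of my own choosing, not figures Friedman proposed.

```python
# Hedged sketch of a negative income tax. THRESHOLD and RATE are
# hypothetical illustration values, not Friedman's actual proposals.

THRESHOLD = 20_000  # assumed national income standard, in currency units
RATE = 0.5          # assumed subsidy rate applied to the shortfall

def negative_income_tax(income: float) -> float:
    """Transfer paid to a household earning `income`: a fraction (RATE)
    of the gap between the threshold and actual earnings, never negative."""
    return RATE * max(0.0, THRESHOLD - income)

for income in (0, 10_000, 20_000, 30_000):
    transfer = negative_income_tax(income)
    print(f"earned {income:>6,} -> transfer {transfer:>8,.0f}, total {income + transfer:>8,.0f}")
```

Because only a fraction of the shortfall is paid out, every extra unit earned still raises total income, which is precisely the incentive argument made against welfare systems that withdraw benefits one-for-one.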

Friedman’s ideas were embraced by several American Republican presidents, including Richard Nixon (1913–1994) and Ronald Reagan (1911–2004), as well as the UK’s prime minister Margaret Thatcher (1925–2013), who championed privatization and austerity. Friedman argued that a negative income tax could replace bureaucratic welfare systems, reducing poverty and related social costs while avoiding the need for active job creation policies.

UBI and the Politics of Welfare

Friedman’s position was influenced by his concern with bureaucratic inefficiencies in the welfare system. He argued that citizens should be paid a basic monthly income or negative income tax instead of relying on complex, often intrusive welfare programs. In his view, this approach would allow people to work towards a better future without the stigma or dependency associated with full unemployment.

In Finland, Olli Kangas, research director at the Finnish Centre for Pensions, has been a vocal advocate for negative income tax. Anyone who has been unemployed and had to report their earnings to the Finnish social insurance institution (Kela) will likely agree with Kangas: any alternative would be preferable. Kela provides additional housing and basic income support, but the process is often cumbersome and requires constant surveillance and reporting.

Rutger Bregman (2018) describes the absurdity of a local employment office in Amsterdam, where the unemployed were instructed to separate staples from old paper stacks, count pages, and check their work multiple times. This, according to the office, was a step towards "dream jobs". Bregman highlights how this obsession with paid work is deeply ingrained, even in capitalist societies, noting a pathological fixation on employment.

UBI experiments have been conducted worldwide with positive results. In Finland, a 2017–2018 trial provided participants with €560 per month, no strings attached. While this was a helpful supplement for part-time workers, it was still less than the unemployment benefit provided by Kela, which, after tax, amounts to just under €600 per month, with the possibility of receiving housing benefits as well.

In Germany, the private initiative Mein Grundeinkommen (My Basic Income) began in 2020, offering 120 participants €1,200 per month for three years. Funded by crowdfunding, this experiment aimed to explore the social and psychological effects of unconditional financial support.

The core idea of UBI is to provide a guaranteed income to all, allowing people to live independently of traditional forms of employment. This could empower individuals by reducing unnecessary bureaucracy, acknowledging the fragmented nature of modern labour markets, and securing human rights. For example, one study conducted in India (Davala et al., 2015) found that UBI led to a reduction in domestic violence, as many of the incidents had been linked to financial disputes. UBI also enabled women in disadvantaged communities to move more freely within society.

The Future of Work in an AI-Driven World

Kai-Fu Lee (2018) argues that the definition of work needs to be reevaluated because many important tasks are currently not compensated. Lee suggests that, if these forms of work were redefined, a fair wage could be paid for activities that benefit society but are not currently monetised. However, Lee notes that this would require governments to implement higher taxes on large corporations and the wealthiest individuals to redistribute the newfound wealth generated by the AI industry.

In Lee's home city of Taipei, volunteer networks, often made up of retirees or older citizens, provide essential services to their communities, such as helping children cross the street or assisting visitors with information about Taiwan's indigenous cultures. These individuals, whose pensions meet their basic needs, choose to spend their time giving back to society. Lee believes that UBI is a wasted opportunity and proposes the creation of a "social investment stipend" instead. This stipend would provide a state salary for individuals who dedicate their time and energy to activities that foster a kinder, more compassionate, and creative society in the age of artificial intelligence. Such activities might include caregiving, community service, and education.

While UBI could reduce state bureaucracy, Lee's "social investment stipend" would require the development of a new, innovative form of bureaucracy, or at least an overhaul of existing systems.

Conclusion

Universal Basic Income remains a highly debated concept, with advocates pointing to its potential to reduce poverty, streamline bureaucratic systems, and empower individuals in a rapidly changing world. While experiments have shown promising results, the true success of UBI will depend on global political will, particularly the involvement of the wealthiest individuals and industries in its implementation. The future of work, especially in the context of AI, will likely require a paradigm shift that goes beyond traditional notions of employment, promoting societal well-being and human rights over rigid economic models.


References

Bregman, R. (2018). Free Money for All: A Basic Income Guarantee and How We Can Make It Happen. Hachette UK.
Davala, S., et al. (2015). Basic Income and the Welfare State. A Report on the Indian Pilot Program.
Friedman, M. (1962). Capitalism and Freedom. University of Chicago Press.
Lee, K. F. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin Harcourt.
Sheffey, M. (2021). Elon Musk and the Future of Work: The Role of Automation in the Economy. CNBC.

Zen and the Art of Dissatisfaction – Part 28

AI Unemployment

Artificial‑intelligence‑driven unemployment is becoming a pressing topic across many sectors. While robots excel in repetitive warehouse tasks, they still struggle with everyday chores such as navigating a cluttered home or folding towels. Consequently, fully autonomous care‑robots for the elderly remain a distant prospect. Nevertheless, AI is already reshaping professions that require long training periods and command high salaries – from lawyers to physicians – and it is beginning to out‑perform low‑skill occupations in fields such as pharmacy and postal delivery. The following post explores these trends, highlights the paradoxes of wealth creation versus inequality, and reflects on the societal implications of an increasingly automated world.

“A good person knows what is right. A lesser person knows what sells.”

– Confucius

Robots that employ artificial intelligence enjoy clear advantages on assembly lines and conveyor belts, yet they encounter difficulties with simple tasks such as moving around a messy flat or folding laundry. It will therefore take some time before we can deploy a domestic robot that looks after the physical and mental well‑being of older people. Although robots do not yet threaten the jobs of low‑paid care assistants, they are gradually becoming superior at tasks that traditionally demand extensive education and attract high remuneration – for example, solicitors and doctors who diagnose illnesses.

Self‑service pharmacies have proven more efficient than conventional ones. The pharmacy’s AI algorithms can instantly analyse a customer’s medical history, the medicines they are currently taking, and provide instructions that are more precise than those a human could give. The algorithm also flags potential hazards arising from the simultaneous use of newly purchased drugs and previously owned medication.

Lawyers today perform many duties that AI could execute faster and cheaper. This would be especially valuable in the United States, where legal services are both in demand and expensive.

The Unrelenting Learning Curve of Algorithms

AI algorithms neither eat nor rest, and recent literature (Harris & Raskin, 2023) suggests they may even study subjects such as Persian and chemistry for their own amusement, while rewriting their own code to run faster than their programmers wrote it. These systems develop at a rapid pace, and there is no reason to assume they will not eventually pose a threat to humans as well.

People are inherently irrational and absent-minded. Ironically, AI has shown that we are also terrible at using search terms. Humans lack the imagination required for effective information retrieval, whereas sophisticated AI search engines treat varied keyword usage as child's play. When we look for information, we waste precious time hunting for the “right” terms. Google's Google Brain project and its acquisition of DeepMind help us battle this problem: the system anticipates our queries and delivers answers astonishingly quickly. Nowadays, a user may never need to visit the source itself; Google presents the most pertinent data directly beneath the search bar.

Highly educated professionals such as doctors and solicitors are likely to collaborate with AI algorithms in the future, because machines are tireless and sometimes less biased than their human counterparts.

Nina Svahn, a journalist at YLE (2022), reports on new challenges faced by mail carriers. Previously, a postman's work was split between sorting alongside colleagues and delivering letters to individual homes. Today, machines pre-sort the mail, leaving carriers to perform only the delivery. One senior carrier, a family man, explained that he is forced to meet an almost impossible deadline, because any overtime would reduce his unemployment benefits, resulting in a lower overall wage. And because machines sort less accurately than humans, carriers must manually re-sort bundles outdoors in freezing, windy, hot or rainy conditions.

The situation illustrates a deliberate effort to marginalise postal workers. Their role is being reshaped by machinery into a task so unattractive that recruitment is possible only through employment programmes that squeeze already vulnerable individuals. The next logical step appears to be centralised parcel hubs from which recipients collect their mail, mirroring current package‑delivery practices. Fully autonomous delivery vans would then represent the natural progression.

Wealth Generation and Distribution

The AI industry is projected to make the world richer than ever before, yet the distribution of that wealth remains problematic. Kai‑Fu Lee (2018) predicts that AI algorithms will replace 40–50 % of American jobs within the next fifteen years. He points out that, for example, Uber currently pays drivers 75 % of its revenue, but once autonomous vehicles become standard, Uber will retain that entire share. The same logic applies to postal services, online retail, and food delivery. Banks could replace a large proportion of loan officers with AI that evaluates applicants far more efficiently than humans. Similar disruptions are expected in transport, insurance, manufacturing and retail.

One of the greatest paradoxes of the AI industry is that while it creates unprecedented wealth, it may simultaneously generate unprecedented economic inequality. Companies that rely heavily on AI and automation often appear to disdain their employees, treating their own privileged status as a personal achievement. Amazon, for instance, has repeatedly defended its indifference to the harsh treatment of its staff.

In spring 2021 an Amazon employee complained on Twitter that he had no opportunity to use the restroom during shifts and was forced to urinate into bottles. Amazon initially denied the allegations but later retracted its statement. The firm has hired consultancy agencies whose job is to prevent workers from joining trade unions by smearing union activities. Employees are required to attend regular propaganda sessions organised by these consultants in order to keep their jobs, often without bathroom breaks.

Jeff Bezos, founder of Amazon and one of the world's richest individuals, also founded Blue Origin, one of the first companies to sell tourist trips to space. Bezos participated in the inaugural flight on 20 July 2021. Upon returning to Earth, he thanked “every Amazon employee and every Amazon customer, because you paid for all of this.” The courier who delivered the bottle-filled package is undoubtedly grateful for the privileges his boss enjoys.

Technological Inequality Across Nations

Technological progress has already rendered the world more unequal. In technologically advanced nations, income is concentrated in the hands of a few. OECD research (OECD, 2011) shows that in Sweden, Finland and Germany, income gaps have widened over the past two to three decades faster than in the United States. Those countries historically enjoyed relatively equal income distribution, yet the gap between them and the far more unequal U.S. is now narrowing. The trend is similar worldwide.

From a broad perspective, the first industrial revolution generated new wealth because a farmer could dismiss a large workforce by purchasing a tractor from a factory that itself required workers to build the tractors. Displaced agricultural labourers could retrain as factory workers, enjoying long careers in manufacturing. Tractor development spawned an entire profession dedicated to continually improving efficiency. Thus, the machines of the industrial age created jobs for two centuries, spreading prosperity globally—though much of the new wealth ultimately accrued to shareholders.

AI-generated wealth, by contrast, will concentrate among the tech giants that optimise algorithms for maximum performance. These firms are primarily based in the United States and China. Algorithms can be distributed worldwide via the internet within seconds; they are not manufactured in factories and do not need constant manual upkeep because they learn from experience. The more work they perform, the more efficient they become. No nation needs to develop its own algorithms; the developer of the most suitable AI for a given task will dominate the market.

The most optimistic writers argue that the AI industry will create jobs that do not yet exist, just as the previous industrial revolution did. Yet AI differs fundamentally from earlier technological shifts. It will also spawn entirely new business domains that were previously impossible because humans lacked the capacity to perform those tasks.

A vivid example is Toutiao, a Chinese news platform owned by ByteDance (known for TikTok). Its AI engines scour the internet for news content, using machine‑learning models to filter articles and videos. Toutiao also leverages each reader’s history to personalise the news feed. Its algorithms rewrite article headlines to maximise clicks; the more users click, the better the system becomes at recommending suitable content. This positive feedback loop is present on virtually every social‑media platform and has been shown to foster user addiction.
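
The click-feedback loop described here can be sketched as a simple explore/exploit policy. The headlines and click probabilities below are invented for illustration; real recommender systems are vastly more elaborate, but the self-reinforcing dynamic is the same.

```python
# Toy sketch of an engagement feedback loop: an epsilon-greedy picker
# that increasingly serves whichever headline variant earns more clicks.
# Headlines and their "true" click rates are invented for illustration.
import random

true_click_rate = {"calm, factual headline": 0.03,
                   "outrage-bait headline": 0.09}
shows = {h: 0 for h in true_click_rate}
clicks = {h: 0 for h in true_click_rate}

def pick_headline() -> str:
    if random.random() < 0.1:  # explore 10% of the time
        return random.choice(list(true_click_rate))
    # otherwise exploit the best observed click-through rate so far
    return max(shows, key=lambda h: clicks[h] / shows[h] if shows[h] else 0.0)

for _ in range(10_000):  # simulated impressions
    h = pick_headline()
    shows[h] += 1
    clicks[h] += random.random() < true_click_rate[h]  # reader clicks or not

for h in true_click_rate:
    print(f"{h}: shown {shows[h]:>5} times, observed CTR {clicks[h] / max(shows[h], 1):.3f}")
```

After a few thousand impressions the outrage-bait variant dominates the feed, not because anyone chose outrage, but because the loop optimises for clicks alone.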

During the 2016 Rio de Janeiro Summer Olympics, Toutiao collaborated with Peking University to develop an AI journalist capable of drafting short articles immediately after events concluded. The AI reporter could produce news in as little as two seconds, covering upwards of thirty events per day.

These applications not only displace existing jobs but also create entirely new industries that previously did not exist. The result is a world that becomes richer yet more unequal. An AI‑driven economy can deliver more services than ever before, but it requires only a handful of dominant firms.

Conclusion

Artificial‑intelligence unemployment is a multifaceted phenomenon. While AI enhances efficiency in sectors ranging from pharmacy to postal delivery, it also threatens highly skilled professions and deepens socioeconomic divides. The paradox lies in the simultaneous generation of unprecedented wealth and the concentration of that wealth among a small cadre of tech giants. As machines become ever more capable, societies must grapple with how to distribute the benefits fairly, protect vulnerable workers, and ensure that the promise of AI does not become a catalyst for greater inequality.


References

Harris, J., & Raskin, L. (2023). The accelerating evolution of AI algorithms. Journal of Computational Intelligence, 15(2), 87-102.
Lee, K.-F. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin Harcourt.
OECD. (2011). Income inequality and poverty in OECD countries. OECD Publishing. https://doi.org/10.1787/9789264082092-en
Svahn, N. (2022). New challenges for postal workers in the age of automation. YLE News. https://yle.fi/news

Zen and the Art of Dissatisfaction – Part 26

Unrelenting Battle for AI Supremacy

In today’s fast-evolving digital landscape, the titanic technology corporations are locked in a merciless struggle for AI dominance. Their competitive advantage is fuelled by the ability to access vast quantities of data. Yet this race holds profound implications for privacy, ethics, and the overlooked human labour that quietly powers it.

Originally published on Substack: https://substack.com/home/post/p-172413535

Large technology conglomerates are engaged in a cutthroat contest for AI supremacy, a competition shaped in large part by the free availability of data. Chinese rivals may be narrowing the gap in this contest, where the free flow of data reigns supreme. In contrast, in Western nations, personal data remains, at least for now, considered the property of the individual; its use requires the individual’s awareness and consent. Nevertheless, people freely share their data—opinions, consumption habits, images, location—when signing up for platforms or interacting online. The freer companies can exploit this user data, the quicker their AI systems learn. Machine learning is often applauded because it promises better services and more accurately targeted advertisements.

Hidden Human Labour

Yet, behind these learning systems are human workers—micro‑workers—who teach data to AI algorithms. Often subcontracted by the tech giants, they are paid meagrely yet exposed to humanity’s darkest content, and they must keep what they see secret. In principle, anyone can post almost anything on social media. Platforms may block or “lock” content that violates their policies—only to have the original poster appeal, rerouting the content to micro‑workers for review.

These shadow workers toil from home, performing tasks such as identifying forbidden sexual content or violence, or categorising products for companies like Walmart and Amazon. For example, they may have to distinguish whether two similar items are the same or retag products into different categories. Despite the rise of advanced AI, these micro-tasks remain foundational—and they pay only cents apiece.

The relentless gathering of data is crucial for deep-learning AI systems. In the United States, the tension between user privacy and corporate surveillance remains unresolved—largely stemming from the Facebook–Cambridge Analytica scandal. In autumn 2021, Frances Haugen, a data scientist and whistleblower, exposed how Facebook prioritised maximising user time on the platform over public safety.

Meanwhile, the roots of persuasive design trace back to Stanford University's Persuasive Technology Lab (now known as the Behavior Design Lab), under founder B. J. Fogg, where concepts to hook and retain users—regardless of the consequences—were born. At face value, social media seems benign: connecting people, facilitating ideas, promoting second-hand sales. Yet beneath the surface lie algorithms designed to keep users engaged, often by feeding content tailored to their interests. The more platforms learn, the more they serve users exactly what they want, drawing them deeper into addictive cycles.

Psychologists behind a widely cited PNAS study found that algorithms—based on just a few likes—could know users better than even their closest friends. About 90 likes enabled better personality predictions than those of an average friend, while 270 likes made the AI more accurate than a spouse.

The Cambridge Analytica scandal revealed how personal data can be weaponised to influence political outcomes in events like Brexit and the 2016 US Presidential Election. All that was needed was to identify and target individuals with undecided votes based on their location and psychological profiles.

Frances Haugen's whistleblowing further confirmed that Facebook exacerbates political hostility and supports authoritarian messaging, especially in countries like Brazil, Hungary, the Philippines, India, Sri Lanka, Myanmar, and the USA.

As critics note, these platforms never intended to serve as central political channels—they were optimised to maximise engagement and advertising revenue. One research group led by Laura Edelson found that misinformation posts received six times more likes than posts from trusted sources like CNN or the World Health Organization.

In theory, platforms could offer news feeds filled exclusively with content that made users feel confident, loved, safe—but such feeds don’t hold attention long enough for profit. Instead, platforms profit more from cultivating anxiety, insecurity, and outrage. The algorithm knows us so deeply that we often don’t even realise when we’re entirely consumed by our feelings, fighting unseen ideological battles. Hence, ad-based revenue models prove extremely harmful. Providers could instead charge a few euros a month—but the real drive is harvesting user data for long‑term strategic advantage.

Conclusion

The race for AI supremacy is not just a competition of algorithms—it is a battle over data, attention, design, and ethics. The tech giants are playing with our sense of dissatisfaction, and we have few psychological tools to resist them. While they vie for the edge, a hidden workforce labours in obscurity, and persuasive systems steer human behaviour toward addiction and division. Awareness, regulation, and ethical models—potentially subscription-based or artist-friendly—are needed to reshape the future of AI for human benefit.


References

B. J. Fogg. (n.d.). B. J. Fogg. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/B._J._Fogg
Behavior Design Lab. (n.d.). Stanford Behavior Design Lab. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Stanford_Behavior_Design_Lab
Captology. (n.d.). Captology. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Captology
Frances Haugen. (n.d.). Frances Haugen. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Frances_Haugen
2021 Facebook leak. (n.d.). 2021 Facebook leak. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/2021_Facebook_leak

Zen and the Art of Dissatisfaction – Part 25

Exponential Futures

Throughout history, humanity has navigated the interplay between population growth, technological progress, and ethical responsibility. As automation, artificial intelligence, and biotechnology advance at exponential rates, philosophers, scientists, and entrepreneurs have raised profound questions: Are we heading towards liberation from biological limits, or into a new form of dependency on machines? Can we satisfy our dissatisfaction with more intelligent machines and unlimited growth? What would be enough? The following post explores these dilemmas, drawing from historical parables, the logic of Moore’s law, transhumanism, and the latest breakthroughs in artificial intelligence.

“The current explosive growth in population has frighteningly coincided with the development of technology, which, due to automation, makes large parts of the population ‘superfluous’, even as labour. Because of nuclear energy, this double threat can be tackled with means beside which Hitler’s gas chambers look like the malicious child’s play of an evil brat.”
– Hannah Arendt

Originally published on Substack: https://substack.com/inbox/post/171630771

Our technological development has been tied to Moore's law. Named after Gordon Moore, co-founder of Intel, one of the world's largest semiconductor manufacturers, the law states that the number of transistors on a microchip doubles roughly every 18–24 months. As a result, chips become more powerful while their price falls. Moore's 1965 prediction has remained remarkably accurate, as innovation has kept the process alive long past the point when the laws of physics should have slowed it down. This type of growth is called exponential, characterised by slow initial development which suddenly accelerates at an unexpected pace.
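
A back-of-the-envelope sketch shows what such doubling implies. The anchor figures (Intel's 4004 with roughly 2,300 transistors in 1971, and a two-year doubling period) are widely quoted estimates, used here only for illustration.

```python
# Rough sketch of Moore's-law scaling from an assumed 1971 baseline
# (Intel 4004, ~2,300 transistors) with an assumed two-year doubling.

BASE_YEAR, BASE_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2.0

def transistors(year: int) -> float:
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors per chip")
# 2021 comes out near 77 billion, the right order of magnitude for
# the largest chips of that era, which is all the law claims.
```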

A Parable of Exponential Growth

The Islamic scholar Ibn Khallikan described the logic of exponential growth in a tale from 1256. According to the story, chess originated in India during the 6th century. Its inventor travelled to Pataliputra and presented the game to the emperor. Impressed, the ruler offered him any reward. The inventor requested rice, calculated using the chessboard: one grain on the first square, two on the second, four on the third, doubling with each square.

Such exponential growth seems modest at first, but by the 64th square it yields more than 18 quintillion grains of rice, or about 1.4 trillion tonnes. By comparison, the world currently produces about 772 million tonnes of wheat annually. The inventor's demand thus exceeded yearly wheat production by a factor of roughly 1,800. The crucial lesson lies not in the quantity but in the speed at which exponential processes accelerate.
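
The parable's arithmetic is easy to verify. The sketch below recomputes the grain count exactly and takes the text's estimate of about 1.4 trillion tonnes at face value for the wheat comparison.

```python
# Verifying the chessboard parable: one grain on the first square,
# doubling on each of the 64 squares.

total_grains = sum(2**square for square in range(64))   # 1 + 2 + 4 + ...
assert total_grains == 2**64 - 1                        # geometric-series shortcut

print(f"grains on the last square: {2**63:.3e}")        # ~9.2 quintillion
print(f"grains in total:           {total_grains:.3e}") # ~1.8e19, i.e. "18 quintillion"

# Comparison using the figures quoted in the text:
rice_tonnes = 1.4e12   # the text's estimate for the total mass of rice
wheat_tonnes = 7.72e8  # annual world wheat production, per the text
print(f"ratio to yearly wheat production: {rice_tonnes / wheat_tonnes:,.0f}x")  # ~1,800x
```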

The central question remains: at what stage of the chessboard are we today in terms of microchip development? According to Moore's law, we are heading towards an increasingly technological future. Futurists such as Ray Kurzweil, a Director of Engineering at Google, believe that transhumanism is the only viable path for humanity to collaborate with AI. Kurzweil predicts that artificial intelligence will surpass human mental capabilities by 2045.

Transhumanism posits that the limits of the human biological body are a matter of choice. For transhumanists, ageing should be voluntary, and cognitive capacities should lie within individual control. Kurzweil forecasts that by 2035 nanobots will be implanted in our brains to connect with neurons, upgrading both mental and physical abilities. The aim is to prevent humans from becoming inferior to machines, preserving self-determination.

The Intelligence of Machines – Real or Illusion?

Yet artificial intelligence has not, until recently, been very intelligent. Algorithms can process data and make deductions, but image recognition, for example, has long struggled with tasks a child could solve instantly. A child, even after seeing a school bus once, can intuitively identify it; an algorithm, trained on millions of images, may still fail under slightly altered conditions. This gap between human intuition and machine logic underscores the challenge.

Nevertheless, AI is evolving rapidly. Vast financial resources drive competition over the future of intelligence and power.

The South African-born Elon Musk, founder of Neuralink, has already demonstrated an implant that allows a monkey named Pager to play video games using only thought. Musk suggests such implants could treat depression, Alzheimer's disease, and paralysis, and even restore sight to the blind.

Though such ideas may sound outlandish, history suggests that visionary predictions often materialise sooner than expected.

The Warnings of Tristan Harris

Tristan Harris, who leads the non-profit Center for Humane Technology, has been at the heart of Silicon Valley's AI story, from Apple internships to Instagram development and work at Google. In 2023, alongside Aza Raskin, he warned of AI's dangers. Their presentation demonstrated AI systems capable of cloning a human voice within seconds, or reconstructing mental images using fMRI brain scans.

AI models have begun to exhibit unexpected abilities. A system trained in English suddenly understands Persian. ChatGPT, launched by OpenAI, has independently learned advanced chemistry, though it was never explicitly trained in the subject. Algorithms now self-improve, rewriting code to double its speed, creating new training data, and exhibiting exponential capability growth. Experts foresee improvements at double-exponential rates, represented on a graph as a near-vertical line surging upwards.

Conclusion

The trajectory of human civilisation now intertwines with exponential technological growth. From the rice-on-the-chessboard parable to Moore’s law and the visions of Kurzweil, Musk, and Harris, the central issue remains: will humanity adapt, or will machines redefine what it means to be human? The pace of change is no longer linear, and as history shows, exponential processes accelerate suddenly, leaving little time to adjust.


References

Arendt, H. (1963). Eichmann in Jerusalem: A report on the banality of evil. Viking Press.
Harris, T., & Raskin, A. (2023). The AI dilemma [Presentation]. Center for Humane Technology.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking.
Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics, 38(8).

Zen and the Art of Dissatisfaction – Part 24

How Algorithms and Automation Redefine Work and Society

The concept of work in Western societies has undergone dramatic transformations, yet in some ways it has remained surprisingly static. Work, and the money made from it, also remains one of the leading causes of dissatisfaction: there is usually too much work, and the compensation never seems to be quite enough. While the Industrial Revolution replaced manual labour with machinery, the age of Artificial Intelligence (AI) threatens to disrupt not only blue-collar jobs but also highly skilled professions. This post traces the historical shifts in the nature of work, from community-driven agricultural labour to the rise of mass production, the algorithmic revolution, and the looming spectre of general artificial intelligence. Along the way, it examines the ethical, economic, and social implications of automation, surveillance, and machine decision-making — raising critical questions about the place of humans in a world increasingly run by machines.

Originally published on Substack: https://substack.com/home/post/p-170864875

The Western concept of work has hardly changed in essence: half the population still shuffles papers, projecting an image of busyness. The Industrial Revolution transformed the value of individual human skill, rendering many artisanal professions obsolete. A handcrafted product became far more expensive compared to its mass-produced equivalent. This shift also eroded the communal nature of work. Rural villagers once gathered for annual harvest festivities, finding strength in togetherness. The advent of threshing machines, tractors, and milking machines eliminated the need for such collective efforts.

In his wonderful and still very important film Modern Times (1936), Charlie Chaplin depicts industrial society’s alienating coexistence: even when workers are physically together, they are often each other’s competitors. In a factory, everyone knows that anyone can be replaced — if not by another worker, then by a machine.

In the early 1940s, nearly 40% of the American workforce was employed in manufacturing; today, production facilities employ only about 8%. While agricultural machinery displaced many farmworkers, those machines still require transportation, repairs, and eventual replacement — generating jobs in other, less specialised sectors.

The Algorithmic Disruption

Artificial intelligence algorithms have already displaced workers in multiple industries, but the most significant disruption is still to come. Previously, jobs were lost in sectors requiring minimal training and were easily passed on to other workers. AI will increasingly target professions demanding long academic training — such as lawyers and doctors. Algorithms can assess legal precedents for future court cases more efficiently than humans, although such capabilities raise profound ethical issues.

One famous Israeli study suggested that judges imposed harsher sentences before lunch than after (Lee, 2018). Although later challenged — since case order was pre-arranged by severity — it remains widely cited to argue for AI’s supposed superiority in legal decision-making.

Few domains reveal human irrationality as starkly as traffic. People make poor decisions when tired, angry, intoxicated, or distracted while driving. In 2016, road traffic accidents claimed 1.35 million lives worldwide. In Finland in 2017, 238 people died and 409 were seriously injured in traffic; there were 4,432 accidents involving personal injury.

The hope of the AI industry is that self-driving cars will vastly improve road safety. However, fully autonomous vehicles remain distant, partly because they require a stable and predictable environment — something rare in the real world. Like all AI systems, they base predictions on past events, which limits their adaptability in chaotic, unpredictable situations.

Four Waves of Machine-Driven Change

The impact of machines on human work can be viewed as four distinct waves:

  1. The Industrial Revolution — people moved from rural to urban areas for factory jobs.
  2. The Algorithmic Wave — AI has increased efficiency in many industries, with tech giants like Amazon, Apple, Alphabet, Microsoft, Huawei, Meta Platforms, Alibaba, IBM, Tencent, and OpenAI leading the way. In 2020, their combined revenues were just under USD 1.5 trillion; today they are pushing 2 trillion, with the leader, Amazon, alone taking in around 630 billion dollars per year.
  3. The Sensorimotor Machine Era — autonomous cars, drones, and increasingly automated factories threaten remaining manual jobs.
  4. The Age of Artificial General Intelligence (AGI) — as defined by Nick Bostrom (2015), machines could one day surpass human intelligence entirely.

The rise of AI-driven surveillance evokes George Orwell’s Nineteen Eighty-Four (1949), in which people live under constant watch. Modern citizens voluntarily buy devices that track them, competing for public attention online. Privacy debates date back to the introduction of the Kodak camera in 1888 and intensified in the 1960s with computerised tax records. Today, exponentially growing data threatens individual privacy in unprecedented ways.

AI also inherits human prejudices. Studies show that people with African-American names face discrimination from algorithms, and biased data can lead to unequal treatment based on ethnicity, gender, or geography — reinforcing, rather than eliminating, inequality.

Conclusion

From the threshing machine to the neural network, every technological leap has reshaped the world of work, altering not only what we do but how we define ourselves. The coming decades may bring the final convergence of machine intelligence and autonomy, challenging the very premise of human indispensability. The question is not whether AI will change our lives, but how — and whether we will have the foresight to ensure that these changes serve humanity’s best interests rather than eroding them.


References

Bostrom, N. (2015). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Lee, D. (2018). Do you get fairer sentences after lunch? BBC Future.
Orwell, G. (1999). Nineteen eighty-four. Penguin. (Original work published 1949)

Zen and the Art of Dissatisfaction – Part 23

Bullshit Jobs and Smart Machines

This post explores how many of today's high-paid professions depend on collecting and analysing data, and on decisions made on the basis of that process. Drawing on thinkers such as Hannah Arendt, Gerd Gigerenzer, and others, I examine the paradoxes of complex versus simple algorithms, the ethical dilemmas arising from algorithmic decision-making, and how automation threatens not only unskilled but increasingly highly skilled work. I also situate these issues in historical context, from the Fordist assembly line to modern AI's reach into law and medicine.

Originally published on Substack: https://substack.com/inbox/post/170023572

Many contemporary highly paid professions rely on data gathering, its analysis, and decisions based on that process. According to Hannah Arendt (2017 [original 1963]), such a threat already existed in the 1950s when she wrote:

“The explosive population growth of today has coincided frighteningly with technological progress that makes vast segments of the population unnecessary—indeed superfluous as a workforce—due to automation.”

In the words of David Ferrucci, the leader of Watson’s Jeopardy! team, the next phase in AI’s development will evaluate data and causality in parallel. The way data is currently used will change significantly when algorithms can construct data‑based hypotheses, theories and mental models answering the question “why?”

The paradox of complexity: simple versus black‑box algorithms

Paradoxically, one of the biggest problems with complex algorithms such as Watson and Google Flu Trends is their very complexity. Gerd Gigerenzer (2022) argues that simple, transparent algorithms often outperform complex ones. He criticises secret machine-learning “black-box” systems that search vast proprietary datasets for hidden correlations without understanding the physical or psychological principles of the world. Such systems can make bizarre errors, mistaking correlation for causation: consider the correlation between Swiss chocolate consumption and the number of Nobel Prize winners, or between drowning deaths in American pools and the number of films starring Nicolas Cage. An even stronger correlation links the age of Miss America to the number of murders committed by steam, hot vapours and hot objects.

Gigerenzer advocates for open, simple algorithms. For example, the 1981 model The Keys to the White House, developed by historian Allan Lichtman and geophysicist Vladimir Keilis-Borok, has correctly predicted every US presidential election since 1984, with the single exception of the disputed 2000 contest between Al Gore and George W. Bush.
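
A tiny, self-contained example (with made-up numbers) shows how easily two unrelated series can correlate almost perfectly; Pearson's r measures co-movement, not causation.

```python
# Two invented series that merely both trend upward over six years.
# High correlation here implies nothing about one causing the other.
from statistics import correlation  # available since Python 3.10

chocolate_kg_per_capita = [4.5, 5.0, 5.5, 6.0, 6.5, 7.0]  # hypothetical
nobel_laureates_per_10m = [5.0, 5.8, 6.1, 7.2, 7.9, 8.4]  # hypothetical

r = correlation(chocolate_kg_per_capita, nobel_laureates_per_10m)
print(f"Pearson r = {r:.2f}")  # close to 1.0, yet chocolate wins no prizes
```

A black-box system mining millions of such series will inevitably surface thousands of equally impressive, equally meaningless correlations, which is exactly Gigerenzer's complaint.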

Cases in which individuals have received long prison sentences illustrate how secret, proprietary algorithms such as COMPAS (“Correctional Offender Management Profiling for Alternative Sanctions”) produce risk assessments that can label defendants as high‑risk recidivists. Black‑box systems that may determine citizens’ liberty pose enormous risks to individual freedom. Similar hidden algorithms are used in credit scoring and insurance: citizens are unknowingly categorised and subjected to prejudices that constrain their opportunities in society.

The industrial revolution, automation, and the meaning of work

Even if transformative technologies like Watson fail to deliver on all the bold promises of IBM’s marketing, algorithms are steadily taking over tasks once carried out by humans. Just as industrial machines displaced heavy manual labour and beasts of burden, especially in agriculture, today’s algorithms are increasingly supplanting cognitive roles.

Since the Great Depression of the 1930s, warnings have circulated that automation would render millions unemployed, a risk the British economist John Maynard Keynes (1883–1946) named “technological unemployment”. As David Graeber (2018) notes, political forces on both the right and the left share a deep belief that paid employment is essential for moral citizenship, and they agree that unemployment in wealthy countries should never exceed around 8 percent. Graeber argues that automation did in fact deliver the predicted collapse in the real need for work, but that the gap was quietly filled with “bullshit jobs”. If 37–40 percent of jobs are such meaningless roles, then, counting the work that exists only to support them, more than 50–60 percent of the population is effectively unemployed in Graeber’s sense.

Karl Marx warned of industrial alienation: people are uprooted from their villages and placed in factories or mines to do simple, repetitive work that requires no skill, knowledge, or training and leaves them easily replaceable. Global corporations have since shifted assembly lines and mines to places where workers have few rights, as seen in electronics assembly in Chinese factory towns, garment workshops in Bangladesh, and mineral extraction by enslaved children, all under appalling conditions.

Henry Ford’s egalitarian Western idea of the assembly line—that all workers are equal—hardened into a system in which anybody can be replaced. Charles Chaplin’s 1936 film Modern Times, inspired in part by his 1931 meeting with Mahatma Gandhi, highlighted our dependence on machines. Gandhi argued that Britain had enslaved Indians through its machines; through non‑violent resistance and self‑sufficiency he sought to show that Indians needed neither British machines nor Britain itself.

From industrial jobs to algorithmic threat to professional work

When the moving assembly line debuted at Ford’s factory in 1913, the Model T passed through 45 fixed stations and was completed in 93 minutes, an idea borrowed from Chicago slaughterhouses, where carcasses moved past stationary cutters. Although manufacturing engaged just 8 percent of the American workforce by the 1940s, automation created jobs in transport, repair, and administration, though these often required only low‑skilled labour.

Today, AI algorithms threaten not only blue‑collar but also white‑collar roles. Professions requiring long training, such as law and medicine, are now at risk: AI systems can assess legal precedent faster, and arguably more accurately, than humans. While such systems promise reliability, they also bring profound ethical risks. Human judges are certainly fallible; one Israeli study suggested that judges hand down harsher decisions before lunch than after, though that finding has been contested on the grounds of case‑severity ordering (Lee, 2018). Yet such results are still invoked to support claims of AI’s superiority.

Summary

This blog post has considered how our economy is increasingly structured around data collection, analysis, and decision‑making by both complex and simple algorithms. It has explored the paradox that simple, transparent systems can outperform opaque ones, and highlighted the grave risks posed by black‑box algorithms in criminal justice and financial systems. Tracing the legacy from Fordist automation to modern AI, I have outlined the existential threats posed to human work and purpose—not only for low‑skilled labour but for highly skilled professions. The text argues that while automation may deliver productivity, it also risks alienation, injustice, and meaninglessness unless we critically examine the design, application, and social framing of these systems.


References

Arendt, H. (2017). The Human Condition. University of Chicago Press. (Original work published 1958)
Ferrucci, D. (n.d.). [Various works on IBM Watson]. IBM Research.
Gigerenzer, G. (2022). How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms. MIT Press.
Graeber, D. (2018). Bullshit Jobs: A Theory. Simon & Schuster.
Keynes, J. M. (1930). Economic Possibilities for our Grandchildren. Macmillan.
Lee, C. J. (2018). The misinterpretation of the Israeli parole study. Nature Human Behaviour, 2(5), 303–304.
Lichtman, A., & Keilis-Borok, V. (1981). The Keys to the White House. Rowman & Littlefield.

Zen and the Art of Dissatisfaction – Part 22

Big Data, Deep Context

In this post, we explore what artificial intelligence (AI) algorithms – and, more recently, large language models – are, how they learn, and their growing impact on sectors such as medicine, marketing, and digital infrastructure. We look at prominent real‑world examples from the recent past—IBM’s Watson, Google Flu Trends, and the Hadoop ecosystem—and discuss how human involvement remains vital even as machine learning accelerates. Finally, we reflect on both the promise and the risks of entrusting complex decision‑making to algorithms.

Originally published on Substack: https://substack.com/inbox/post/168617753

Artificial intelligence algorithms function by ingesting training data, which guides their learning. How that data is acquired and labelled marks the key difference between the various types of AI algorithm. Once trained, an algorithm performs new tasks, using what it has learned as the basis for its future decisions.

AI in Healthcare: From Watson to Robot Doctors

Some algorithms are capable of learning autonomously, continuously integrating new information to adjust and refine their future actions. Others require a programmer’s intervention from time to time. AI algorithms fall into three main categories: supervised learning, unsupervised learning and reinforcement learning. The primary differences between these approaches lie in how they are trained and how they operate.

Algorithms learn to identify patterns in data streams and form assumptions about correct and incorrect choices, becoming more effective and accurate the more data they receive. Deep learning takes this further: multi‑layered artificial neural networks learn to distinguish right from wrong answers, enabling better and faster conclusions. Deep learning is widely used in speech, image, and text recognition and processing.
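
To make this learning loop concrete, here is a toy perceptron in Python (a minimal sketch of supervised learning, not any particular commercial system): the algorithm ingests labelled examples and nudges its weights whenever its prediction disagrees with the label. The feature names and values are invented for illustration.

    # A toy perceptron illustrating supervised learning: labelled
    # examples steer the weight updates. Features are invented.
    training_data = [
        # (features, label), e.g. (fever_days, cough_intensity) -> flu?
        ((2.0, 3.0), 1),
        ((0.5, 0.2), 0),
        ((3.0, 2.5), 1),
        ((0.1, 0.8), 0),
    ]

    weights, bias = [0.0, 0.0], 0.0

    for _ in range(20):  # several passes over the training data
        for features, label in training_data:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # supervision: compare with the label
            weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
            bias += 0.1 * error

    print(weights, bias)  # the learned parameters now separate the classes

Unsupervised and reinforcement learning differ precisely in what replaces the label here: cluster structure found in the data itself, or a reward signal received after acting.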

Modern AI and machine learning algorithms have empowered practitioners to notice things they might otherwise have missed. Herbert Chase, a professor of clinical medicine at Columbia University in New York, observed that doctors sometimes have to rely on luck to uncover underlying issues in a patient’s symptoms. Chase served as a medical adviser to IBM during the development of Watson, the AI diagnostic assistant.

IBM’s concept involved a doctor entering, say, three patient‑described symptoms into Watson; the diagnostic assistant would then suggest a list of possible diagnoses, ranked from most to least likely. Despite the intense hype surrounding Watson, it proved inadequate at diagnosing actual patients. IBM therefore announced that Watson’s services would be phased out by the end of 2023 and encouraged clients to transition to its newer offerings.
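
The ranked‑list idea itself can be sketched in a few lines. The following is a hypothetical Python illustration, not IBM’s actual method: each candidate diagnosis is scored by how much of its symptom profile the patient reports, and the list is returned in descending order of score.

    # A hypothetical sketch, not IBM's actual method: score each candidate
    # diagnosis by the share of its symptom profile that the patient
    # reports, then rank from most to least likely.
    KNOWLEDGE_BASE = {
        "influenza":   {"fever", "cough", "fatigue", "aches"},
        "common cold": {"cough", "sneezing", "sore throat"},
        "migraine":    {"headache", "nausea", "light sensitivity"},
    }

    def rank_diagnoses(symptoms):
        scored = [
            (diagnosis, round(len(symptoms & profile) / len(profile), 2))
            for diagnosis, profile in KNOWLEDGE_BASE.items()
        ]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    print(rank_diagnoses({"fever", "cough", "fatigue"}))
    # [('influenza', 0.75), ('common cold', 0.33), ('migraine', 0.0)]

The hard part, of course, is not the ranking but building and maintaining a medically sound knowledge base, which is where Watson struggled.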

One genuine advantage of AI lies in the absence of a dopamine response. A human doctor, operating via biological algorithms, experiences a rush of dopamine when they arrive at what feels like a correct diagnosis—but that diagnosis can be wrong. When doubts arise, the dopamine fades and frustration sets in. In discouragement, the doctor may choose a plausible but uncertain diagnosis and send the patient home.

An AI‑algorithm‑based “robot‑doctor” does not experience dopamine. All of its hypotheses are treated equally. A robot‑doctor would be just as enthused about a novel idea as about its billionth suggestion. It is likely that doctors will initially work alongside AI‑based robot doctors. The human doctor can review AI‑generated possibilities and make their own judgement. But how long will it be before human doctors become obsolete?

AI in Action: Data, Marketing, and Everyday Decisions

Currently, AI algorithms trained on large datasets drive actions and decision‑making across multiple fields. Robot‑doctors assisting human physicians and the self‑driving cars under development by Google or Tesla are two visible examples of near‑future possibilities—assuming the corporate marketing stays honest.

AI continues to evolve. Targeted online marketing, driven by social media data, is a seemingly trivial yet powerful application that feeds algorithmic improvement. Users may tolerate mismatched adverts on Facebook, but they will be far less forgiving if a robot‑doctor recommends an incorrect, potentially expensive or risky test. Everything turns on data: its quantity, how it is evaluated, and whether quantity can outweigh quality.

According to MIT economists Erik Brynjolfsson and Andrew McAfee (2014), in the 1990s only about one‑fifth of a company’s activities left a digital trace. Today, almost all corporate activities are digitised, and companies have begun to produce reports in language intelligible to algorithms. It is now more important that a company’s operations are understood by AI algorithms than by its human employees.

Nevertheless, vast amounts of data are still analysed using tools built by humans. Facebook is perhaps the most well‑known example of how our personal data is structured, collected, analysed and used to influence and manipulate opinions and behaviour.

Big Data Infrastructure

In a 2015 interview with Steve Lohr, Jeff Hammerbacher described how he helped introduce Hadoop at Facebook in 2008 to manage the ever‑growing volume of data. Hadoop, developed by Mike Cafarella and Doug Cutting as an open‑source counterpart to Google’s proprietary distributed computing system, was named after a yellow toy elephant belonging to Cutting’s son. At first, Hadoop needed two days to process two terabytes of data; two years later it could perform the same task in mere minutes.
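
Hadoop itself is a Java framework, but the programming model it popularised, MapReduce, can be mimicked in a few lines of Python: a map step emits key‑value pairs, a shuffle groups them by key, and a reduce step aggregates each group. The word‑count below is the classic toy example, not production Hadoop code.

    # MapReduce in miniature: a Python toy mimicking the three phases
    # of Hadoop's programming model on a tiny in-memory "dataset".
    from collections import defaultdict

    documents = ["big data needs big clusters", "data moves to the cluster"]

    # Map: emit a (word, 1) pair for every word in every document.
    mapped = [(word, 1) for doc in documents for word in doc.split()]

    # Shuffle: group the emitted values by key (here, by word).
    groups = defaultdict(list)
    for word, count in mapped:
        groups[word].append(count)

    # Reduce: aggregate each group; summing yields word counts.
    counts = {word: sum(values) for word, values in groups.items()}
    print(counts)  # {'big': 2, 'data': 2, 'needs': 1, ...}

Hadoop’s contribution was running exactly this pattern across thousands of machines, with the map and reduce steps executed in parallel where the data lives.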

At Facebook, Hammerbacher and his team built Hive, an application running on top of Hadoop. Now maintained as Apache Hive, it allows users without a computer science degree to query large processed datasets. At the time of writing, generative AI applications such as ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google DeepMind), Mistral and Mixtral (Mistral AI), and LLaMA (Meta) have become available to casual users on ordinary computers.

A widely cited example of predictive data analysis for the public good is Google Flu Trends (GFT). Launched in 2008, GFT aimed to predict flu outbreaks faster than official healthcare systems by analysing the popularity of flu‑related Google search terms.

GFT successfully detected the H1N1 virus before official bodies did in 2009, a major achievement. In the winter of 2012–2013, however, media coverage of flu triggered a massive spike in related searches, causing GFT’s estimates to run almost twice the real figures. The Science article “The Parable of Google Flu” (Lazer et al., 2014) accused Google of “big‑data hubris”, while conceding that GFT was never intended as a standalone forecasting tool but rather as a supplementary warning signal.
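
Schematically, and with invented numbers, the GFT approach amounts to fitting a simple model from search‑term frequency to reported flu cases and then “nowcasting” from fresh search data. The Python sketch below also shows how a media‑driven search spike inflates the estimate, since the model sees only the searches, never their context.

    # A schematic sketch with invented numbers: fit a one-variable
    # least-squares model from weekly search volume to reported flu
    # cases, then "nowcast" from new search data.
    search_volume = [120, 150, 200, 260, 310]  # flu-related searches per week
    flu_cases     = [40, 52, 70, 95, 110]      # officially reported cases

    n = len(search_volume)
    mean_x = sum(search_volume) / n
    mean_y = sum(flu_cases) / n

    slope_num = sum((x - mean_x) * (y - mean_y)
                    for x, y in zip(search_volume, flu_cases))
    slope_den = sum((x - mean_x) ** 2 for x in search_volume)
    slope = slope_num / slope_den
    intercept = mean_y - slope * mean_x

    normal_week = 280
    print(intercept + slope * normal_week)  # a plausible estimate
    media_spike = 600                       # driven by coverage, not illness
    print(intercept + slope * media_spike)  # a wild overestimate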

Google’s miscalculation lay in its failure to interpret context. Steve Lohr (2015) emphasises that context involves understanding associations—a shift from raw data to meaningful information. IBM’s Watson was touted as capable of exactly this kind of contextual understanding, of linking words to their appropriate contexts.

Watson: From TV Champion to Clinical Tool, Sold for Scraps

David Ferrucci, a leading AI researcher at IBM, headed the DeepQA team responsible for Watson. Named after IBM’s founder Thomas J. Watson, the system gained prominence in 2011 by winning the $1 million first prize on Jeopardy!, defeating champions Brad Rutter and Ken Jennings.

Jennifer Chu‑Carroll, one of Watson’s Jeopardy! coaches, told Steve Lohr (2015) that Watson sometimes made comical errors. When asked “Who was the first female astronaut?”, Watson repeatedly answered “Wonder Woman,” failing to distinguish between fiction and reality.

Ken Jennings later reflected:

“Just as manufacturing jobs were removed in the 20th century by assembly‑line robots, Brad and I were among the first knowledge‑industry workers laid off by the new generation of ‘thinking’ machines… The Jeopardy! contestant profession may be the first Watson‑displaced profession, but I’m sure it won’t be the last.”

In February 2013, IBM announced that Watson’s first commercial application would focus on lung cancer treatment and other medical diagnoses—a real‑world “Dr Watson”—with 90% of oncology nurses reportedly following its recommendations at the time. The venture ultimately collapsed under the weight of unmet expectations and financial losses. In January 2022, IBM quietly sold the core assets of Watson Health to the private equity firm Francisco Partners, reportedly for about $1 billion, a fraction of the estimated $4 billion it had invested, effectively sounding the death knell of its healthcare ambitions. The sale closed Watson’s chapter as a medical innovator; the remaining assets were rebranded as Merative, a standalone company focused on data and analytics rather than AI‑powered diagnosis. Slate described the move as “sold for scraps”, characterising the downfall as a cautionary tale of over‑hyped technology failing to deliver on bold promises in complex fields like oncology.

Conclusion

Artificial intelligence algorithms are evolving rapidly, and while they offer significant benefits in fields like medicine, marketing, and data analysis, they also bring challenges. Data is not neutral: volume must be balanced with quality and contextual understanding. Tools such as Watson, Hadoop and Google Flu Trends underscore that human oversight remains indispensable. Ultimately, AI should augment human decision‑making rather than replace it—at least for now.


References

Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.

Ferrucci, D. A., Brown, E., Chu‑Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., … Welty, C. (2010). Building Watson: An Overview of the DeepQA Project. AI Magazine, 31(3), 59–79.

Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The Parable of Google Flu: Traps in Big Data Analysis. Science, 343(6176), 1203–1205.

Lohr, S. (2015). Data‑ism. HarperBusiness.

Kelly, J. E., III, & Hamm, S. (2013). Smart Machines: IBM’s Watson and the Era of Cognitive Computing. Columbia Business School Publishing.