Zen and the Art of Dissatisfaction – Part 26

Unrelenting Battle for AI Supremacy

In today’s fast-evolving digital landscape, the titanic technology corporations are locked in a merciless struggle for AI dominance. Their competitive advantage is fuelled by the ability to access vast quantities of data. Yet this race holds profound implications for privacy, ethics, and the overlooked human labour that quietly powers it.

Originally published on Substack: https://substack.com/home/post/p-172413535

Large technology conglomerates are engaged in a cutthroat contest for AI supremacy, a competition shaped in large part by how freely data can be used. Chinese rivals may be narrowing the gap precisely because data flows there with few restrictions. In Western nations, by contrast, personal data is still, at least for now, treated as the property of the individual; its use requires the individual’s awareness and consent. Nevertheless, people freely hand over their data—opinions, consumption habits, images, location—when signing up for platforms or interacting online. The more freely companies can exploit this user data, the quicker their AI systems learn. Machine learning is often applauded because it promises better services and more accurately targeted advertisements.

Hidden Human Labour

Yet behind these learning systems are human workers—micro‑workers—who label and annotate the data that AI algorithms learn from. Often subcontracted by the tech giants, they are paid meagrely yet exposed to humanity’s darkest content, and they must keep what they see secret. In principle, anyone can post almost anything on social media. Platforms may block or “lock” content that violates their policies—only to have the original poster appeal, rerouting the content to micro‑workers for review.

These shadow workers toil from home, performing tasks such as flagging forbidden sexual or violent content, or categorising products for companies like Walmart and Amazon. For example, they may have to judge whether two similar items are the same or retag products into different categories. Despite the rise of advanced AI, these micro‑tasks remain foundational—and are paid by the cent.

The relentless gathering of data is crucial for deep‑learning AI systems. In the United States, the tension between user privacy and corporate surveillance remains unresolved—a tension brought into sharp focus by the Facebook–Cambridge Analytica scandal. In autumn 2021, Frances Haugen, a data scientist and whistleblower, exposed how Facebook prioritised maximising user time on the platform over public safety.

Meanwhile, the roots of persuasive design trace back to Stanford University’s Persuasive Technology Lab (now the Behavior Design Lab), founded by B. J. Fogg, where techniques to hook and retain users—regardless of the consequences—were born. At face value, social media seems benign: connecting people, spreading ideas, enabling second‑hand sales. Yet beneath the surface lie algorithms designed to keep users engaged, often by feeding them content tailored to their interests. The more platforms learn, the more precisely they serve users exactly what they want—drawing them deeper into addictive cycles.
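To make the mechanism concrete, here is a minimal sketch of an engagement-first feed ranker: illustrative only, with invented names and weights, and in no way any platform’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click probability
    predicted_outrage: float  # emotionally charged content scores high
    matches_interests: float  # similarity to the user's past likes

def engagement_score(post: Post) -> float:
    # An engagement-first ranker rewards whatever keeps the user on the
    # platform, outrage included, not what serves the user best.
    # The weights here are purely hypothetical.
    return (0.5 * post.predicted_clicks
            + 0.3 * post.predicted_outrage
            + 0.2 * post.matches_interests)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is just the candidate posts, sorted by expected engagement.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in this loop asks whether the user is better off afterwards; the objective is attention, and whatever maximises it wins.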

In a well-known PNAS study, psychologists found that algorithms—given just a handful of likes—could know users better than their closest friends. Around 70 likes enabled better personality predictions than an average friend could make, while roughly 300 likes made the model more accurate than a spouse.
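The study’s actual pipeline is more elaborate, but the core idea, regularised regression on a binary user-by-like matrix, can be sketched in a few lines. Everything below is synthetic stand-in data, not the study’s dataset.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 1,000 users x 500 pages; 1 = user liked the page.
likes = rng.integers(0, 2, size=(1000, 500))

# A fake "openness" trait driven by a small subset of the pages.
true_weights = rng.normal(0, 1, size=500) * (rng.random(500) < 0.1)
openness = likes @ true_weights + rng.normal(0, 1, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    likes, openness, random_state=0)

# Regularised linear regression: each like nudges the trait estimate.
model = Ridge(alpha=10.0).fit(X_train, y_train)
print(f"Out-of-sample fit (R^2): {model.score(X_test, y_test):.2f}")
```

With enough real likes paired with real questionnaire scores, this humble setup is what out-predicted friends and spouses.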

The Cambridge Analytica scandal revealed how personal data can be weaponised to influence political outcomes such as Brexit and the 2016 US presidential election. All that was needed was to identify undecided voters by location and psychological profile, then target them.

Frances Haugen’s whistleblowing further confirmed that Facebook exacerbates political hostility and amplifies authoritarian messaging, especially in countries such as Brazil, Hungary, the Philippines, India, Sri Lanka, Myanmar, and the USA.

As critics note, these platforms were never intended to serve as central political channels—they were optimised to maximise engagement and advertising revenue. A research group led by Laura Edelson found that misinformation posts received six times more engagement than posts from trusted sources like CNN or the World Health Organization.

In theory, platforms could offer news feeds filled exclusively with content that made users feel confident, loved, safe—but such feeds don’t hold attention long enough for profit. Instead, platforms profit more from cultivating anxiety, insecurity, and outrage. The algorithm knows us so deeply that we often don’t even realise when we’re entirely consumed by our feelings, fighting unseen ideological battles. Hence, ad-based revenue models prove extremely harmful. Providers could instead charge a few euros a month—but the real drive is harvesting user data for long‑term strategic advantage.

Conclusion

The race for AI supremacy is not just a competition of algorithms—it’s a battle over data, attention, design, and ethics. The tech giants are playing with our sense of dissatisfaction, and we lack the psychological tools to resist it. While they vie for the edge, a hidden workforce labours in obscurity, and persuasive systems steer human behaviour toward addiction and division. Awareness, regulation, and ethical models—potentially subscription‑based or artist‑friendly—are needed to reshape the future of AI for human benefit.


References

B. J. Fogg. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/B._J._Fogg
Stanford Behavior Design Lab. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Stanford_Behavior_Design_Lab
Captology. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Captology
Frances Haugen. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Frances_Haugen
2021 Facebook leak. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/2021_Facebook_leak

Zen and the Art of Dissatisfaction – Part 21

Data: The Oil of the Digital Age

Data applications depend fundamentally on data—its extraction, collection, storage, interpretation, and monetisation—which makes data arguably the most significant resource of our contemporary world. Often called “the new oil,” data is, from the perspective of capitalists intent on perpetual growth, a valuable resource capable of sustaining the economy even after conventional natural reserves have been exhausted. This new form of capitalism has been termed surveillance capitalism (Zuboff, 2019).

Originally published on Substack: https://substack.com/@mikkoijas

Data matters more than opinions. For developers of data applications, the key goal is that we browse online, click “like,” follow links, spend time on their platforms, and accept cookies. What we think or do matters less than the digital behavioural surplus: the trace we leave behind, and our consent to tracking. That footprint has become immensely valuable—companies are willing to pay for it, and sometimes to break laws to get it.

Cookies and Consumer Privacy in Europe

European legislation such as the General Data Protection Regulation (GDPR) provides some personal protection, but we still leave traces even when we refuse to share personal data. Websites are legally obliged to request our cookie consent, which at least makes privacy violations more visible. Even so, rejecting cookies and clearing them out later is a time-consuming and frustrating chore.

In stark contrast, China’s data laws are far more permissive, granting companies broader operational freedom. The more data a company gathers, the more finely tuned its predictive algorithms can become. It is much like environmental regulation: European firms are barred from drilling for oil in protected areas, which reduces profit but protects nature, while firms unrestrained by such limits may harm ecosystems while driving profits. In the data realm, restrictive laws narrow the available datasets; because Chinese firms can harvest freely, they may gain a major competitive edge that could help them lead the global AI market.

Data for Good: Jeff Hammerbacher’s Vision

American data scientist Jeff Hammerbacher is one of the field’s most influential figures. As journalist Steve Lohr (2015) reports, Hammerbacher started on Wall Street and later helped build Facebook’s data infrastructure. Today he directs data collection and interpretation toward improving human lives—an ethos he sees as fundamental to the data industry. According to Hammerbacher, we must understand the current data landscape to predict the future. Practically, this means equipping everything we care about with sensors that collect data. His current focus? Transforming medicine by centring it on data. Data science is one of the most promising fields: one where evidence trumps intuition.

Hammerbacher has been particularly interested in mental health and how data can improve psychological wellbeing. His close friend and former classmate, Steven Snyder, tragically died by suicide after struggling with bipolar disorder. This event, combined with Hammerbacher’s own breakdown at age 27—after being diagnosed with bipolar disorder and generalised anxiety disorder—led him to rethink his life. He notes that mental illness is a major cause of workforce dropout and ranks third among causes of early death. Researchers are now collecting neurobiological data from those with mental health conditions. Hammerbacher calls this “one of the most necessary and challenging data problems of our time.”

Pharmaceuticals haven’t solved the issue. Selective serotonin reuptake inhibitors (SSRIs), introduced in the 1980s, have failed to deliver a breakthrough for mood disorders. These remain a leading cause of death: roughly 90% of suicides involve untreated or poorly treated mood disorders, and about 50% of Western populations are affected at some point. The greater challenge lies in defining mental wellness—should people simply adapt to lives that don’t fit them?

“Bullshit Jobs” and Social Systems

Anthropologist David Graeber (2018) reported that 37–40% of Western workers view their jobs as “bullshit”—work they see as socially pointless. The problem, then, isn’t merely psychological; our entire social structure normalises employment that values output over wellbeing.

Data should guide smarter decisions. Yet as our world digitises, data accumulates faster than our ability to interpret it. As Steve Lohr (2015) notes, a 20-bed intensive care unit can generate around 160,000 data points per second—a torrent demanding constant vigilance. Still, this data deluge offers positive outcomes: continuous patient monitoring enables proactive, personalised care.
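A quick back-of-the-envelope calculation puts that figure in perspective; the per-second rate is the only number taken from Lohr, the rest is simple arithmetic:

```python
points_per_second = 160_000  # 20-bed ICU, per Lohr (2015)
beds = 20

per_bed_per_second = points_per_second / beds
per_day = points_per_second * 60 * 60 * 24

print(f"{per_bed_per_second:,.0f} data points per bed per second")
print(f"{per_day:,.0f} data points per day for the unit")
# ~8,000 per bed per second; ~13.8 billion per day for the unit,
# far more than any care team could inspect by hand.
```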

Data-driven forecasting is set to reshape society, concentrating power and wealth. Not long ago, anyone could found a company; now a single corporation with superior data could dominate an entire sector. A case in point is the partnership between McKesson and IBM. In 2009, IBM researcher Kaan Katircioglu went looking for data suitable for predictive modelling and found it at McKesson: clean datasets recording medication inventory, prices, and logistics. IBM used them to build a predictive model that let McKesson optimise its warehouse near Memphis and improve delivery accuracy from 90% to 99%.
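The details of the IBM–McKesson model were never published; the sketch below only illustrates the general family of techniques involved, demand forecasting plus a reorder point, with invented numbers throughout.

```python
import statistics

def reorder_point(daily_demand: list[int],
                  lead_time_days: int,
                  service_factor: float = 1.65) -> float:
    """Classic reorder-point formula: expected demand during the supplier
    lead time, plus safety stock to buffer variability. A service_factor
    of ~1.65 targets roughly 95% availability."""
    mean = statistics.mean(daily_demand)
    stdev = statistics.stdev(daily_demand)
    expected_demand = mean * lead_time_days
    safety_stock = service_factor * stdev * lead_time_days ** 0.5
    return expected_demand + safety_stock

# One invented week of demand for a medication; 3-day supplier lead time.
history = [120, 95, 130, 110, 160, 90, 140]
print(f"Reorder when stock falls below {reorder_point(history, 3):.0f} units")
```

Scaled across thousands of products and fed with clean historical data, even simple models of this kind can move delivery accuracy by several percentage points.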

At present, data-mining algorithms behave as clever tools. An algorithm is simply a set of steps for solving a problem—think of a cooking recipe or a coffee machine’s programming. Even novices can produce impressive results by following a good set of instructions.
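In code, the recipe analogy is literal. A minimal example: the coffee machine from the text, reduced to a fixed sequence of steps that anyone (or any machine) can follow.

```python
def brew_coffee(cups: int) -> list[str]:
    """An algorithm in miniature: fixed steps, same result every time."""
    water_ml = cups * 150   # assumed 150 ml of water per cup
    coffee_g = cups * 8     # assumed 8 g of grounds per cup
    return [
        f"Pour {water_ml} ml of water into the reservoir.",
        f"Add {coffee_g} g of ground coffee to the filter.",
        "Start the machine.",
        "Wait until brewing finishes.",
        f"Serve {cups} cup(s).",
    ]

for step in brew_coffee(2):
    print(step)
```

Follow the steps and you get coffee; give a machine enough well-chosen steps and you get a data-mining algorithm.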

Historian Yuval Noah Harari (2015) provocatively suggests we are ourselves algorithms. Unlike machines, our algorithms run through emotions, perceptions, and thoughts—biological processes shaped by evolution, environment, and culture.

Summary

Personal data is the new source of extraction and exploitation—vital for technological progress yet governed by uneven regulations that determine competitive advantage. Pioneers like Jeff Hammerbacher highlight its potential for social good, especially in mental health, while revealing our complex psychology. We collect data abundantly, yet face the challenge of interpreting it effectively. Predictive systems can drive efficiency, but they can also foster monopolies. Ultimately, whether data serves or subsumes us depends on navigating its ethical, legal, and societal implications.


References

Graeber, D. (2018). Bullshit Jobs: A Theory. New York: Simon & Schuster.
Hammerbacher, J. (n.d.). Interview cited in Lohr (2015).
Harari, Y. N. (2015). Homo Deus: A Brief History of Tomorrow. New York: Harper.
Lohr, S. (2015). Data-ism: The Revolution Transforming Decision Making, Consumer Behavior, and Almost Everything Else. New York: Harper Business.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.