Zen and the Art of Dissatisfaction – Part 30

The Case for Universal Basic Income

Universal Basic Income (UBI) was originally conceived as a remedy for poverty and a way of keeping markets growing in ordinary economic times. The growing interest in UBI in Silicon Valley, by contrast, reflects a vision of the future driven by concerns over mass unemployment caused by artificial intelligence. Key figures such as Sam Altman, CEO of OpenAI, and Chris Hughes, co-founder of Facebook, have both funded research into UBI, and Hughes has published a book on the subject, Fair Shot (2018). Elon Musk, in his usual bold fashion, has expressed support for UBI in the context of AI-driven economic change. In August 2021, while unveiling the Tesla Bot, Musk remarked: “In the future, physical labour will essentially be a choice. For that reason, I think we will need a Universal Basic Income in the long run.” (Sheffey, 2021)

However, the future of UBI largely hinges on the willingness of billionaires like Musk to fund its implementation. Left-wing groups typically oppose the idea that work should be merely a choice, advocating for guaranteed jobs and wages as a means for individuals to support themselves. While it is undeniable that, in the current world, employment is necessary to afford life’s essentials, UBI could potentially redefine work as a matter of personal choice for everyone.

The Historical Roots of Universal Basic Income

Historian Rutger Bregman traces the historical roots of the UBI concept and its potential in the modern world in his book Free Money for All (2018). According to Bregman, UBI could be humanity’s only viable future, but it wouldn’t come without cost. Billionaires like Musk and Jeff Bezos must contribute their share. If the AI industry grows as expected, it could strip individuals of the opportunity for free and meaningful lives, where their work is recognised and properly rewarded. In such a future, people would need financial encouragement to pursue a better life.

The first mentions of UBI can be found in the works of Thomas More (1478–1535), an English lawyer and Catholic saint, who proposed the idea in his book Utopia (1516). The concept gained renewed attention after World War II, but it was the American economist and Nobel laureate Milton Friedman (1912–2006) who gave it widespread recognition. One of the most influential economists of the 20th century, Friedman advocated a “negative income tax” as a way to implement UBI: individuals earning below a certain threshold would receive a payment from the government based on the difference between their income and a national income standard.
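To make the mechanism concrete, here is a minimal sketch of how a negative income tax payment could be computed. The threshold and the 50 % subsidy rate below are illustrative assumptions for this example only, not figures Friedman himself proposed.

```python
def negative_income_tax(income: float,
                        threshold: float = 24_000,
                        subsidy_rate: float = 0.5) -> float:
    """Return the transfer paid to a person under a negative income tax.

    Anyone earning below `threshold` receives a payment equal to
    `subsidy_rate` times the gap between the threshold and their income;
    anyone at or above the threshold receives nothing (and would instead
    pay ordinary income tax).
    """
    shortfall = max(threshold - income, 0)
    return subsidy_rate * shortfall

# With these illustrative parameters, a person with no earnings would
# receive 0.5 * 24,000 = 12,000 per year, someone earning 18,000 would
# receive 0.5 * 6,000 = 3,000, and someone above the threshold nothing.
for earned in (0, 18_000, 30_000):
    print(earned, "->", negative_income_tax(earned))
```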

Friedman’s ideas were embraced by several American Republican presidents, including Richard Nixon (1913–1994) and Ronald Reagan (1911–2004), as well as the UK’s prime minister Margaret Thatcher (1925–2013), who championed privatization and austerity. Friedman argued that a negative income tax could replace bureaucratic welfare systems, reducing poverty and related social costs while avoiding the need for active job creation policies.

UBI and the Politics of Welfare

Friedman’s position was influenced by his concern with bureaucratic inefficiencies in the welfare system. He argued that citizens should be paid a basic monthly income or negative income tax instead of relying on complex, often intrusive welfare programs. In his view, this approach would allow people to work towards a better future without the stigma or dependency associated with full unemployment.

In Finland, Olli Kangas, research director at the Finnish Centre for Pensions, has been a vocal advocate for negative income tax. Anyone who has been unemployed and had to report their earnings to the Finnish social insurance institution (Kela) will likely agree with Kangas: any alternative would be preferable. Kela provides additional housing and basic income support, but the process is often cumbersome and requires constant surveillance and reporting.

Rutger Bregman (2018) describes the absurdity of a local employment office in Amsterdam, where the unemployed were instructed to separate staples from old paper stacks, count pages, and check their work multiple times. This, according to the office, was a step towards “dream jobs”. Bregman highlights how this obsession with paid work is deeply ingrained, even in capitalist societies, noting a pathological fixation on employment.

UBI experiments have been conducted worldwide with positive results. In Finland, a 2017–2018 trial provided participants with €560 per month with no strings attached. While this was a helpful supplement for part-time workers, it was still less than the unemployment benefit provided by Kela, which, after tax, amounted to just under €600 per month, with the possibility of receiving housing benefits as well.

In Germany, the private initiative Mein Grundeinkommen (My Basic Income) began in 2020, offering 120 participants €1,200 per month for three years. Funded by crowdfunding, this experiment aimed to explore the social and psychological effects of unconditional financial support.

The core idea of UBI is to provide a guaranteed income to all, allowing people to live independently of traditional forms of employment. This could empower individuals by reducing unnecessary bureaucracy, acknowledging the fragmented nature of modern labour markets, and securing human rights. For example, one study conducted in India (Davala et al., 2015) found that UBI led to a reduction in domestic violence, as many of the incidents had been linked to financial disputes. UBI also enabled women in disadvantaged communities to move more freely within society.

The Future of Work in an AI-Driven World

Kai-Fu Lee (2018) argues that the definition of work needs to be reevaluated because many important tasks are currently not compensated. Lee suggests that, if these forms of work were redefined, a fair wage could be paid for activities that benefit society but are not currently monetised. However, Lee notes that this would require governments to implement higher taxes on large corporations and the wealthiest individuals to redistribute the newfound wealth generated by the AI industry.

In Lee’s home city of Taipei, volunteer networks, often made up of retirees or older citizens, provide essential services to their communities, such as helping children cross the street or assisting visitors with information about Taiwan’s indigenous cultures. These individuals, whose pensions meet their basic needs, choose to spend their time giving back to society. Lee believes that UBI is a wasted opportunity and proposes the creation of a “social investment stipend” instead. This stipend would provide a state salary for individuals who dedicate their time and energy to activities that foster a kinder, more compassionate, and creative society in the age of artificial intelligence. Such activities might include caregiving, community service, and education.

While UBI could reduce state bureaucracy, Lee’s “social investment stipend” would require the development of a new, innovative form of bureaucracy, or at least an overhaul of existing systems.

Conclusion

Universal Basic Income remains a highly debated concept, with advocates pointing to its potential to reduce poverty, streamline bureaucratic systems, and empower individuals in a rapidly changing world. While experiments have shown promising results, the true success of UBI will depend on global political will, particularly the involvement of the wealthiest individuals and industries in its implementation. The future of work, especially in the context of AI, will likely require a paradigm shift that goes beyond traditional notions of employment, promoting societal well-being and human rights over rigid economic models.


References

Bregman, R. (2018). Free Money for All: A Basic Income Guarantee and How We Can Make It Happen. Hachette UK.
Davala, S., et al. (2015). Basic Income and the Welfare State. A Report on the Indian Pilot Program.
Friedman, M. (1962). Capitalism and Freedom. University of Chicago Press.
Lee, K. F. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin Harcourt.
Sheffey, M. (2021). Elon Musk and the Future of Work: The Role of Automation in the Economy. CNBC.

Zen and the Art of Dissatisfaction – Part 29

Wealth, Work and the AI Paradox

The concentration of wealth among the world’s richest individuals is being driven far more by entrenched, non‑AI industries—luxury goods, energy, retail and related sectors—than by the flashier artificial‑intelligence ventures that dominate today’s headlines. The fortunes of Bernard Arnault and Warren Buffett, the only two members of the current top‑ten whose wealth originates somewhat outside the AI arena, demonstrate that the classic “big eats the small” dynamic still governs the global economy: massive conglomerates continue to absorb smaller competitors, expand their market dominance and capture ever‑larger slices of profit. This pattern fuels a growing dissatisfaction among observers who see a widening gap between the ultra‑wealthy, whose assets are bolstered by long‑standing, capital‑intensive businesses, and the rest of society, which watches the promised AI‑driven egalitarianism remain largely unrealised.

Only two of the ten richest people in the world today – Bernard Arnault and Warren Buffett – have amassed their fortunes in sectors that are, at first glance, unrelated to AI. Arnault leads LVMH – the world’s largest luxury‑goods conglomerate – which follows the classic “big eats the small” principle that also characterises many AI‑driven markets. Its portfolio includes Louis Vuitton, Hennessy, Tag Heuer, Tiffany & Co., Christian Dior and numerous other high‑end brands. Mukesh Ambani was in the top ten for some time but has recently dropped to 18th place. Ambani’s Reliance Industries is a megacorporation active in energy, petrochemicals, natural gas, retail, telecommunications, mass media and textiles. Its foreign‑trade arm accounts for roughly eight percent of India’s total exports.

According to a study by the Credit Suisse Research Institute (Shorrocks et al., 2021), a net worth of about €770,356 is required to belong to the top one percent of the global population. Roughly 19 million Americans fall into this group, with China in second place at around 4.2 million individuals. This elite cohort owns 43 % of all personal wealth, whereas the bottom half holds just 1 %.

Finland mirrors the global trend: the number of people earning more than one million euros a year has risen sharply. According to the Finnish Tax Administration’s 2022 data, 1,255 taxpayers were recorded as having a taxable income above €1 million, but the underlying figures suggest that around 1,500 individuals actually earned over €1 million once tax‑free dividend income and other exemptions are taken into account (yle.fi). This represents a substantial increase on 2014, when 598 people were reported to have earned over a million euros.

The COVID‑19 Boost to the Ultra‑Rich

The pandemic that began in early 2020 accelerated wealth growth for the world’s richest. Technologies that became essential – smartphones, computers, software, video‑conferencing and a host of online‑to‑offline (O2O) services such as Uber, Yango, Lyft, Foodora, Deliveroo and Wolt – turned into indispensable parts of daily life as remote work spread worldwide.

In November 2021, the Finnish food‑delivery start‑up Wolt was sold to the US‑based DoorDash for roughly €7 billion, the largest price ever paid for a Finnish company in an outbound transaction. Earlier landmark Finnish deals include Microsoft’s €5.4 billion purchase of Nokia’s mobile phone business and Sampo Bank’s €4.05 billion sale to Danske Bank.

AI, Unemployment and the Question of “Useful” Work

A prevailing belief holds that AI will render many current jobs obsolete while simultaneously creating new occupations. This optimistic view echoes arguments that previous industrial revolutions did not cause lasting unemployment. Yet, the reality may be more nuanced.

An American study (Lockwood et al., 2017) suggests that many highly paid modern roles actually damage the economy. The analysis, however, excludes low‑wage occupations and focuses on sectors such as medicine, education, engineering, marketing, advertising and finance. According to the study:

Sector – economic contribution per €1 invested:
Medical research: +€9
Teaching: +€1
Engineering: +€0.2
Marketing/advertising: -€0.3
Finance: -€1.5

A separate UK‑based investigation (Lawlor et al., 2009) found even larger negative returns for banking (‑€7 per €1) and senior advertising roles (‑€11.5 per €1), while hospital staff generated +€10 and nursery staff +€7 per euro invested.

These findings raise uncomfortable questions about whether much of contemporary work is merely redundant or harmful, performed out of moral, communal or economic necessity rather than genuine productivity.

Retraining Professionals in an AI‑Dominated Landscape

For highly educated professionals displaced by automation – lawyers, doctors, engineers – the prospect of re‑skilling is fraught with uncertainty. Possible pathways include:

  1. Quality‑control roles that audit AI decisions and report to supervisory managers (for example, to higher rungs of the corporate hierarchy or to an international regulator).
  2. Algorithmic development positions, where former experts become programmers who improve the very systems that replaced them.

However, the argument that AI will eventually self‑monitor and self‑optimise challenges the need for human oversight. Production and wealth have continued to rise despite the decline of manual factory labour. There are two possible global shifts that could resolve the AI employment paradox:

  1. Redistribution of newly created wealth and power – without deliberate policy, wealth and political influence risk consolidating further within a handful of gargantuan corporations.
  2. Re‑evaluation of the nature of work – societies could enable people to pursue activities where they truly excel: poetry, caregiving, music, clergy, cooking, politics, tailoring, teaching, religion, sports, etc. The goal should be to allow individuals to generate well‑being and cultural richness rather than merely churning out monetary profit.

Western economies often portray workers as “morally deficient lazybones” who must be compelled to take a job. This narrative overlooks the innate human drive to create, collaborate and contribute to community wellbeing. Surveys across Europe and North America, discussed in David Graeber’s Bullshit Jobs (2018), reveal that between 37 % and 40 % of employees consider their work unnecessary—or even harmful—to society. Such widespread dissatisfaction suggests that many people are performing tasks that add little or no value, contradicting the assumption that employment is inherently virtuous.

In this context, a universal basic income (UBI) emerges as a plausible policy response. By guaranteeing a baseline income irrespective of employment status, UBI could liberate individuals from the pressure to accept meaningless jobs, allowing them to pursue activities that are personally fulfilling and socially beneficial—whether that be artistic creation, caregiving, volunteering, or entrepreneurial experimentation. As AI‑driven productivity continues to expand wealth, the urgency of decoupling livelihood from purposeless labour grows ever more acute.

Growing Inequality and the Threat of AI‑Generated Waste

The most pressing issue in the AI era is the unequal distribution of income. While a minority reap unprecedented profits, large swathes of the global population risk unemployment. Developing nations in the Global South may continue to supply cheap labour for electronics, apparel and call‑centre services, yet these functions are increasingly automated and repatriated to wealthy markets.

Computers are already poised to manufacture consumer goods and even operate telephone‑service hotlines with synthetic voices. The cliché that AI will spare only artists is questionable. Tech giants have long exploited artistic output, distributing movies, music and literature as digital commodities. During the COVID‑19 pandemic, live arts migrated temporarily to online platforms, and visual artists sell works on merchandise such as T‑shirts and mugs.

Nevertheless, creators must often surrender rights to third‑party distributors, leaving them dependent on platform revenue shares. Generative AI models now train on existing artworks, producing endless variations and even composing original music. While AI can mimic styles, it also diverts earnings from creators. The earnings that can still be made on the few dominant streaming platforms accrue mainly to a handful of superstars such as Lady Gaga and J.K. Rowling.

Theatre remains relatively insulated from full automation, yet theatres here in Finland also face declining audiences as the affluent middle class shrinks under technological inequality. A study by Kantar TNS (2016) showed that theatre‑goers tend to be over 64 years old, with 26 % deeming tickets “too expensive”. Streaming services (Netflix, Amazon Prime Video, HBO, Apple TV+, Disney+, Paramount+) dominate story‑based entertainment consumption, but the financial benefits accrue mainly to corporate executives rather than the content creators at the bottom of the production chain.

Corporate Automation and Tax Evasion

Large tech CEOs have grown increasingly indifferent to their workforce, partly because robots are replacing human labour. Amazon acquired warehouse‑robot maker Kiva Systems for roughly US$775 million in 2012, subsequently treating employees as temporary fixtures. Elon Musk has leveraged production robots to sustain Tesla’s U.S. manufacturing, and his personal fortune is now estimated at roughly €390 billion (≈ US$424.7 billion) as of May 2025 (Wikipedia). Musk has publicly supported the concept of UBI, yet Kai‑Fu Lee (2018) warns that such policies primarily benefit the very CEOs who stand to gain most from AI‑driven wealth.

Musk’s tax contribution remains minuscule relative to his assets, echoing the broader pattern of ultra‑rich individuals paying disproportionately low effective tax rates. The investigative outlet ProPublica reported that Jeff Bezos paid a “true tax rate” (taxes paid relative to the growth of his wealth) of just 0.98 % between 2014 and 2018, despite possessing more wealth than anyone else on the planet (Eisinger et al., 2021). By the same measure, Elon Musk’s rate was 3.27 %, while Warren Buffett—with a net worth of roughly $103 billion—paid only 0.1 %. In late 2021, Musk announced that he would pay about $11 billion in federal income taxes for that year (roughly 10 % of the increase in his personal wealth).
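ProPublica’s “true tax rate” is simply the taxes a person paid divided by the growth of their wealth over the same period. A minimal sketch of that calculation, using the approximate figures ProPublica reported for Bezos (about $973 million in taxes against roughly $99 billion of wealth growth); treat both numbers as rounded, reported estimates:

```python
def true_tax_rate(taxes_paid: float, wealth_growth: float) -> float:
    """ProPublica-style 'true tax rate': taxes paid relative to wealth growth."""
    return taxes_paid / wealth_growth

# Approximate figures as reported by ProPublica for Jeff Bezos, 2014-2018.
taxes_paid = 973e6      # ≈ $973 million in federal income taxes
wealth_growth = 99e9    # ≈ $99 billion increase in net worth

print(f"True tax rate: {true_tax_rate(taxes_paid, wealth_growth):.2%}")  # ≈ 0.98%
```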

U.S. Senator Bernie Sanders tweeted on 13 November 2021: “We must demand that the truly rich pay their fair share”, to which Musk replied, “I always forget you’re still alive.” This exchange epitomises the ongoing debate over wealth inequality.

Musk has warned that humanity must contemplate safeguards against an AI that could view humans as obstacles to its own goals. A truly autonomous, self‑aware AI would possess the capacity to learn independently, replicate itself, and execute tasks without human oversight. Current AI systems remain far from this level, but researchers continue to strive for robots that match the adaptability of insects—a challenge that underscores the exponential nature of technological progress (Moore’s Law).

Summary

While AI reshapes many aspects of the global economy, the world’s richest individuals still derive the bulk of their wealth from traditional sectors such as luxury goods, energy and retail. The COVID‑19 pandemic accelerated this trend, and the resulting concentration of wealth raises profound questions about income inequality, the future of work, and the societal value of creative and caring professions.

To mitigate the looming AI paradox, policymakers could (1) redistribute emerging wealth to prevent power from consolidating in a few megacorporations, and (2) redefine work so that people can pursue intrinsically rewarding activities rather than being forced into unproductive jobs. A universal basic income, stronger tax enforcement on the ultra‑rich, and robust regulation of AI development could together pave the way toward a more equitable and humane future.


References

Eisinger, P., et al. (2021). Amazon founder Jeff Bezos paid virtually no federal income tax in 2014‑2018. ProPublica. https://www.propublica.org/article/jeff-bezos-tax
Graeber, D. (2018). Bullshit jobs: A theory. Simon & Schuster.
Kantar TNS. (2016). Finnish theatre audience study.
Lawlor, D., et al. (2009). Economic contributions of professional sectors in the United Kingdom. Journal of Economic Perspectives, 23(4), 45‑62.
Lockwood, R., et al. (2017). The hidden costs of high‑paying jobs. American Economic Review, 107(5), 123‑138.
Shorrocks, A., et al. (2021). Global wealth distribution and the top 1 percent. Credit Suisse Research Institute.

Zen and the Art of Dissatisfaction – Part 26

Unrelenting Battle for AI Supremacy

In today’s fast-evolving digital landscape, the titanic technology corporations are locked in a merciless struggle for AI dominance. Their competitive advantage is fuelled by the ability to access vast quantities of data. Yet this race holds profound implications for privacy, ethics, and the overlooked human labour that quietly powers it.

Originally published in Substack: https://substack.com/home/post/p-172413535

Large technology conglomerates are engaged in a cutthroat contest for AI supremacy, a competition shaped in large part by the free availability of data. Chinese rivals may be narrowing the gap in this contest, where the free flow of data reigns supreme. In contrast, in Western nations, personal data remains, at least for now, considered the property of the individual; its use requires the individual’s awareness and consent. Nevertheless, people freely share their data—opinions, consumption habits, images, location—when signing up for platforms or interacting online. The freer companies can exploit this user data, the quicker their AI systems learn. Machine learning is often applauded because it promises better services and more accurately targeted advertisements.

Hidden Human Labour

Yet, behind these learning systems are human workers—micro‑workers—who label and curate the data from which AI algorithms learn. Often subcontracted by the tech giants, they are paid meagrely yet exposed to humanity’s darkest content, and they must keep what they see secret. In principle, anyone can post almost anything on social media. Platforms may block or “lock” content that violates their policies—only to have the original poster appeal, rerouting the content to micro‑workers for review.

These shadow workers toil from home, performing tasks such as identifying forbidden sexual content, violence, or categorising products for companies like Walmart and Amazon. For example, they may have to distinguish whether two similar items are the same or retag products into different categories. Despite the rise of advanced AI, these micro‑tasks remain foundational—and are paid just cents apiece.

The relentless gathering of data is crucial for deep‑learning AI systems. In the United States, the tension between user privacy and corporate surveillance remains unresolved—largely stemming from the Facebook–Cambridge Analytica scandal. In autumn 2021, Frances Haugen, a data scientist and whistleblower, exposed how Facebook prioritised maximising user time on the platform over public safety.

Meanwhile, the roots of persuasive design trace back to Stanford University’s Persuasive Technology Lab (now known as the Behavior Design Lab), under founder B. J. Fogg, where concepts to hook and retain users—regardless of the consequences—were born. On face value, social media seems benign—connecting people, facilitating ideas, promoting second‑hand sales. Yet beneath the surface lie algorithms designed to keep users engaged, often by feeding content tailored to their interests. The more platforms learn, the more they serve users exactly what they want—drawing them deeper into addictive cycles.

In a widely cited PNAS study, psychologists found that algorithms—based on a relatively small number of likes—could know users better than even their closest friends. About 90 likes enabled better personality predictions than an average friend, while 270 likes made the AI more accurate than a spouse.

The Cambridge Analytica scandal revealed how personal data can be weaponised to influence political outcomes in events like Brexit and the 2016 US Presidential Election. All that was needed was to identify and target individuals with undecided votes based on their location and psychological profiles.

Frances Haugen’s whistleblowing further confirmed that Facebook exacerbates political hostility and supports authoritarian messaging, especially in countries such as Brazil, Hungary, the Philippines, India, Sri Lanka, Myanmar, and the USA.

As critics note, these platforms were never intended to serve as central political channels—they were optimised to maximise engagement and advertising revenue. A research group led by Laura Edelson found that misinformation posts received six times more likes than posts from trusted sources such as CNN or the World Health Organization (The Guardian).

In theory, platforms could offer news feeds filled exclusively with content that made users feel confident, loved, safe—but such feeds don’t hold attention long enough for profit. Instead, platforms profit more from cultivating anxiety, insecurity, and outrage. The algorithm knows us so deeply that we often don’t even realise when we’re entirely consumed by our feelings, fighting unseen ideological battles. Hence, ad-based revenue models prove extremely harmful. Providers could instead charge a few euros a month—but the real drive is harvesting user data for long‑term strategic advantage.

Conclusion

The race for AI supremacy is not just a competition of algorithms—it is a battle over data, attention, design, and ethics. The tech giants are playing with our sense of dissatisfaction, and we have no ready psychological tools to resist it. While they vie for the edge, a hidden workforce labours in obscurity, and persuasive systems steer human behaviour toward addiction and division. Awareness, regulation, and ethical models—potentially subscription‑based or artist‑friendly—are needed to reshape the future of AI for human benefit.


References

B. J. Fogg. (n.d.). B. J. Fogg. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/B._J._Fogg
Behavior Design Lab. (n.d.). Stanford Behavior Design Lab. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Stanford_Behavior_Design_Lab
Captology. (n.d.). Captology. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Captology
Frances Haugen. (n.d.). Frances Haugen. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Frances_Haugen
2021 Facebook leak. (n.d.). 2021 Facebook leak. Wikipedia. Retrieved from https://en.wikipedia.org/wiki/2021_Facebook_leak

Zen and the Art of Dissatisfaction – Part 20

The Triple Crisis of Civilisation

“At the time I climbed the mountain or crossed the river, I existed, and the time should exist with me. Since I exist, the time should not pass away. […] The ‘three heads and eight arms’ pass as my ‘sometimes’; they seem to be over there, but they are now.”

Dōgen

Introduction

This blog post explores the intertwining of ecology, technology, politics and data collection through the lens of modern civilisation’s crises. It begins with a quote by the Japanese Zen master Dōgen, drawing attention to the temporal nature of human existence. From climate emergency to digital surveillance, from Brexit to barcodes, the post analyses how personal data has become the currency of influence and control.


Originally published in Substack: https://mikkoijas.substack.com/

The climate emergency currently faced by humanity is only one of the pressing concerns regarding the future of civilisation. A large-scale ecological crisis is an even greater problem—one that is also deeply intertwined with social injustice. A third major concern is the rapidly developing situation created by technology, which is also connected to problems related to nature and the environment.

Cracks in the System: Ecology, Injustice, and the Digital Realm

The COVID-19 pandemic revealed new dimensions of human interaction. We are dependent on technology-enabled applications to stay connected to the world through computers and smart devices. At the same time, large tech giants are generating immense profits while all of humanity struggles with unprecedented challenges.

Brexit finally came into effect at the start of 2021. On Epiphany of that same year, angry supporters of Donald Trump stormed the United States Capitol. Both Brexit and Trump are children of the AI era. Using algorithms developed by Cambridge Analytica, the Brexit campaign and Trump’s 2016 presidential campaign were able to identify voters who were unsure of their decisions. These individuals were then targeted via social media with marketing and curated news content to influence their opinions. While the data for this manipulation was gathered online, part of the campaigning also happened offline, as campaign offices knew where undecided voters lived and how to sway them.

I have no idea how much I am being manipulated when browsing content online or spending time on social media. As I move from one website to another, cookies are collected, offering me personalised content and tailored ads. Algorithms working behind websites monitor every click and search term, and AI-based systems form their own opinion of who I am.

Surveillance and the New Marketplace

In a 2013 study, a statistical analysis algorithm analysed the likes of 58,000 Facebook users. The algorithm guessed users’ sexual orientation with 88% accuracy, skin colour with 95% accuracy, and political orientation with 85% accuracy. It also guessed with 75% accuracy whether a user was a smoker (Kosinski et al., 2013).

Companies like Google and Meta Platforms—which includes Facebook, Instagram, Messenger, Threads, and WhatsApp—compete for users’ attention and time. Their clients are not individuals like me, but advertisers: the companies operate under an advertising-based revenue model, and people like me are the users whose attention and time they compete for.

Facebook and other similar companies that collect data about users’ behaviour will presumably have a competitive edge in future AI markets. Data is the oil of the future. Steve Lohr, long-time technology journalist at the New York Times, wrote in 2015 that data-driven applications will transform our world and behaviour just as telescopes and microscopes changed our way of observing and measuring the universe. The main difference with data applications is that they will affect every possible field of action. Moreover, they will create entirely new fields that have not previously existed.

In computing, the word “data” refers to numbers, letters or images as such, without specific meaning. A data point is an individual unit of information: generally, any single fact can be considered a data point. In a statistical or analytical context, a data point is derived from a measurement or a study; “data point” is often simply the singular of “data”.

From Likes to Lives: How Behaviour Becomes Prediction

Decisions and interpretations are created from data points through a variety of processes and methods, enabling individual data points to form information applicable to some purpose. This process is known as data analysis: the aim is to derive interesting and comprehensible high-level information and models from the collected data, from which useful conclusions can be drawn.

A good example of a data point is a Facebook like. A single like is not much in itself and cannot yet support major interpretations. But if enough people like the same item, even a single like begins to mean something significant. The 2016 United States presidential election brought social media data to the forefront. The British data analytics firm Cambridge Analytica gained access to the profile data of millions of Facebook users.

The data analysts hired by Cambridge Analytica could make highly reliable stereotypical conclusions based on users’ online behaviour. For example, men who liked the cosmetics brand MAC were slightly more likely to be homosexual. One of the best indicators of heterosexuality was liking the hip-hop group Wu-Tang Clan. Followers of Lady Gaga were more likely to be extroverted. Each such data point is too weak to provide a reliable prediction. But when there are tens, hundreds or thousands of data points, reliable predictions about users’ thoughts can be made. Based on 270 likes, social media knows as much about a user as their spouse does.
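To illustrate how such weak signals accumulate, here is a toy sketch in the spirit of these models: each like nudges a log-odds score only slightly, yet a few hundred nudges together yield a confident prediction. The weights, the random likes and the counts below are invented for the illustration; they are not the study’s actual coefficients.

```python
import math
import random

def predict_probability(log_odds_contributions):
    """Logistic model: sum many small log-odds nudges and squash
    the total into a probability between 0 and 1."""
    score = sum(log_odds_contributions)
    return 1 / (1 + math.exp(-score))

# Invented per-like contributions for a single trait. Each one is tiny:
# a single like proves almost nothing on its own.
random.seed(0)
one_like = [0.05]
ten_likes = [random.uniform(-0.02, 0.12) for _ in range(10)]
spouse_level = [random.uniform(-0.02, 0.12) for _ in range(270)]

print(f"  1 like:  {predict_probability(one_like):.2f}")      # barely better than a coin flip
print(f" 10 likes: {predict_probability(ten_likes):.2f}")     # a weak lean
print(f"270 likes: {predict_probability(spouse_level):.2f}")  # near certainty
```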

The collection of data is a problem. Another issue is the indifference of users. A large portion of users claim to be concerned about their privacy, while simultaneously worrying about what others think of them on social platforms that routinely violate their privacy. This contradiction is referred to as the Privacy Paradox. Many people claim to value their privacy, yet are unwilling to pay for alternatives to services like Facebook or Google’s search engine. These platforms operate under an advertising-based revenue model, generating profits by collecting user data to build detailed behavioural profiles. While they do not sell these profiles directly, they monetise them by selling highly targeted access to users through complex ad systems—often to the highest bidder in real-time auctions. This system turns user attention into a commodity, and personal data into a tool of influence.
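To make the idea of a real-time auction concrete, the sketch below shows a deliberately simplified second-price auction, a mechanism many ad exchanges have used in some variant: advertisers bid for the right to show one ad to one profiled user, and the highest bidder wins but pays roughly the runner-up’s price. The bidders, amounts and profile are made up, and real exchanges layer quality scores, price floors and other rules on top.

```python
def run_ad_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Toy second-price auction: the highest bidder wins the impression
    and pays the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Hypothetical bids for showing one ad to one profiled user
# ("30-45, urban, recently searched for running shoes").
bids = {"ShoeBrandA": 0.42, "InsuranceCo": 0.31, "TravelSite": 0.18}
winner, price = run_ad_auction(bids)
print(f"{winner} wins the impression and pays €{price:.2f}")
```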

The Privacy Paradox and the Illusion of Choice

German psychologist Gerd Gigerenzer, who has studied the use of bounded rationality and heuristics in decision-making, writes in his excellent book How to Stay Smart in a Smart World (2022) that targeted ads usually do not even reach consumers, as most people find ads annoying. For example, eBay no longer pays Google for targeted keyword advertising because they found that 99.5% of their customers came to their site outside paid links.

Gigerenzer calculates that Facebook could charge users for its service. Facebook’s ad revenue in 2022 was about €103.04 billion. The platform had approximately 2.95 billion users. So, if each user paid €2.91 per month for using Facebook, their income would match what they currently earn from ads. In fact, they would make significantly more profit because they would no longer need to hire staff to sell ad space, collect user data, or develop new analysis tools for ad targeting.
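The arithmetic behind that €2.91 figure is simple enough to check in a few lines of code, using the revenue and user numbers quoted above:

```python
ad_revenue_2022 = 103.04e9   # Facebook's 2022 ad revenue in euros (figure quoted above)
users = 2.95e9               # approximate number of users

per_user_per_year = ad_revenue_2022 / users
per_user_per_month = per_user_per_year / 12

print(f"Per user per year:  €{per_user_per_year:.2f}")   # ≈ €34.93
print(f"Per user per month: €{per_user_per_month:.2f}")  # ≈ €2.91
```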

According to Gigerenzer’s study, 75% of people would prefer that Meta Platforms’ services remain free, despite privacy violations, targeted ads, and related risks. Of those surveyed, 18% would be willing to pay a maximum of €5 per month, 5% would be willing to pay €6–10, and only 2% would be willing to pay more than €10 per month.

But perhaps the question is not about money in the sense that Facebook would forgo ad targeting in exchange for a subscription fee. Perhaps data is being collected for another reason. Perhaps the primary purpose isn’t targeted advertising. Maybe it is just one step toward something more troubling.

From Barcodes to Control Codes: The Birth of Modern Data

But how did we end up here? Today, data is collected everywhere. A good everyday example of our digital world is the barcode. In 1948, Bernard Silver, a technology student in Philadelphia, overheard a local grocery store manager asking his professors whether they could develop a system that would allow purchases to be scanned automatically at checkout. Silver and his friend Norman Joseph Woodland began developing a visual code, based on Morse code, that could be read with a light-based scanner. Their invention was only standardised into the current barcode system in the early 1970s. Barcodes have enabled a new form of logistics and more efficient distribution of products. Products have become data, whose location, packaging date, expiry date, and many other attributes can be tracked and managed by computers in large volumes.
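As a small illustration of how a product becomes machine-readable data, here is a sketch of the check-digit rule used by today’s EAN-13 barcodes, the standard family that grew out of that early-1970s work. The article number in the example is made up.

```python
def ean13_check_digit(first_12_digits: str) -> int:
    """Compute the EAN-13 check digit for the first 12 digits of a barcode.

    Digits in odd positions (1st, 3rd, ...) are weighted 1, digits in even
    positions are weighted 3; the check digit brings the weighted sum up to
    the next multiple of ten.
    """
    assert len(first_12_digits) == 12 and first_12_digits.isdigit()
    weighted_sum = sum(
        int(d) * (1 if i % 2 == 0 else 3)   # i = 0 is the 1st (odd) position
        for i, d in enumerate(first_12_digits)
    )
    return (10 - weighted_sum % 10) % 10

# A made-up article number: the scanner recomputes this digit at the till
# and rejects the scan if it does not match the printed 13th digit.
print(ean13_check_digit("640123450001"))  # -> 6
```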

Conclusion

We are living in a certain place in time, as Dōgen described—an existence with a past and a future. Today, that future is increasingly built on data: on clicks, likes, and digital traces left behind.

As ecological, technological, and political threats converge, it is critical that we understand the tools and structures shaping our lives. Data is no longer neutral or static—it has become currency, a lens, and a lever of power.


References

Gigerenzer, G. (2022). How to stay smart in a smart world: Why human intelligence still beats algorithms. Penguin.

Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behaviour. Proceedings of the National Academy of Sciences, 110(15), 5802–5805. https://doi.org/10.1073/pnas.1218772110

Lohr, S. (2015). Data-ism: The revolution transforming decision making, consumer behavior, and almost everything else. HarperBusiness.

Dōgen / Sōtō Zen Text Project. (2023). Treasury of the True Dharma Eye: Dōgen’s Shōbōgenzō (Vols. I–VII, Annotated trans.). Sōtōshū Shūmuchō, Administrative Headquarters of Sōtō Zen Buddhism.

Zen and the Art of Dissatisfaction – Part 7.

Voices Within – Exploring the Inner Dialogue

Originally published in Substack: https://substack.com/inbox/post/160343816

The Canadian-born British experimental psychologist and philosopher, Bruce Hood, specialises in researching human psychological development in cognitive neuroscience. Hood works at the University of Bristol, and his research focuses on intuitive theories, sense of self, and the cognitive processes underlying adult magical thinking. In his book The Self Illusion: Why There Is No ‘You’ Inside Your Head (2012), Hood argues that our internal dissatisfaction stems from a form of psychological uncertainty. It is still very common to think that some kind of internal self or soul is the core that separates humans from other animals. It is also very common to think that after the human body dies, this core continues to live forever through some form of reincarnation, either here or in another parallel dimension. Our understanding of the internal self does not arise from nothing. It is the result of a long developmental process that takes time to build. According to Bruce Hood, this is an illusion because the sense of self has no permanent anchor or form, yet people experience it as very real and often claim it to be the essence that makes us who we are.

In the neurosciences, human consciousness is often divided into several conceptual components. The first of these is awareness, referring to whether we are awake or not: when we are asleep, we are in a mild and temporary state of unconsciousness. The second significant concept is attention, which moves between different activities depending on what requires our focus at any given moment. The third is experiential consciousness, which covers the subjective experiences occurring within us, such as how salt tastes or what sensation the colour red evokes. The fourth is reflective consciousness. If something happens to us at the level of experiential consciousness, we begin to consciously ponder how we should act. For instance, if we hammer our finger with a mallet, the event immediately enters our experiential consciousness as a very intense experience, but almost simultaneously it jumps into our reflective consciousness, where we start weighing the severity of the injury and what we should do to ease the pain and prevent further calamity. Should I cry for help? Should I go to the hospital? Or shall I just take a photo and post it on Instagram? Conscious thought flows in this way.

Reflective consciousness operates on experience and leads to conscious thinking, which is characterised by the inability to think about more than one thing at a time. One important form of conscious thought is self-awareness, which also involves awareness of our own body. The concept of self represents conscious thought and is formed by beliefs and thoughts about an individual’s personal history, identity, and future plans.

Self-awareness is also referred to as the self or the sense of self, a term I have used and will continue to use in my writing. However, it is important to note that this term does not refer to identity. That distinction is often explored in the fields of psychology and sociology.

The sense of self can be seen as an evolutionary tool or feature that helps the organism stay alive. It makes organisms feel that they are very important, more important than anyone or anything else. However, this sense of self can also transform over time and through adverse experience into a process that turns against itself. Such processes have been linked to conditions such as severe depression, in which the sense of self falls into a deep rut and runs through endless loops of self-loathing.

American journalist and Harvard professor Michael Pollan writes in his book How to Change Your Mind (2019) that modern psychedelic therapies have shown promising results for patients with depression. Pollan describes the research of British neuroscientist and psychologist Robin Carhart-Harris (2010), who has studied the role of the brain’s default mode network (DMN) in the formation of the self, ego, or sense of self. The DMN is a neurological process that switches on when a person is not engaged in goal-directed activity, and it has been linked to the formation of the sense of self.

The human experience of the self is a biographical anchor created by multiple overlapping neural processes in our brain. We get the feeling that everything that happens in our lives happens to a “me”: the self is that which experiences all things, and that inner centre feels uniquely significant. Without our internal awareness and experience of the self, we would never have conceived of the Universal Declaration of Human Rights, drafted by a UN committee chaired by Eleanor Roosevelt, which protects a person’s right to physical integrity. We believe that every human being is unique and valuable because we all have an inner sense of self.

However, humans are not the only animals with some form of internal self-awareness. When visiting the London Zoo in 1838, Charles Darwin (1809–1882) saw an orangutan named Jenny becoming upset when a keeper teased her with an apple. This made Darwin reflect on the orangutan’s subjective experience. He noticed Jenny looking into a mirror and wondered if she recognised herself in the reflection.

American psychologist Gordon G. Gallup (1970) experimentally studied self-recognition in two male and two female wild chimpanzees, none of whom had previously seen a mirror. Initially, the chimpanzees made threatening gestures in front of the mirror, perceiving their reflections as threats. Eventually, they began using the mirror for self-directed behaviours, such as grooming parts of their bodies they couldn’t see without it, picking their noses, grinning, and blowing bubbles at their reflections.

Bruce Hood (2012) writes that this process is crucial in human development because, without it, humans would struggle in socially challenging and complex environments. Human children fail the mirror test until around 18 months of age. Before this, they may think the reflection is of another child and look behind the mirror. However, by 18 months, they understand that the person reflected in the mirror is themselves. But humans and chimpanzees are not the only animals to pass the mirror test. Crows, dolphins, and orcas also pass the test. Elephants do too, but cats do not (although my guess is that cats could pass the test if they wanted).

The sensation of self that the human mind creates – the thing that feels like a concrete structure and is referred to as the self, ego, or “I” – is a process aimed at protecting us from both internal and external threats. When everything functions as it should, our inner narrator keeps the organism on track, helping it achieve its goals and meet its needs, especially eating, seeking shelter, and reproducing. This process works well under normal circumstances, but it is inherently conservative. Our experience of the self is a process, not a fixed entity, though it often feels like one. It emerges as a result of various mental functions and manifests as an internal narrator, or even as an internal dialogue.

The dialogue generated by the self often sounds like someone explaining to us, as if to a blind person, what is happening around us. We enter a room and might hear someone say inside our mind, “Look, what a nice place this is! Those wallpapers are beautiful, and the furniture is great, but those electrical outlets need replacing!”

Sometimes, we might hear an internal negotiation, such as whether to run through a red light to catch a tram. Running through traffic might put us in physical danger or cause us to be socially judged. Social shame is one of the worst things a person can experience, and our internal narrator picks up on such details immediately and warns us to at least consider the possibility. At times, our narrator can turn into an internal tyrant, turning its energy against us.

This narrator, or brain talk, sounds very reasonable, but it often shows how our minds are trying to preserve the structures formed earlier, built from previous experiences. Unfortunately, sometimes we’re left with that inner narrator and nothing else, which can leave one feeling out of place. And when this narrator becomes rigid and inflexible, it has the power to push us into states of psychological distress, even driving us into despair.

In cases where brain talk gets stuck in repetitive loops, as is often the case with anxiety, depression, or psychosis, people feel their lives are determined by this narrator, an inner force living inside their heads. A stuck self can feel isolated in its inner world and find it impossible to reach outside. The idea of having self-awareness — of being someone in this world — becomes crushed under the weight of these loops. For some, it is as if the voice of the mind becomes detached from the physical person, forced into another dimension where everything becomes dark and disconnected from the social world.

American author David Foster Wallace (1962–2008), who had much experience of this process, reminded us in his 2005 commencement speech at Kenyon College of the old cliché that the mind is an excellent servant but a terrible master. Yet this cliché expresses a terrible truth. According to Wallace, it is no coincidence that people who commit suicide with firearms almost always shoot themselves in the head. They are shooting that narrator turned master — a terrible dark lord.

Mind out of a Dolmio pasta sauce commercial

People experience their brain talk in a unique and private way. Most of us have some form of inner voice. A voice that guides, directs, and commands us. A voice that warns “Watch out! Car!” or “Remember to buy toilet paper.” For many of us, this voice sounds like our own, but for some people, their inner narrator is not a straightforward speech that scolds, advises, or reminds them of things. For some, brain talk may take the form of an Italian arguing couple or a calm interviewer. Or it may not be a voice at all, but a taste, feeling, or colour. In some cases, there is no voice at all, only deep and calm silence.

English journalist Sirin Kale (2021) wrote an interesting article on this internal narrator, presenting a few rare examples of different types of inner voices. One of the people interviewed for the article, a 30-year-old English woman named Claudia, hears her inner dialogue in a unique way. Claudia has never been to Italy, nor does she have Italian family or friends. She has no idea why the loud, arguing Italian couple has taken over her inner voice. Claudia says, “I have no idea where this came from. It’s probably offensive to Italians.” The arguing couple in Claudia’s mind sounds like something straight out of a Dolmio pasta sauce commercial. They are expressive and prone to waving their hands and shouting. When Claudia needs to make a decision in her life, this Italian couple takes the reins.

The Italian couple living inside Claudia’s mind argues passionately about almost anything. Claudia finds it very helpful because they do all the work for her. The couple is always in the kitchen and surrounded by food. Claudia has not yet named her Italians, but they have helped her make important decisions, including encouraging her to quit her job and pursue her lifelong dream of going to sea.

Kale writes that the Italian woman in Claudia’s mind supported her resignation, but her husband was more cautious. The Italian man said, “It’s a stable job!” and the woman responded, “Let her enjoy life!” The woman won, and Claudia left for a job on the seas in Greece. Overall, this Italian couple has helped Claudia live a happier life, and they’ve even calmed down a bit. Claudia says, “Less shouting. They just argue now.”

Dr Hélène Loevenbruck of Grenoble Alpes University, mentioned in the article, argues that brain talk arises in the same way that our thoughts turn into actions: our brains predict the consequences of actions. The same principle of prediction also applies to human speech. When we speak, our brains create a predictive simulation of the speech in our minds in order to correct any potential mistakes. The inner voice is thought to arise when our minds plan verbalised actions but decide not to send motor commands to the speech muscles. Loevenbruck says this simulated auditory signal is the small voice we hear in our minds. She explains that, for the most part, we experience what she calls inner language, a more comprehensive term for the phenomenon, because, for example, people with hearing impairments do not hear an inner voice but might see sign language or observe moving lips. (Loevenbruck et al., 2018)

In exploring the sense of self and inner voice, we’ve seen how the self emerges as a process rather than a fixed entity. It is shaped by our own evolution, culture, and personal experience. Our brain talk can guide us, deceive us, or even take on unexpected forms and destroy us, yet it remains central to our sense of identity. It feels like the core for which everything happens. But is the self really real? And if not, if the self is an illusion, as neuroscientists and psychologists suggest, what does that mean for how we live? In the next post, I’ll dive into Buddhist perspectives on the self—examining how centuries-old wisdom aligns with modern psychological insights.


References

Carhart-Harris, R. L., & Friston, K. J. (2010). The default-mode, ego-functions and free-energy: A neurobiological account of Freudian ideas. Brain, 133(4), 1265–1283.

Gallup, G. G. (1970). Chimpanzees: Self-recognition. Science, 167(3914), 86–87.

Hood, B. (2012). The self illusion: How the social brain creates identity. HarperCollins Publishers.

Kale, S. (2021). The last great mystery of the mind: Meet the people who have unusual – or non-existent – inner voices. The Guardian, 25 October 2021. https://www.theguardian.com/science/2021/oct/25/the-last-great-mystery-of-the-mind-meet-the-people-who-have-unusual-or-non-existent-inner-voices (visited 1 April 2025).

Loevenbruck, H., et al. (2018). A cognitive neuroscience view of inner language: To predict, to hear, to see and to feel. In P. Langland-Hassan & A. Vicente (Eds.), Inner Speech: New Voices (pp. 131–167). Oxford University Press.

Pollan, M. (2019). How to change your mind: The new science of psychedelics. Penguin Books.

Zen and the Art of Dissatisfaction – Part 6.

Good-Natured: On the Roots of Human Kindness

Originally published on 21 March 2025 on Substack: https://substack.com/home/post/p-159540266

Dutch historian Rutger Bregman, in his beautiful work Humankind: A Hopeful History (2020), turns our collective gaze toward the innate goodness of humanity. In this Substack series, I have explored and will continue to explore themes inspired by Bregman’s argument—that human nature is, at its core, good—and bring in reflections from my own research among hunter-gatherer communities. Bregman revisits and reinterprets famous stories and examples that argue for the inherent evil of human beings, revealing how these cases have often been misunderstood or misrepresented. Stories that highlight the darker side of humanity tend to align with public opinion and thus sell better, he notes, but that doesn’t make them accurate.

Bregman begins his exploration with a powerful account of the London Blitz—and later the strategic bombings in Germany—during World War II. The military commanders responsible believed that sustained bombing would crush civilian morale and plunge society into chaos, ultimately giving them a strategic edge. They were wrong. Civilians regarded the bombings as a necessary evil, and in the face of destruction, human kindness blossomed. Despite the deaths and destroyed homes, people helped one another in a calm and polite manner. Many have even remembered the London Blitz with a strange fondness—a time when people were kind to each other.

Another striking story in Bregman’s book is that of a real-life Lord of the Flies scenario. William Golding’s 1954 novel depicts English schoolboys stranded on a deserted island, descending into savagery. Bregman went to great lengths to find a real-life equivalent and discovered a 1965 case in which six teenage boys were shipwrecked on ʻAta, an uninhabited island south of Tonga’s main islands (the so-called Tongan castaways). They survived for 15 months, and when they were finally found by chance, all were in good health—one had broken his leg, but the others cared for him, and by the time they returned, his leg had fully healed. The boys had grown food, built a gym, and kept a fire burning the entire time by rubbing sticks together.

Throughout Bregman’s work, there’s a deep faith in human kindness, supported by concrete evidence. Ancient hunter-gatherers were not primarily violent, and this also holds true for the last remaining hunter-gatherer groups today. Bregman suggests that humans have self-domesticated through sexual selection, gradually favouring traits that make us more cooperative and less violent. One telling example is from the Battle of Gettysburg, where numerous muskets were found loaded 20 times or more, as reloading provided a perfect excuse not to fire again. Bregman discusses other examples of extreme lengths soldiers have gone to avoid killing another human being.

For many Indigenous societies, violence toward others is an alien and even repulsive concept. Bregman recounts how the U.S. Navy showed Hollywood movies to the inhabitants of the small Ifalik atoll in the Pacific, hoping to foster goodwill. But the movies horrified the islanders. The on-screen violence was so distressing that they felt physically ill for days. Years later, when an anthropologist arrived, the locals still asked, “Was it true? Are there really people in America who kill other people?” There is a deep mystery at the heart of human history: if we have an innate aversion to violence, where did things go wrong?

I’ve been fortunate to spend time in the Kalahari Desert with local Ju/’hoan hunter-gatherers. This experience showed me just how different we Westerners are. Despite decades of exposure to Western culture and every imaginable injustice from our side, they remain open, happy, curious, cheerful, and helpful.

The people I met call themselves Ju/’hoansi, meaning “real people.” Many Indigenous groups refer to themselves, and others with similar lifestyles, simply as “people.” Today, the descendants of southern Africa’s hunter-gatherers, who still speak their ancestral languages, have accepted the general term San, which I have also used when referring broadly to southern African hunter-gatherers. The name likely derives from a derogatory Khoekhoe term meaning “those who live in the bush and eat from the ground,” or possibly from sonqua, meaning “thief.” Other names—Bushman, Boesman, Basarwa, Bakalahari—are colonial impositions. The Kalahari San are gradually moving away from traditional hunting: many now raise chickens and goats and supplement their diets with milk, grains, tea, and sugar. Thus, calling them hunter-gatherers is somewhat misleading.

During my first research expedition, I had three primary goals: 1) find examples of persistence hunting; 2) understand the link between persistence hunting and trance ceremonies; 3) document a persistence hunt. On my first day, it became clear that no one in the camp remembered anyone chasing down and catching a large antelope. Ultimately, however, I uncovered valuable insights into the relationship between hunting and ceremony, crucial for completing my doctoral dissertation Fragments of the Hunt: Persistence Hunting, Tracking and Prehistoric Art (2017).

Bregman’s book reignited a question that has long troubled me: if ancient and modern hunter-gatherers are egalitarian, nonviolent, and friendly, why do modern societies periodically elect authoritarian despots? The San of the Kalahari go to great lengths to avoid envy; anyone behaving selfishly or possessively is swiftly admonished.

Elizabeth Marshall Thomas (2006) writes that necklaces and other ornaments were common gifts among the San when researchers first visited in the early 1950s. This gift economy was called xaro (or hxaro). Valuable or desirable items and clothing were quickly given away as xaro gifts to prevent envy, preserving the delicate structure of small communities. Xaro partnerships could last a lifetime. The gift giver waited patiently for reciprocation, which would always eventually come. These gifts were carefully considered—metal knives or ostrich shell jewelry, for example—and the relationships they forged reduced jealousy, ensuring reciprocity and generosity. Trance dances were another key method of relieving social tension.

As seen with xaro, people invent ways to strengthen social bonds. In the San people’s case, avoiding envy was paramount. If someone produced something special and desirable, they were eager to give it away as a xaro gift, preventing envy and securing their place in a chain of social esteem.

In the 1960s, American social psychologist Stanley Milgram conducted obedience experiments to measure how far people would go in obeying authority, even when it involved immoral or inhumane actions. Participants were instructed to administer what they believed were dangerous electric shocks to others (who were actually actors). The study concluded that under authority, humans could commit extreme cruelty.

Milgram (1974) described this as obedience, or the “agentic state”: the individual sees themselves as an instrument of another’s wishes, not responsible for their own actions. This mentality came under scrutiny after WWII, as the Nazi regime’s capacity for cruelty was examined. Philosopher Hannah Arendt, in Eichmann in Jerusalem: A Report on the Banality of Evil (1963), describes how destruction operated through a bureaucratic machine, in which hierarchical actors worked together to solve the mundane and ”banal” problem of genocide.

Bregman argues that Milgram’s experiments are often cited as evidence of human cruelty, but they actually show that people commit harmful acts only under persuasion, believing they are doing good—like helping researchers get results. Milgram found that direct orders led to defiance; harsh commands didn’t work.

Psychologists Alexander Haslam and Stephen Reicher (2012) replicated the experiment and noted that participants wanted to collaborate with the persuading researcher. They were even grateful to be part of the study. Participants retrospectively appreciated the chance to contribute to human understanding.

The Myth of Progress

According to Rutger Bregman, good intentions were also behind the infamous Stanford prison experiment in 1971 (Zimbardo 1972). The same applied to David Jaffe, who originally came up with the idea and inspired Professor Philip Zimbardo to carry it out. When Jaffe persuaded the prison experiment guards to be more aggressive, he referred to the noble goals of the study. In other words, violent behaviour was encouraged, and the participants genuinely wanted to help. We are, by nature, good natured, as the Dutch primatologist Frans de Waal (1996) has persuasively shown through his research on primate behaviour.

In the Kalahari, a small group of people still live a life that vaguely resembles the lifestyle of their hunter-gatherer ancestors. Many traditional skills remain well remembered, such as where to find edible plants and how to track animals. However, this is increasingly coloured by a shift toward a more Western way of life. They now drink black tea sweetened with sugar and eat cornmeal with milk. All of this supplements a diet that was, until recently, sourced almost entirely from the natural environment.

A young hunter named Kxao introduced us to local plants. He showed us how a delicate leaf growing next to a bush belonged to a tuber plant rich in water. After digging it up, Kxao carefully refilled the hole and replanted the leaf so the tuber could continue living. He also cleaned up the mess left by a porcupine, which had rummaged through the ground in search of wild onion roots. Kxao tidied the area and replanted the fragile onion stems, explaining that the tubers are toxic to humans, but the young shoots are very nice and taste like spring onions. He also showed us plants that only kudu antelopes and other animals consume.

Humans have lived in the Kalahari continuously for about 100,000 years—perhaps even 200,000. It might seem like their way of life hasn’t changed, but this can be deceptive. They have coexisted with pastoralist neighbours since at least the 1950s and have interacted with other settlers for thousands of years. It would be wrong to say that their culture represents something ancient. The truth is that their lifestyle is just as susceptible to cultural change—new ways of doing and thinking—as ours. What might appear ancient to us is actually their unique version of a modern lifestyle.

My research visit to the Kalahari led me to question the legitimacy of modern industrialised civilisation and Western notions of “progress.” The San peoples once inhabited all of southern Africa, from Victoria Falls down to the Cape of Good Hope. Around 2,000 years ago, Khoekhoe pastoralists arrived from what is now northern Botswana and spread all the way to the southern tip of Africa. The Khoekhoe were quite similar to the San, but the main difference lay in their nomadic lifestyle and domesticated animals.

A few hundred years later, the first Bantu peoples arrived in the region. Compared to the smaller-framed San and Khoekhoe, the Bantus were giants. They originated from the Gulf of Guinea, in what is now Nigeria and Cameroon, where their migration began 3,500 years ago. However, it took thousands of years for their culture to reach southern Africa.

The Bantus had the advantage of technology. They were among the first farmers south of the Sahara, making pottery, keeping livestock, and crafting tools and weapons from iron. They also drank cow’s milk and had the genetic ability to digest lactose—unlike the hunter-gatherers of the south. These cultural adaptations and innovations enabled the Bantus to conquer much of sub-Saharan Africa. Today, Bantu languages are the most widely spoken on the African continent, with similar words found in Nigeria, Kenya, and South Africa.

The Bantu expansion dealt a heavy blow to the San, who had managed to coexist with Khoekhoe settlers. Now the San were forced to vacate areas suitable for farming and grazing. Conflict ensued between the San, Khoekhoe, and Bantu. The San were branded as cattle thieves for killing livestock that intruded on their lands. However, the real death knell for the San came in the late 1600s, when the first European settlers began to seriously colonise southern Africa. Europeans allied with both the Khoekhoe and the Bantus and dehumanised the San, hunting them for sport.

Europeans devised derogatory terms for their new neighbours, like the infamous “Hottentots”—a Dutch slur meaning stutterer, used for both San and Khoekhoe. Due to physical and linguistic similarities, settlers lumped them into a single group, Khoisan.

Initially, the San lived alongside European settlers, who sometimes attempted to teach them new ways. Farmers even gave them livestock, but the San, accustomed to sharing everything equally, slaughtered the animals and distributed the meat evenly. The concept of owning animals was foreign to them because ownership defied sharing. In the early 1700s, the Dutch East India Company (VOC) began a full-scale war against the indigenous peoples on the northern frontier of the Cape Colony, who were resisting settler expansion.

The VOC was the biggest megacorporation of its time—the founder of the world’s first stock exchange—and held near-sovereign powers: it could wage war, imprison and execute suspects, mint money, and establish colonies. By the end of the 18th century, the VOC authorized privately formed commando units to evict and, at times, kill any Khoisan they found. In 1792, they began paying bounties for captured Khoisan.

By the early 1800s, the Khoisan genocide in what is now the Cape Province and southern Namibia was nearly complete. The Khoisan sought refuge in northeastern South Africa and present-day Lesotho. But in 1830, Dutch settlers reached these regions, kidnapped Khoisan children, and killed their parents. The seasonal game that had long sustained the San was hunted to extinction in the Drakensberg mountains, leaving them starving.

Those who remained resorted to cattle theft, which was often punished by death. Between 1845 and 1872, colonial police forces ruthlessly hunted and killed all San they could find. The last San chief, Soai, was brutally murdered by members of the Sotho, a Bantu-speaking group, who disemboweled him on the banks of the Orange River in 1872. All San men were killed; women and children were marched to Leribe, where their descendants lived into the 20th century. The Khoisan who survived were forced to assimilate.

As late as 1870, only ten percent of Africa was under European control. German Chancellor Otto von Bismarck convened the Berlin Conference in 1884–1885, bringing together representatives of the European powers, Russia, the Ottoman Empire, and the United States. Fourteen non-African nations were represented. A small group of white men determined the future of Africa and its people.

The Berlin Conference is often regarded as the formalisation of Africa’s colonisation. Its general act stated that any nation that claimed a portion of the African coast also gained the interior lands beyond it—without needing consent from local populations. King Leopold II of Belgium was granted control over what he dubbed the Congo Free State, initiating one of the bloodiest resource extractions in history. Over the next decade, around four million Congolese were brutally killed. The actual death toll might be higher; the Congolese population fell from 20–30 million to just eight million.

The partitioning of Africa spurred by the conference paved the way for Western incursion into the continent’s interior, ignoring tribal and ethnic boundaries. Territories were politely divided over a cup of hot tea or a glass of chilled gin. In 1884, only a tenth of Africa was under European control. By 1914, only a tenth remained under African rule. Only Ethiopia and Liberia remained independent.

Belgium was not the only nation to violently subjugate its new territories. The 20th century’s first genocide took place in German-controlled Namibia, in an event referred to as the Herero and Nama Genocide. The Herero (Bantu) and Nama (Khoekhoe) rebelled against their German overlords. With determination, organisation, and modern weapons, the Germans systematically exterminated around 100,000 Herero and 10,000 Nama by driving them into the Kalahari Desert, away from drinkable water. By 1905, the remaining locals were imprisoned in the first German concentration camp on Haifischinsel (Shark Island), a peninsula off Lüderitz, Namibia. The camp was closed in 1907 after 1,000–3,000 people had died. By then, the last southern African hunter-gatherers lived only in the Kalahari Desert.

Shark Island may have hosted the first German concentration camp—but it was not the last. Just over a decade later, in the summer of 1918, the Germans built their next concentration camps in Finland.

Originally published on 21 March 2025 on Substack: https://substack.com/home/post/p-159540266


Resources:

Arendt, H. (1963). Eichmann in Jerusalem: A Report on the Banality of Evil. Viking Press. LINK

Bregman, R. (2020). Humankind: A Hopeful History. Bloomsbury. LINK

de Waal, F. B. M. (1996). Good natured: The origins of right and wrong in humans and other animals. Harvard University Press. LINK

Haslam, S. A., & Reicher, S. D. (2012). Contesting the nature of conformity: What Milgram and Zimbardo’s studies really show. PLoS Biology, 10(11), e1001426. LINK

Ijäs, M. (2017). Fragments of the Hunt: Persistence Hunting, Tracking and Prehistoric Art. Helsinki: Aalto University. LINK

Marshall Thomas, E. (2006). The Old Way: A Story of the First People. New York: Farrar, Straus and Giroux. LINK

Milgram, S. (1974). Obedience to authority: An experimental view. Tavistock, London. LINK

Zimbardo, P. G. (1972). Stanford Prison Experiment: A Simulation Study of the Psychology of Imprisonment. LINK

Zen and the Art of Dissatisfaction – Part 5.

Can Money Buy Happiness?

Originally published on 17 March 2025 on Substack: https://substack.com/home/post/p-159249876

I traveled to the Kalahari Desert in northeastern Namibia, to the Nyae Nyae Conservancy on the border of Botswana, in November 2014 for two reasons. First, to collect ethnographic data for my doctoral dissertation. Second, to produce the accompanying documentary film The Origins (Ijäs & Kaunismaa 2018). My wife Maija and I lived in a tent on the roof of our 4WD vehicle, and sometimes we slept under the stars alongside local hunters, with nothing but our sleeping bags for shelter. Maija sang Brahms’ lullaby for us, and we explained to the hunters what angels were.

I had imagined (and hoped) that the Kalahari hunter-gatherers would be quite satisfied with their life, far from the psychological trappings of civilisation. This romanticised Rousseauian view had formed through reading books and research papers about the San people, especially since the Marshall family began visiting them in the 1950s. The San are often cited as one of the most thoroughly studied human groups in the world. There’s even a joke that every tribe has at least one white anthropologist. It was my time to be that guy.

Overall, the San do appear content with their lives, but they too have grievances and deep sources of dissatisfaction. They often wish for more wild animals to support their hunting culture. To compensate for the scarcity of game, they acquired a few goats and chickens in the spring of 2018. They also regularly bought milk from a nearby village outside the conservation area, where cattle farming is permitted. They were concerned about cattle herders crossing into their land from Botswana.

The San are happy to appear in traditional leather attire when cameras are rolling, but their culture has been changing quickly since the 1950s, as it has everywhere. In fact, the group I studied had returned to a hunting culture only in the 1990s, partly as a response to the negative effects of Western influence, such as alcoholism and social challenges. Yet, thanks to Western aid, the Nyae Nyae Conservancy remains a viable area for groups that still practice hunting and gathering, at least in part.

For the purposes of this ongoing Substack series, Zen and the Art of Dissatisfaction, it is important that even the last indigenous peoples living in hunting cultures are not fully satisfied with their circumstances. They wish for more liberties, better education, and a more varied diet. One of our guides dreamed of working in the film industry. This is understandable as many of their visitors carry film equipment.

As I’ve previously noted, deep and profound dissatisfaction exists even among Kalahari hunter-gatherers. But the question is: do these hunter-gatherers want something they don’t have simply because of an internal dissatisfaction, or because they have glimpsed Western wealth and been enchanted by the promise of material satisfaction? The answer is likely a bit of both, because for as long as our species Homo sapiens has existed, our actions have been marked by constant change, curiosity, and exploration.

Even when human cultures have settled in various parts of the world for extended periods, a closer look reveals that their cultures have been in constant dynamic motion. Their social structures, customs, art, food, clothing, tools, religions, and music have all evolved over time. Some indigenous peoples have changed their societal organisation, religions, property rights, and names with the seasons. Because of this dynamism, idealising indigenous cultures as somehow different than ours is a romanticised view. Seeing a foreign culture as superior is just the flip side of seeing it as inferior. Therefore, I try to be very cautious with such perspectives.

Swedish linguist, author and film maker Helena Norberg-Hodge is the founder and director of Local Futures, a non-profit dedicated to revitalising cultural and biological diversity and strengthening local communities and economies worldwide. In her book Ancient Futures (1991), she discusses cultural changes in Ladakh, a remote region in northern India bordering Pakistan and China. While politically part of India, culturally it is closer to Tibet. Ladakh remained largely isolated until 1962, when the first road was built over high mountain passes. In 1975, the Indian government opened Ladakh to tourism and Western development. Norberg-Hodge was one of the first Westerners to visit.

Norberg-Hodge describes Ladakh as a near paradise of social and ecological well-being that rapidly collapsed under external economic pressures. In the capital Leh, then with about 5,000 residents, cows were the main traffic hazard, and the air was crystal clear. Barley fields and farmhouses surrounded the city. Over the next 20 years, Norberg-Hodge witnessed Leh’s transformation. Streets filled with traffic and diesel fumes polluted the air. Soulless concrete housing projects sprawled into the distance. Water became undrinkable. Increased economic pressure led to unemployment and competition, sparking conflict between communities. Many changes were psychological.

On Norberg-Hodge’s first visit, all local houses in Ladakh were three-storied and beautifully painted. When she asked a young man to show her the poorest house in the village, he was puzzled—they had no concept of wealth inequality. Eight years later, that same man lamented their poverty, having seen images in the media of Westerners with fast cars and wealth. Suddenly, from his perspective, Ladakhi culture seemed primitive and poor.

Crime, depression, and suicide were rare in 1970s Ladakh. But in a short time, Western competition culture took root, and suicides became more common, even among schoolchildren. Until the 1970s, success and failure were communal experiences tied to tangible aspects of life like farming and family. Western consumer culture and the market economy brought their hamster wheel to Ladakh, dividing people into winners and losers and turning personal success into each individual’s mission and purpose in life.

Extreme individualism and the glorification of wealth have become so central to Western lifestyle that they seem quite natural or irrefutable. In the Western competitive mindset, happiness is always just around the corner, and we spend our lives trying to reach it. We imagine we need to succeed, and that success will lead us to more wealth. Financial security seems to be the ultimate state of happiness, but money itself doesn’t bring happiness. Money is a medium of exchange, based on trust, and it has value only because we agree it does. It is a means to an end, not an end in itself. Yet we see it as both, and that duality is worth exploring.

One might think that rich people would be happy in a world where wealth is the ultimate goal. But paradoxically, as they try to escape their own inner dissatisfaction, the world’s wealthiest people consume alcohol at much the same rate as society’s poorest and most desperate. Studies show money does not bring happiness—but neither does poverty.

Poverty brings depression, but it is not only the lack of money that does so; it is also the lack of freedom that poverty brings with it. Widespread depression among modern humans is largely due to a lack of control over one’s circumstances. Depression is more common in poverty, where people feel trapped and unable to improve their situation. Wealth allows freer decision-making without worrying about consequences. Wealth also enables better planning for the future. Wealthy people can enrol in universities and spend several years studying. The poor do not have this luxury. But even the rich suffer from the same existential emptiness and dissatisfaction. They seek meaning in luxury, exclusive holidays, fancy dinners, bespoke clothes, cars, watches—yet something is always missing. The inner void remains.

Did ancient hunter-gatherers have similar problems? Was their life freer in this sense? We might imagine they were fully capable of surviving in their environment, as long as they could find food, build shelter, and secure basic conditions for their family and offspring. After that, things were probably pretty good.

Ancient hunter-gatherers had no mortgages, insurance bills, electricity bills, credit card debt, or student loans. No college funds or extracurricular expenses for their children. Today, many people are up to their ears in debt. This abstract dependency on lenders causes various forms of diffuse, hard-to-define anxiety. Humans have a natural need for at least some freedom and control over their lives; otherwise, they fall into despair. The Western debt-based system of dependency on power structures fosters further dissatisfaction.

Has our civilised Western lifestyle become a trap? Could we do something about it? Could we escape somewhere to be free? David Graeber and archaeologist David Wengrow (2021) remind us that in colonial North and South America, captured indigenous people often chose to return to their own communities rather than remain in ”civilisation”. Settler children captured by indigenous groups, on the other hand, often preferred to remain with their indigenous captors. The main reasons may have been the intense social bonds among indigenous people: care, love, and above all, happiness—qualities impossible to replicate upon returning to civilisation. Graeber and Wengrow remind us that the concept of safety takes many forms. It’s one thing to know you statistically have a lower chance of being shot by an arrow. It’s another to know that you are surrounded by people who would deeply care if that were to happen.

Depression has been found to be more common in impoverished conditions (Brown & Harris 1978). However, neither freedom nor money ultimately saves us from dissatisfaction. Andrew T. Jebb (2018), a researcher at Purdue University in Indiana, USA, studied with his colleagues whether money brings happiness. Their study shows that money brings happiness up to a point, but not beyond a certain threshold. In Western Europe and here in Scandinavia, this threshold lies at around 50,000–100,000 euros in annual income, considerably higher than the average income. According to 2021 tax data, only 10.2% of the Finnish population fell within this magical happiness range, and a further 2% earned more than its upper bound. In other words, 87.8% of the population remained below it. The situation is even more challenging because about 13% of the population lives on a low income (below 60% of the median income). The low-income threshold for a single-person household in 2021 was approximately 16,200 euros per year. No wonder we are not satisfied.
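For readers who want the arithmetic spelled out, here is a minimal back-of-the-envelope sketch in Python. It simply restates the figures quoted above (the 50,000–100,000 euro satiation range, the 2021 Finnish income shares, and the 16,200 euro low-income limit) and classifies a few hypothetical incomes against them. The numbers and the income_bracket helper are illustrative assumptions for this sketch, not part of Jebb et al.’s study or of any official statistics.

```python
# A back-of-the-envelope check of the income figures quoted above.
# All numbers are taken from the text as reported (the Jebb et al. 2018
# satiation range and 2021 Finnish tax shares); they are illustrative,
# not an official dataset.

SATIATION_LOW = 50_000     # lower bound of the reported satiation range, EUR/year
SATIATION_HIGH = 100_000   # upper bound of the reported satiation range, EUR/year
LOW_INCOME_LIMIT = 16_200  # reported 2021 low-income limit, single-person household

# Reported population shares (Finland, 2021 tax data as quoted in the text)
share_within_band = 10.2   # % earning 50,000-100,000 EUR per year
share_above_band = 2.0     # % earning more than 100,000 EUR per year
share_below_band = 100.0 - share_within_band - share_above_band  # = 87.8

def income_bracket(annual_income_eur: float) -> str:
    """Classify an annual income against the thresholds quoted above."""
    if annual_income_eur < LOW_INCOME_LIMIT:
        return "low income (below 60% of the median)"
    if annual_income_eur < SATIATION_LOW:
        return "below the reported satiation range"
    if annual_income_eur <= SATIATION_HIGH:
        return "within the reported satiation range"
    return "above the reported satiation range"

if __name__ == "__main__":
    print(f"Share below the satiation range: {share_below_band:.1f}%")
    for income in (15_000, 35_000, 75_000, 120_000):
        print(f"{income:>7} EUR/year -> {income_bracket(income)}")
```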

This post is the fifth part of my ongoing Substack series, Zen and the Art of Dissatisfaction, exploring the roots of human dissatisfaction, the paradox of progress, and the question of whether a meaningful life is possible in a world designed for endless desire.



Resources:

Brown, G. W., & Harris, T. (1978). Social Origins of Depression: A Study of Psychiatric Disorder in Women. Tavistock: London. LINK

Graeber, David & Wengrow, David. (2021). The Dawn of Everything: A New History of Humanity. London: Allen Lane. LINK

Ijäs, M. R. (2020). Fragments of the hunt: Persistence hunting approach to rock art. Hunter Gatherer Research, 6(3–4). LINK

Ijäs, M. (2017). Fragments of the Hunt: Persistence Hunting, Tracking and Prehistoric Art. Helsinki: Aalto University. LINK

Ijäs, M. & Kaunismaa, M. (2018). The Origins: Fragments of the Hunt. Documentary Film. LINK

Jebb, A.T., Tay, L., Diener, E., & Oishi, S. (2018). Happiness, income satiation and turning points around the world. Nature Human Behaviour. LINK

Norberg-Hodge, Helena (1991). Ancient Futures: Learning from Ladakh. San Francisco: Sierra Club Books. LINK

Zen and the Art of Dissatisfaction – Part 4.

The Hairless Ape

Originally published on 15 February 2025 on Substack: https://substack.com/home/post/p-158762719

Let’s continue with the evolution of our own species. Around six million years ago, a group of apes found themselves living in a shrinking rainforest. Scientifically, these creatures belonged to the family known as great apes, or Hominidae, which today still includes humans, chimpanzees, gorillas, and orangutans. These apes, who had evolved to live primarily on fruit and sought refuge from predators by climbing trees, eventually found their environment transforming into much drier woodland savanna.

Forced to adapt, they began gathering new types of food, such as tubers and roots, rather than the fruits they once relied on. This shift required them to cover greater distances on the ground and use digging sticks to extract underground food sources. Identifying edible roots was no simple task—it demanded a new level of intelligence, as the only visible clues were the stems and leaves above ground.

Millions of years passed and these once rainforest-dwelling primates evolved into master survivors of the African savanna, where acquiring food required ingenuity and adaptability. About four million years after their exodus from the rainforest, they had already become almost entirely hairless, dark-skinned, and fully bipedal members of the genus Homo. The only significant body hair remained on the top of their heads, shielding them from the sun’s harsh rays.

Approximately two million years ago, a new species emerged—Homo erectus, the ”upright human.” Their survival and success on the East African savanna was unparalleled. Homo erectus was the first human species to resemble us in many ways. They stood between 145 and 185 cm tall and weighed between 40 and 70 kg. Unlike their ape ancestors, there was little size difference between males and females. Their near-complete hairlessness likely stemmed from four main reasons:

  1. Warmer nights meant body hair was no longer essential for insulation.

  2. The activation of the melanocortin-1 receptor darkened their skin, providing protection from harmful ultraviolet rays, eliminating the need for fur as a shield against sunburn.

  3. Most importantly, they developed an exceptional ability to sweat. Humans possess more sweat glands than any other animal, making perspiration a critical adaptation for savanna life.

  4. Lastly, hairlessness may have helped them avoid lice, fleas, and other parasites that plagued their hairy primate relatives.

According to Harvard professor Richard Wrangham (2009), Homo erectus used fire to cook food, keep warm at night, and ward off predators. This could partly explain their hairlessness and their more efficient and smaller digestive system compared to their ancestors. Even if they were not yet cooking food with fire, they were certainly already using external methods such as grinding, grating, chopping, pounding, and mashing their food with tools. Wrangham refers to Homo erectus as ”the cooking ape.”

Homo erectus was also the first of our ancestors to regularly eat meat. While it is uncertain how they acquired it, all evidence suggests they were either hunters or systematic scavengers. Even today, African hunter-gatherers observe the movements of vultures to locate their next meal (Liebenberg 2013). When vultures fly in a group toward a particular direction, they are heading for a carcass. If they are circling over a specific spot, the kill has already been found. In such cases, it is a race against time—those who arrive first get the best parts. Lions, for instance, only eat their fill before abandoning a carcass, making it possible for bold and hungry scavengers to drive them away with loud noise. The fortunate ones are rewarded with nutrient-rich bone marrow and sometimes even meat. Hyenas, however, are much more efficient and faster scavengers, arriving at kills within 30 minutes and leaving nothing behind. This urgency may have played a role in shaping humans’ long-distance running abilities—only the fastest could reach a carcass before the hyenas.

Systematic scavenging is unpredictable, since it depends on the hunting success of other predators. Even when a carcass is available, it requires an alert scavenger who happens to be nearby. Yet Homo erectus consumed meat regularly, leading many researchers to conclude that they were also active hunters. However, there is no surviving evidence of effective hunting weapons from their time. They crafted simple stone tools suited for cutting, slicing, and pounding, along with more refined tools that could function as knives or scrapers. It is likely they also created tools from biodegradable materials such as wood, bark, or grass, which have not survived to the present day.

Hunting Without Weapons

The question is: if Homo erectus lacked sophisticated hunting weapons, how did they obtain meat?

Nearly all significant differences between Homo erectus and modern chimpanzees relate to locomotion. While chimpanzees still spend most of their time climbing trees, with long arms, short legs, and prehensile toes suited for an arboreal lifestyle, Homo erectus had traded these adaptations for something else entirely.

David Carrier (1984), Dennis Bramble (Bramble & Carrier, 1983), and my friend and supporter, Harvard professor Daniel E. Lieberman (2013), have proposed that Homo erectus was an endurance runner, shaped by natural selection for persistence hunting. This hypothesis explains many of their unique physical traits, particularly those related to energy efficiency, skeletal structure, balance, and thermoregulation. Unlike other apes, human feet generate force with minimal energy expenditure. The Achilles tendon, crucial for running, is disproportionately large in humans and first appears in Homo erectus. Their foot arch and larger joint surfaces resemble those of modern humans, suggesting greater endurance and stress tolerance. Additionally, larger gluteal muscles and the nuchal ligament (which stabilises the head) allowed for better balance. Without the nuchal ligament, a runner’s head would wobble uncontrollably—similar to how a pig’s head bobs when it runs.

Homo erectus also had a leaner body optimised for heat dissipation, with sweating and hairlessness playing major roles in preventing overheating. More efficient cerebral blood circulation helped cool the brain while running under the African sun.

Lieberman and his colleagues have demonstrated that, given the right conditions, humans can outrun nearly any animal over long distances. Persistence hunting involves selecting a large, easily exhausted prey—such as an antelope or a giraffe—and chasing it at a steady pace. Quadrupedal animals must synchronise their breathing with their stride because their internal organs bounce with each step. This means they must periodically stop to pant and cool down. In contrast, a bipedal runner like Homo erectus could breathe independently of their stride. Moreover, furry quadrupeds expose much of their body to the sun, accelerating heat buildup. For Homo erectus, only the scalp and shoulders were directly exposed to sunlight.

As recently as the 1990s, some hunter-gatherer groups in Botswana still practiced persistence hunting, proving it to be an effective way to secure large amounts of meat. Only when big game became scarce did humans abandon this ancient method, which had once made all of us endurance runners.


Resources:

Bramble, D. M., & Carrier, D. R. (1983). Running and breathing in mammals. Science, 219, 251–256.

Carrier, D. R. (1984). The energetic paradox of human running and hominid evolution. Current Anthropology, 25, 483–495.

Ijäs, M. R. (2020). Fragments of the hunt: Persistence hunting approach to rock art. Hunter Gatherer Research, 6(3–4).

Ijäs, M. (2017). Fragments of the Hunt: Persistence Hunting, Tracking and Prehistoric Art. Helsinki: Aalto University.

Liebenberg, L. (2013). The origin of science. Cape Town: CyberTracker.

Lieberman, D. E. (2013). The Story of the Human Body: Evolution, Health, and Disease. New York: Pantheon Books.

Wrangham, R. (2009). Catching Fire: How Cooking Made Us Human. London: Profile Books.

Zen and the Art of Dissatisfaction – Part 3.

The Origin of Dissatisfaction

”He who is not content with what he has,
would not be content with what he would like to have.”
— Socrates

Originally published on 2 March 2025 on Substack: https://substack.com/inbox/post/158235271

Are our closest relatives, such as chimpanzees or bonobos, dissatisfied? Research indicates that they experience feelings of unfairness when human researchers reward one individual with a cucumber and another with a grape for completing the same task. Is dissatisfaction something that has always existed? What drives us to always desire more—consumer goods, exotic travels, romantic relationships, fancy clothes, flamboyant drinks—more of everything? We are like hungry ghosts, wanting everything, yet nothing quenches our thirst.

In principle, any stage in human history where cultural and technological evolution took a step toward greater complexity could be considered a potential origin of dissatisfaction. One such early step was taken around 70,000 years ago, marking the beginning of the Upper Palaeolithic. The culture of Middle Palaeolithic humans differed significantly from that of modern humans. Middle Palaeolithic people used hand axes similar to those that had been in use for hundreds of thousands of years. These people were biologically identical to us, yet this cultural ”contentment” with old ways feels foreign to us today.

However, we must approach such transitions cautiously. Although the shift to the Upper Palaeolithic is sometimes referred to as the ”Upper Palaeolithic explosion,” it was a slow process that took thousands of years and did not occur simultaneously in one place. Moreover, in this Substack series, we are discussing an internal dissatisfaction that other animals also seem to struggle with.

According to the Swedish geneticist, Nobel laureate Svante Pääbo, Neanderthals diverged from the same lineage as modern humans approximately 550,000–690,000 years ago. Earlier fossil-based estimates suggested that the split between modern humans and Neanderthals occurred around 250,000–300,000 years ago, while archaeological data estimates the separation at around 300,000 years.

Neanderthals inherited a similar method of making simple stone tools, and changes in tool evolution were relatively slow between 200,000 and 50,000 years ago. Meanwhile, modern humans in Africa gradually began engaging in extensive trade with other human groups. Archaeological evidence suggests that such behaviour was already occurring at least 80,000 years ago, though it is likely that it began even earlier, when modern humans had already existed for 200,000–300,000 years. The American anthropologist David Graeber and the British archaeologist David Wengrow remind us in their book The Dawn of Everything that Africa may have resembled something akin to J.R.R. Tolkien’s Middle-earth, populated by humans of various shapes and sizes.

The evidence for early interaction networks is scattered, but the tools used by early modern humans remained largely unchanged for long periods. Around 100,000 years ago, however, new types of tools and objects began to appear. For instance, very small and sophisticated arrowheads were made in southern Africa around 65,000–60,000 years ago, only to disappear from the archaeological record for some time before reappearing later.

Excavations reveal that Middle Palaeolithic people used the same tools and weapons for generations, which appear to have remained relatively unchanged. People did not move as frequently as they did later, nor do they seem to have had a rich symbolic culture involving body adornments or cave paintings.

By about 70,000 years ago, the transition to Upper Palaeolithic culture was well underway. It introduced cultural features that we still recognise as ”human.” However, there is no reason to assume that a major cognitive leap occurred at this point. British archaeologist Colin Renfrew coined the term sapient paradox for the puzzle of why behaviour we recognise as fully human appears so much later than anatomically modern humans themselves. The clearest archaeological evidence of the Upper Palaeolithic transition is the emergence of entirely new tools. Instead of heavy stone axes, modern humans began crafting refined stone blades that were sharp, lightweight, and required knowledge of the stone material. These blades were more portable and easier to attach to wooden shafts. Examples of this new technology include prismatic blades and sophisticated burins.

Some of the most recognisable Upper Palaeolithic achievements from Ice Age Europe include tools made from animal bones and tusks. Easily worked materials like bone were used to craft sewing needles, fishing hooks, harpoon tips, flutes, and small portable sculptures. The Upper Palaeolithic period also brought dietary changes. Previously, during the Middle Palaeolithic, humans primarily relied on large game, but Upper Palaeolithic humans expanded their diet to include snails, fish, shellfish, birds, smaller mammals, and terrestrial turtles.

Evidence of a plant-based diet during the Upper Palaeolithic has only emerged in recent years thanks to improved analysis methods. It now appears that humans consumed a variety of wild plants, herbs, tubers, roots, fungi, and nuts. These foods were processed by grinding, mashing, boiling, and roasting. Some researchers suggest that the Upper Palaeolithic era involved a broader participation of women and children in food gathering, though this may be a sexist assumption by male researchers. Studies of contemporary hunter-gatherer societies indicate that female hunting is quite common. Researchers from Seattle Pacific University, led by Abigail Anderson, found evidence that in 90% of the 50 analysed groups, women engaged in hunting. In over a third of these cases, women hunted all types of game, including large animals.

The Rise of Homo Non Satiatæ

The Upper Palaeolithic transition has been linked to advances in tools, diet, and cooperation, which in turn facilitated population growth and the rapid spread of humans across the Ice Age world, including Australia. It has also been suggested that this transition marked the end of earlier forms of cannibalism. While this is difficult to prove, it may have at least reduced such practices. Humans have occasionally engaged in cannibalism for various reasons, but in the Upper Palaeolithic they began burying their dead with respect and likely with ritual significance, as evidenced by grave goods found in burials.

Alongside these seemingly positive adaptations, something significant (for lack of a better expression: modern) appears to have occurred in the human mind. Some have speculated that language and symbolic thought took a leap forward at this time. While this evolution was likely gradual, the archaeological record gives the impression of a sudden transformation. Natural selection favours those who are most reproductively successful, and new social and technological skills likely facilitated this process. However, natural selection does not consider whether the reproducing organism is healthy or happy. Humans at the dawn of the Upper Palaeolithic were likely neither always healthy nor necessarily happy—and perhaps this still applies. With the beginning of the Upper Palaeolithic, our species may have taken a step toward dissatisfaction. Perhaps the last 70,000—or at least 50,000—years of our evolution could be playfully described as the rise of Homo Non Satiatæ, the dissatisfied human.

From this perspective, we could frame the discussion as follows: one distinguishing feature of Upper Palaeolithic humans, compared to their predecessors, may have been the emergence of psychological dissatisfaction. I do not claim that people were content and happy before this transition, nor that it was the result of a sudden shift. Rather, it was likely a long, tens-of-thousands-of-years-long process during which humans lived in diverse communities, experimented with different ways of organising their societies, and adapted as best they could.

This trait had many enriching dimensions. Dissatisfaction made humans curious travellers, constantly searching for better hunting grounds. It also made them possessive, as evidenced by the extinction of large predators, megafauna, and competing species worldwide. Dissatisfaction also drove innovation—nothing old seemed to serve its purpose anymore, creating an urgent need for new tools, clothing, customs, and weapons.



Resources

Anderson, A., Chilczuk, S., Nelson, K., Ruther, R., & Wall-Scheffler, C. (2023). The Myth of Man the Hunter: Women’s Contribution to the Hunt Across Ethnographic Contexts. PLoS ONE, 18(6), e0287101.

Graeber, D., & Wengrow, D. (2021). The Dawn of Everything: A New History of Humanity. Farrar, Straus and Giroux.

Pääbo, S., et al. (1997). Neandertal DNA Sequences and the Origin of Modern Humans. Cell, 90(1), 19-30.

Renfrew, C. (2008). Neuroscience, Evolution and the Sapient Paradox: The Factuality of Value and of the Sacred. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1499), 2041–2047.