Zen and the Art of Dissatisfaction – Part 33

From Poverty to Productivity

Across the world, economists, sociologists and policymakers have long debated whether providing people with an unconditional basic income could help lift them out of poverty. Despite numerous pilot projects, there are relatively few long-term studies showing the large-scale social and health impacts of such measures. One striking exception, highlighted by the Dutch historian Rutger Bregman, provides rare empirical evidence of how a sudden, guaranteed flow of money can transform an entire community — not just economically, but psychologically and socially.

In 1997, in the state of North Carolina, the Eastern Band of Cherokee people opened the Harrah’s Cherokee Casino Resort. By 2010, the casino’s annual revenues had reached around 400 million USD, where they have remained relatively stable ever since. The income was used to build a new school, hospital and fire station — but the most significant portion of the profits went directly to the tribe’s members, about 8,000 in total.

The Findings: Money Really Did Change Everything

By 2001, the funds from the casino already accounted for roughly 25–33 per cent of household income for many families. These payments acted, in effect, as an unconditional basic income.

What made this case extraordinary was that, purely by coincidence, a research group led by psychiatrist Jane Costello at Duke University had been tracking the mental health of young people in the area since 1991. This provided a unique opportunity to compare the same community before and after the introduction of this new source of income.

Costello’s long-term data revealed that children who had grown up in poverty were far more likely to suffer from behavioural problems than their better-off peers. Yet after the casino opened — and the Cherokee families’ financial situation improved — behavioural problems among children lifted out of poverty declined by up to 40 per cent, reaching levels comparable to those of children from non-poor households.

The benefits went beyond behaviour. Youth crime, alcohol consumption and drug use all decreased, while school performance improved significantly. Ten years later, researchers found that the earlier a child had been lifted out of poverty, the better their mental health as a teenager.

Bregman (2018) uses this case to make a clear point: poverty is not caused by laziness, stupidity or lack of discipline. It is caused by not having enough money. When poor families finally have the financial means to meet their basic needs, they frequently become more productive citizens and better parents.

In his words, “Poor people don’t make stupid decisions because they are stupid, but because they live in a context where anyone would make stupid decisions.” Scarcity — whether of time or money — narrows focus and drains cognitive resources, leading to short-sighted, survival-driven choices. And as Bregman puts it poignantly:

“There is one crucial difference between the busy and the poor: you can take a holiday from busyness, but you can’t take a holiday from poverty.”

How Poverty Shapes the Developing Brain

The deeper roots of these findings lie in how poverty and stress affect brain development and emotional regulation. The Canadian physician and trauma expert Gábor Maté (2018) explains how adverse childhood experiences, the events tallied in so-called ACE scores, are far more common among children raised in poverty. Such children face a higher risk of being exposed to violence or neglect, or of witnessing domestic conflict in their homes and neighbourhoods.

Chronic stress, insecurity and emotional unavailability of caregivers can leave lasting marks on the developing brain. The orbitofrontal cortex — located behind the eyes and crucial for interpreting non-verbal emotional cues such as tone, facial expressions and pupil size — plays a vital role in social bonding and empathy. If parents are emotionally detached due to stress, trauma or substance use, this brain region may develop abnormally.

Maté describes how infants depend on minute non-verbal signals — changes in the caregiver’s pupils or micro-expressions — to determine whether they are safe and loved. Smiling faces and dilated pupils signal joy and security, whereas flat or constricted expressions convey threat or absence. These signals shape how a child’s emotional circuits wire themselves for life.

When children grow up surrounded by tension or neglect, they may turn instead to peers for validation. Yet peer-based attachment, as Maté notes, often fosters riskier behaviour: substance use, early pregnancy, and susceptibility to peer pressure. Such patterns are not signs of inherent cruelty or weakness, but rather of emotional immaturity born of unmet attachment needs.

Not Just a Poverty Problem: The Role of Emotional Availability

Interestingly, these developmental challenges are not confined to low-income families. Children from wealthy but emotionally absent households often face similar struggles. Parents who are chronically busy or glued to their smartphones may be physically present yet emotionally unavailable. The result can be comparable levels of stress and insecurity in their children.

Thus, whether a parent is financially poor or simply time-poor, the emotional outcome for the child can be strikingly similar. In both cases, high ACE scores predict poorer mental and physical health, lower educational attainment, and reduced social mobility.

While Finland is often praised for its high social mobility, countries like the United States show a much stronger intergenerational persistence of poverty. In rigidly stratified societies, the emotional and economic consequences of childhood disadvantage are far harder to escape.

Towards a More Humane Future: Basic Income and the AI Revolution

As artificial intelligence reshapes industries and redefines the meaning of work, society faces a profound question: how do we ensure everyone has the means — and the mental space — to live well?

If parents could earn their income doing the work they truly value, rather than chasing pay cheques for survival, they would likely become more productive, more fulfilled, and more emotionally attuned to their children. In turn, those children would grow into healthier, happier adults, capable of sustaining positive cycles of wellbeing and productivity.

Such an outcome would not only enhance individual happiness but would also reduce public expenditure on health care, policing and welfare. Investing in people’s emotional and economic stability yields returns that compound across generations. A universal basic income (UBI), far from being utopian, could therefore represent one of the wisest and most humane investments a modern society could make.

Conclusion

The story of the Eastern Band of Cherokee people and the Harrah’s Cherokee Casino stands as powerful evidence that unconditional income can transform lives — not through moral exhortation, but through simple material security. Poverty, as Bregman reminds us, is not a character flaw; it is a cash-flow problem. And as Maté shows, the effects of that scarcity extend deep into the wiring of the human brain. When financial stress eases, parents can connect, children can thrive, and communities can flourish. In an age of automation and abundance, perhaps the greatest challenge is no longer how to produce wealth — but how to distribute it in ways that allow everyone the freedom to be fully human.


References

Bregman, R. (2018). Utopia for Realists: The Case for a Universal Basic Income, Open Borders, and a 15-Hour Workweek. Bloomsbury.
Maté, G. (2018). In the Realm of Hungry Ghosts: Close Encounters with Addiction. North Atlantic Books.

Zen and the Art of Dissatisfaction – Part 26

Unrelenting Battle for AI Supremacy

In today’s fast-evolving digital landscape, the titanic technology corporations are locked in a merciless struggle for AI dominance. Their competitive advantage is fuelled by the ability to access vast quantities of data. Yet this race holds profound implications for privacy, ethics, and the overlooked human labour that quietly powers it.

Originally published on Substack: https://substack.com/home/post/p-172413535

Large technology conglomerates are engaged in a cutthroat contest for AI supremacy, a competition shaped in large part by how freely data can be gathered and used. Chinese rivals, operating where the flow of data faces fewer restrictions, may be narrowing the gap. In Western nations, by contrast, personal data is still, at least for now, treated as the property of the individual, and its use requires that person's awareness and consent. Nevertheless, people freely hand over their data (opinions, consumption habits, images, location) when signing up for platforms or interacting online. The more freely companies can exploit this user data, the faster their AI systems learn. Machine learning is often applauded because it promises better services and more accurately targeted advertisements.

Hidden Human Labour

Yet behind these learning systems are human workers, so-called micro-workers, who label and sort the data on which AI algorithms are trained. Often subcontracted by the tech giants, they are paid meagrely, exposed to humanity's darkest content, and bound to keep what they see secret. In principle, anyone can post almost anything on social media. Platforms may block or “lock” content that violates their policies, only for the original poster to appeal, rerouting the content to micro-workers for review.

These shadow workers toil from home, performing tasks such as identifying forbidden sexual content or violence, or categorising products for companies like Walmart and Amazon. For example, they may have to decide whether two similar items are the same product, or retag products into different categories. Despite the rise of advanced AI, these micro-tasks remain foundational, and they are paid only a few cents apiece.

The relentless gathering of data is crucial for deep-learning AI systems. In the United States, the tension between user privacy and corporate surveillance remains unresolved, a conflict brought into sharp focus by the Facebook–Cambridge Analytica scandal. In autumn 2021, Frances Haugen, a data scientist and whistleblower, exposed how Facebook prioritised maximising user time on the platform over public safety.

Meanwhile, the roots of persuasive design trace back to Stanford University's Persuasive Technology Lab (now the Behavior Design Lab), founded by B. J. Fogg, where the techniques for hooking and retaining users, regardless of the consequences, were born. At face value, social media seems benign: it connects people, facilitates the exchange of ideas, and makes second-hand sales easy. Yet beneath the surface lie algorithms designed to keep users engaged, often by feeding them content tailored to their interests. The more the platforms learn, the more precisely they serve users exactly what they want, drawing them deeper into addictive cycles.

Psychologists writing in PNAS found that algorithms, working from nothing more than a person's likes, could know users better than even their closest friends. About 90 likes enabled better personality predictions than an average friend, while 270 likes made the AI more accurate than a spouse.

The Cambridge Analytica scandal revealed how personal data can be weaponised to influence political outcomes in events like Brexit and the 2016 US Presidential Election. All that was needed was to identify and target individuals with undecided votes based on their location and psychological profiles.

Frances Haugen's whistleblowing further confirmed that Facebook exacerbates political hostility and amplifies authoritarian messaging, especially in countries such as Brazil, Hungary, the Philippines, India, Sri Lanka, Myanmar, and the USA.

As critics note, these platforms were never intended to serve as central political channels; they were optimised to maximise engagement and advertising revenue. A research group led by Laura Edelson found that posts containing misinformation received roughly six times more engagement than posts from trustworthy sources such as CNN or the World Health Organization.

In theory, platforms could offer news feeds filled exclusively with content that made users feel confident, loved and safe, but such feeds do not hold attention long enough to be profitable. Instead, platforms profit more from cultivating anxiety, insecurity and outrage. The algorithm knows us so deeply that we often don't even realise when we're entirely consumed by our feelings, fighting unseen ideological battles. Hence, ad-based revenue models prove extremely harmful. Providers could instead charge a few euros a month, but the real prize is harvesting user data for long-term strategic advantage.

Conclusion

The race for AI supremacy is not just a competition of algorithms; it is a battle over data, attention, design, and ethics. The tech giants are playing on our sense of dissatisfaction, and we lack the psychological tools to resist it. While they vie for the edge, a hidden workforce labours in obscurity, and persuasive systems steer human behaviour toward addiction and division. Awareness, regulation, and more ethical models, potentially subscription-based or artist-friendly, are needed to reshape the future of AI for human benefit.


References

B. J. Fogg. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/B._J._Fogg
Stanford Behavior Design Lab. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Stanford_Behavior_Design_Lab
Captology. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Captology
Frances Haugen. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Frances_Haugen
2021 Facebook leak. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/2021_Facebook_leak