Zen and the Art of Dissatisfaction – Part 27

From Red Envelopes to Smart Finance

In recent years China has accelerated the intertwining of state‑led surveillance, artificial‑intelligence‑driven finance and ubiquitous digital platforms. The country’s 2017 cyber‑security law introduced harsher penalties for the unlawful collection and sale of personal data, raising the perennial question of how much privacy is appropriate in an era of pervasive digitisation. This post examines the legislative backdrop, the role of pioneering technologists such as Kai‑Fu Lee, the meteoric growth of platforms like WeChat, and the emergence of AI‑powered financial services such as Smart Finance. It also reflects on the broader societal implications of a surveillance‑centric model that is increasingly being mirrored in Western contexts.

Originally published in Substack: https://substack.com/home/post/p-172666849

China began enforcing a new cyber‑security law in 2017. The legislation added tougher punishments for the illegal gathering or sale of user data. The central dilemma remains: how much privacy is the right amount in the age of digitalisation? There is no definitive answer to questions about the optimal level of social monitoring needed to balance convenience and safety, nor about the degree of anonymity citizens should enjoy when attending a theatre, dining in a restaurant, or travelling on the metro. Even if we trust current authorities, are we prepared to hand the tools for classification and surveillance over to future rulers?

Kai‑Fu Lee’s Perspective on China’s Data Openness

According to Taiwanese AI pioneer Kai‑Fu Lee (2018), China’s relative openness in collecting data in public spaces gives it a head start in deploying observation‑based AI algorithms. Lee’s background lends weight to his forecasts. His 1988 doctoral dissertation was a groundbreaking work on speech recognition, and from 1990 onward he worked at Apple, Microsoft and Google before becoming a venture‑capital investor in 2009. This openness (i.e., the lack of privacy protection) accelerates the digitalisation of urban environments and opens the door to new OMO (online‑merge‑offline) applications in retail, security and transport. Pushing AI into these sectors requires more than cameras and data; creating OMO environments in hospitals, cars and kitchens demands a diverse array of sensor‑enabled hardware to synchronise the physical and digital worlds.

One of China’s most successful companies in recent years has been Tencent, which became Asia’s most valuable firm in 2016. Its secret sauce is the messaging app WeChat, launched in January 2011 when Tencent already owned two other dominant social‑media platforms: the QQ instant‑messaging service and the Qzone social network, each boasting hundreds of millions of users.

WeChat initially allowed users to send photos, short voice recordings and text in Chinese characters, and it was built specifically for smartphones. As the user base grew, its functionality expanded. By 2013 WeChat had 300 million users; by 2019 the figure had risen to 1.15 billion monthly active users. It introduced video calls and conference calls several years before the American WhatsApp (today owned by Meta). The app’s success rests on its “app‑within‑an‑app” principle, which lets businesses create their own mini‑apps inside WeChat, effectively their own dedicated applications. Many firms have abandoned standalone apps and now operate entirely within the WeChat ecosystem.

Over the years, WeChat has captured users’ digital lives beyond the smartphone, becoming a digital “remote control” for everyday transactions: paying in restaurants, ordering taxis, renting city bikes, managing investments, booking medical appointments and even ordering prescription medication to the doorstep.

For Chinese New Year in 2014, WeChat introduced digital red envelopes, cash‑filled gifts roughly analogous to Western Christmas presents. Users could link their bank accounts to WeChat Pay and send a digital red envelope, with the funds landing directly in the recipient’s WeChat wallet. The campaign prompted five million users to link a bank account to WeChat Pay.

Competition from Alipay and the Rise of Cashless Payments

Another Chinese tech titan, Jack Ma, founder of Alibaba, launched the digital payment system Alipay back in 2004. Both Alipay and WeChat enabled users to request payments via simple, printable QR codes as early as 2016. The shift has made the smartphone the primary payment instrument in China, to the extent that homeless people now beg by displaying QR codes. In several Chinese cities, cash has all but disappeared.

WeChat and Alipay closely monitor users’ spending habits, building detailed profiles of consumer behaviour. China has largely bypassed the transitional card‑payment stage: millions moved straight from cash to mobile payments without ever owning a credit card. While both platforms allow users to withdraw cash from linked bank accounts, their core services do not extend credit.

Lee (2018) notes the emergence of a service called Smart Finance, an AI‑powered application that relies solely on algorithms to grant millions of micro‑loans. The algorithm requires only access to the borrower’s phone data, constructing a consumption profile from seemingly trivial signals—such as typing speed, battery level and birthdate—to predict repayment likelihood.

Smart Finance’s AI does not merely assess the amount of money in a WeChat wallet or bank statements; it harvests data points that appear irrelevant to humans. Using these algorithmically derived credit indicators, the system achieves finer granularity than traditional scoring methods. Although the opaque nature of the algorithm prevents public scrutiny, its unconventional metrics have proven highly profitable.
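To make the mechanism concrete, here is a minimal sketch of how a phone‑signal credit model of this kind might be built. The feature names, weights and data below are entirely hypothetical and synthetic; Smart Finance’s actual model is proprietary and far more elaborate.

```python
# Hedged sketch: how a phone-signal credit model *might* be built.
# Feature names and data are hypothetical; the real model is proprietary.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "phone exhaust" features (assumptions, not confirmed signals):
X = np.column_stack([
    rng.normal(200, 50, n),    # typing speed (characters per minute)
    rng.uniform(0, 100, n),    # battery level at application time (%)
    rng.integers(18, 70, n),   # applicant age derived from birthdate
    rng.poisson(30, n),        # app opens per day
])
# Synthetic repayment labels loosely tied to the features.
logits = 0.004 * X[:, 0] + 0.01 * X[:, 1] - 0.02 * X[:, 2] + 0.01 * X[:, 3] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In practice such lenders would more likely use gradient‑boosted trees or deep networks over thousands of signals, but the principle is the same: learn a repayment probability from behavioural exhaust rather than from a traditional credit file.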

As data volumes swell, these algorithms become ever more refined, allowing firms to extend credit to groups traditionally overlooked by banks—young people, migrant workers, and others. However, the lack of transparency means borrowers cannot improve their scores because the criteria remain hidden, raising fairness concerns.

Surveillance Society: Social Credit and Ethnic Monitoring

Lee reminds us that AI algorithms are reshaping society. From a Western viewpoint, contemporary China resembles a surveillance state where continuous monitoring and a social credit system are routine. Traffic violations can be punished through facial‑recognition algorithms, with fines deducted directly from a user’s WeChat account. WeChat itself tracks users’ movements, language and interactions, acting as a central hub for monitoring social behaviour.

A Guardian article by Johana Bhuiyan (2021) reported that Huawei filed a patent in July 2018 for technology capable of distinguishing whether a person belongs to the Han majority or the persecuted Uyghur minority. The state‑contracted Chinese firm Hikvision has developed similar facial‑recognition capabilities for use in re‑education camps and at the entrances of nearly a thousand mosques. China denies allegations of torture and sexual violence against Uyghurs; estimates suggest roughly one million detainees in these camps.

AI‑enabled surveillance is commonplace in China and is gaining traction elsewhere. Amazon offers its facial‑recognition service Rekognition to various clients, although in June 2020, amid protests against police racism and violence, Amazon suspended police use of the service. Critics highlighted Rekognition’s difficulty in correctly identifying the gender of darker‑skinned individuals, a claim Amazon disputes.

Google’s automatic image‑labelling in Google Photos also faced backlash after software engineer Jacky Alciné discovered in 2015 that the system mislabelled photos of his African‑American friends as “gorillas.” After public outcry, Google removed the offending categories (gorilla, chimpanzee, ape) from its taxonomy (Vincent 2018).

Limits of Current AI and Future Outlook

Present‑day AI algorithms primarily excel at inference tasks and object detection. General artificial intelligence—capable of autonomous, creative reasoning—remains a distant goal. Nonetheless, we are only beginning to grasp the possibilities and risks of AI‑driven algorithms.

Is the Chinese surveillance model something citizens truly reject? Within China, the social credit system may be viewed positively by ordinary citizens who can boost their scores by paying bills promptly, volunteering and obeying traffic rules. In Europe, a quieter acceptance of similar profiling is emerging: we are already classified—often without our knowledge—through the data we generate while browsing the web. This silent consent fuels targeted advertising for insurance, lingerie, holidays, television programmes and even political persuasion. As long as we are unwilling to pay for the privilege of using social‑media platforms, those platforms will continue exploiting our data as they see fit.

Summary

China’s 2017 cyber‑security law set the stage for an expansive data‑collection regime that underpins a sophisticated surveillance economy. Visionaries like Kai‑Fu Lee highlight how openness in public‑space data fuels AI development, while corporate giants such as Tencent and Alibaba have turned messaging apps into all‑purpose digital wallets and service hubs. AI‑driven financial products like Smart Finance illustrate both the power and opacity of algorithmic credit scoring. Simultaneously, state‑backed facial‑recognition technologies target ethnic minorities, and the social‑credit system normalises continuous monitoring of everyday behaviour. These trends echo beyond China, with Western firms and governments experimenting with comparable surveillance tools. Understanding the interplay between legislation, corporate strategy and AI is essential for navigating the privacy challenges of our increasingly digitised world.


References

Bhuiyan, J. (2021). Huawei files patent to identify Uyghurs. The Guardian.
Lee, K. F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Harper Business.
Vincent, J. (2018). Google removes offensive labels from image‑search results. BBC.

Zen and the Art of Dissatisfaction – Part 24

How Algorithms and Automation Redefine Work and Society

The concept of work in Western societies has undergone dramatic transformations, yet in some ways it has remained surprisingly static. Work, and the money earned from it, also remains one of the leading causes of dissatisfaction: there is usually too much work, and the compensation never seems to be quite enough. While the Industrial Revolution replaced manual labour with machinery, the age of Artificial Intelligence (AI) threatens to disrupt not only blue‑collar jobs but also highly skilled professions. This post traces the historical shifts in the nature of work, from community‑driven agricultural labour to the rise of mass production, the algorithmic revolution, and the looming spectre of general artificial intelligence. Along the way, it examines the ethical, economic, and social implications of automation, surveillance, and machine decision‑making, raising critical questions about the place of humans in a world increasingly run by machines.

Originally published in Substack: https://substack.com/home/post/p-170864875

The Western concept of work has hardly changed in essence: half the population still shuffles papers, projecting an image of busyness. The Industrial Revolution transformed the value of individual human skill, rendering many artisanal professions obsolete. A handcrafted product became far more expensive than its mass‑produced equivalent. The shift also eroded the communal nature of work. Rural villagers once gathered for annual harvest festivities, finding strength in togetherness. The advent of threshing machines, tractors and milking machines eliminated the need for such collective efforts.

In his wonderful and still very important film Modern Times (1936), Charlie Chaplin depicts industrial society’s alienating coexistence: even when workers are physically together, they are often each other’s competitors. In a factory, everyone knows that anyone can be replaced — if not by another worker, then by a machine.

In the early 1940s, nearly 40% of the American workforce was employed in manufacturing; today, production facilities employ only about 8%. While agricultural machinery displaced many farmworkers, those machines still require transportation, repairs, and eventual replacement — generating jobs in other, less specialised sectors.

The Algorithmic Disruption

Artificial intelligence algorithms have already displaced workers in multiple industries, but the most significant disruption is still to come. Previously, jobs were lost in sectors requiring minimal training, and displaced workers could move into similar roles elsewhere. AI will increasingly target professions demanding long academic training, such as lawyers and doctors. Algorithms can assess legal precedents for future court cases more efficiently than humans, although such capabilities raise profound ethical issues.

One famous Israeli study suggested that judges imposed harsher sentences before lunch than after (Lee, 2018). Although later challenged — since case order was pre-arranged by severity — it remains widely cited to argue for AI’s supposed superiority in legal decision-making.

Few domains reveal human irrationality as starkly as traffic. People make poor decisions when tired, angry, intoxicated, or distracted while driving. In 2016, road traffic accidents claimed 1.35 million lives worldwide. In Finland in 2017, 238 people died and 409 were seriously injured in traffic; there were 4,432 accidents involving personal injury.

The hope of the AI industry is that self-driving cars will vastly improve road safety. However, fully autonomous vehicles remain distant, partly because they require a stable and predictable environment — something rare in the real world. Like all AI systems, they base predictions on past events, which limits their adaptability in chaotic, unpredictable situations.

Four Waves of Machine-Driven Change

The impact of machines on human work can be viewed as four distinct waves:

  1. The Industrial Revolution — people moved from rural to urban areas for factory jobs.
  2. The Algorithmic Wave — AI has increased efficiency in many industries, with tech giants like Amazon, Apple, Alphabet, Microsoft, Huawei, Meta Platforms, Alibaba, IBM, Tencent, and OpenAI leading the way. In 2020 their combined revenues were just under USD 1.5 trillion; today they are approaching USD 2 trillion, with the leader, Amazon, generating roughly USD 630 billion a year.
  3. The Sensorimotor Machine Era — autonomous cars, drones, and increasingly automated factories threaten remaining manual jobs.
  4. The Age of Artificial General Intelligence (AGI) — as defined by Nick Bostrom (2015), machines could one day surpass human intelligence entirely.

The rise of AI-driven surveillance evokes George Orwell’s Nineteen Eighty-Four (1949), in which people live under constant watch. Modern citizens voluntarily buy devices that track them, competing for public attention online. Privacy debates date back to the introduction of the Kodak camera in 1888 and intensified in the 1960s with computerised tax records. Today, exponentially growing data threatens individual privacy in unprecedented ways.

AI also inherits human prejudices. Studies show that people with African-American names face discrimination from algorithms, and biased data can lead to unequal treatment based on ethnicity, gender, or geography — reinforcing, rather than eliminating, inequality.

Conclusion

From the threshing machine to the neural network, every technological leap has reshaped the world of work, altering not only what we do but how we define ourselves. The coming decades may bring the final convergence of machine intelligence and autonomy, challenging the very premise of human indispensability. The question is not whether AI will change our lives, but how — and whether we will have the foresight to ensure that these changes serve humanity’s best interests rather than eroding them.


References

Bostrom, N. (2015). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Lee, D. (2018). Do you get fairer sentences after lunch? BBC Future.
Orwell, G. (1999). Nineteen eighty-four. Penguin. (Original work published 1949)

Zen and the Art of Dissatisfaction – Part 22

Big Data, Deep Context

In this post, we explore what artificial intelligence (AI) algorithms (and, more recently, large language models) are, how they learn, and their growing impact on sectors such as medicine, marketing and digital infrastructure. We look at some prominent real‑world examples from the recent past—IBM’s Watson, Google Flu Trends, and the Hadoop ecosystem—and discuss how human involvement remains vital even as machine learning accelerates. Finally, we reflect on both the promise and the risks of entrusting complex decision‑making to algorithms.

Originally published in Substack: https://substack.com/inbox/post/168617753

Artificial intelligence algorithms learn by ingesting training data; how that data is acquired and labelled marks the key difference between the various types of AI algorithms. Once trained, an algorithm performs new tasks, using what it has learned as the basis for its future decisions.

AI in Healthcare: From Watson to Robot Doctors

Some algorithms are capable of learning autonomously, continuously integrating new information to adjust and refine their future actions. Others require a programmer’s intervention from time to time. AI algorithms fall into three main categories: supervised learning, unsupervised learning and reinforcement learning. The primary differences between these approaches lie in how they are trained and how they operate.

Algorithms learn to identify patterns in data streams and make assumptions about correct and incorrect choices, generally becoming more effective and accurate the more data they receive. Deep learning, based on multi‑layered artificial neural networks that learn to distinguish right from wrong answers, takes this furthest, enabling better and faster conclusions as the data grows. Deep learning is widely used in speech, image and text recognition and processing.
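As a minimal illustration of supervised learning, the sketch below trains a small neural network on labelled handwritten digits using scikit‑learn; it is a toy example under its own assumptions, not any of the systems discussed in this post.

```python
# Minimal supervised-learning sketch: a small neural network learns
# digit recognition from labelled examples (illustration only).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 pixel images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 units; real "deep" models stack many more layers.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                    # supervised: learns from labelled data
print("Accuracy on unseen digits:", clf.score(X_test, y_test))
```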

Modern AI and machine learning algorithms have empowered practitioners to notice things they might otherwise have missed. Herbert Chase, a professor of clinical medicine at Columbia University in New York, observed that doctors sometimes have to rely on luck to uncover underlying issues in a patient’s symptoms. Chase served as a medical adviser to IBM during the development of Watson, the AI diagnostic assistant.

IBM’s concept involved a doctor inputting, for example, three patient‑described symptoms into Watson; the diagnostic assistant would then suggest a list of possible diagnoses, ranked from most to least likely. Despite the impressive hype surrounding Watson, it proved inadequate at diagnosing actual patients. IBM therefore announced that Watson would be phased out by the end of 2023, and its clients were encouraged to transition to its newer services.

One genuine advantage of AI lies in the absence of a dopamine response. A human doctor, operating via biological algorithms, experiences a rush of dopamine when they arrive at what feels like a correct diagnosis—but that diagnosis can be wrong. When doubts arise, the dopamine fades and frustration sets in. In discouragement, the doctor may choose a plausible but uncertain diagnosis and send the patient home.

An AI‑algorithm‑based “robot‑doctor” does not experience dopamine. All of its hypotheses are treated equally. A robot‑doctor would be just as enthused about a novel idea as about its billionth suggestion. It is likely that doctors will initially work alongside AI‑based robot doctors. The human doctor can review AI‑generated possibilities and make their own judgement. But how long will it be before human doctors become obsolete?

AI in Action: Data, Marketing, and Everyday Decisions

Currently, AI algorithms trained on large datasets drive actions and decision‑making across multiple fields. Robot‑doctors assisting human physicians and the self‑driving cars under development by Google or Tesla are two visible examples of near‑future possibilities—assuming the corporate marketing stays honest.

AI continues to evolve. Targeted online marketing, driven by social media data, is an example of a seemingly trivial yet powerful application that contributes to algorithmic improvement. Users may tolerate mismatched adverts on Facebook, but may become upset if a robot‑doctor recommends an incorrect, potentially expensive or risky test. The outcome is all about data—its quantity, how it is evaluated and whether quantity outweighs quality.

According to MIT economists Erik Brynjolfsson and Andrew McAfee (2014), in the 1990s only about one‑fifth of a company’s activities left a digital trace. Today, almost all corporate activities are digitised, and companies have begun to produce reports in language intelligible to algorithms. It is now more important that a company’s operations are understood by AI algorithms than by its human employees.

Nevertheless, vast amounts of data are still analysed using tools built by humans. Facebook is perhaps the most well‑known example of how our personal data is structured, collected, analysed and used to influence and manipulate opinions and behaviour.

Big Data Infrastructure

In a 2015 interview with Steve Lohr, Jeff Hammerbacher described how he helped introduce Hadoop at Facebook in 2008 to manage the ever‑growing volume of data. Hadoop, developed by Mike Cafarella and Doug Cutting, is an open‑source variant of Google’s own distributed computing system. Named after a yellow toy elephant belonging to Cutting’s child, Hadoop could initially process two terabits of data in two days; two years later it could perform the same task in mere minutes.

At Facebook, Hammerbacher and his team constructed Hive, an application running on Hadoop. Now available as Apache Hive, it allows users without a computer science degree to query large processed datasets. During the writing of this post, generative AI applications such as ChatGPT (by OpenAI), Claude (Anthropic), Gemini (Google DeepMind), Mistral & Mixtral (Mistral AI), and LLaMA (Meta) have become available for casual users on ordinary computers.
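As a rough illustration of what Hive offers, the sketch below issues a SQL‑like query from Python using the third‑party PyHive library; the host, table and column names are invented for illustration, and a real query would of course require a running Hive cluster.

```python
# Hedged sketch: querying a Hive table from Python via PyHive.
# Host, table and column names are hypothetical; adjust for a real cluster.
from pyhive import hive

conn = hive.Connection(host="hive-server.example.com", port=10000, username="analyst")
cursor = conn.cursor()

# Hive turns this SQL-like query into distributed jobs over the cluster,
# so the analyst never has to write low-level MapReduce code by hand.
cursor.execute("""
    SELECT country, COUNT(*) AS daily_active_users
    FROM user_activity
    WHERE activity_date = '2015-06-01'
    GROUP BY country
    ORDER BY daily_active_users DESC
    LIMIT 10
""")
for country, dau in cursor.fetchall():
    print(country, dau)
```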

A widely cited example of public‑benefit predictive data analysis is Google Flu Trends (GFT). Launched in 2008, GFT aimed to predict flu outbreaks faster than official healthcare systems by analysing popular Google search terms related to flu.

GFT successfully detected the H1N1 virus before official bodies in 2009, marking a major achievement. However, in the winter of 2012–2013, media coverage of flu induced a massive spike in related searches, causing GFT’s estimates to run almost twice as high as the real figures. The Science article “The Parable of Google Flu” (Lazer et al., 2014) accused Google of “big‑data hubris”, although it conceded that GFT was never intended as a standalone forecasting tool, but rather as a supplementary warning signal.
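The kind of correlation GFT exploited can be sketched with a toy regression: predict weekly flu incidence from the volume of a few flu‑related search terms. The data below is synthetic; Google’s real model screened tens of millions of candidate queries and is not reproduced here.

```python
# Toy sketch of Google-Flu-Trends-style "nowcasting": fit flu incidence
# as a linear function of search-term volumes. Data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 150

# Synthetic weekly volumes for a few flu-related search terms.
searches = rng.poisson(lam=[120, 80, 40], size=(weeks, 3)).astype(float)
# Synthetic "true" flu incidence loosely driven by the searches plus noise.
flu_cases = searches @ np.array([2.0, 1.5, 3.0]) + rng.normal(0, 40, weeks)

# Train on the first 100 weeks, then "nowcast" the remaining weeks.
model = LinearRegression().fit(searches[:100], flu_cases[:100])
predicted = model.predict(searches[100:])
print("Mean absolute error:", np.mean(np.abs(predicted - flu_cases[100:])))

# A media-driven spike in searches (as in winter 2012-13) breaks the link
# between searching and actual illness, inflating the estimates.
spiked = searches[100:] * 1.9
print("Estimates after a search spike:", model.predict(spiked)[:3].round())
```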

Google’s miscalculation lay in its failure to interpret context. Steve Lohr (2015) emphasises that context involves understanding associations, a shift from raw data to meaningful information. IBM’s Watson was touted as capable of such contextual understanding, linking words to their appropriate contexts.

Watson: From TV Champion to Clinical Tool, and Sold for Scraps!

David Ferrucci, a leading AI researcher at IBM, headed the DeepQA team responsible for Watson. Named after IBM’s founder Thomas J. Watson, the system gained prominence after winning $1 million on Jeopardy! in 2011, defeating champions Brad Rutter and Ken Jennings.

Jennifer Chu‑Carroll, one of Watson’s Jeopardy! coaches, told Steve Lohr (2015) that Watson sometimes made comical errors. When asked “Who was the first female astronaut?”, Watson repeatedly answered “Wonder Woman,” failing to distinguish between fiction and reality.

Ken Jennings reflected that:

“Just as manufacturing jobs were removed in the 20th century by assembly‑line robots, Brad and I were among the first knowledge‑industry workers laid off by the new generation of ‘thinking’ machines… The Jeopardy! contestant profession may be the first Watson‑displaced profession, but I’m sure it won’t be the last.”

In February 2013, IBM announced that Watson’s first commercial application would focus on lung cancer treatment and other medical diagnoses—a real‑world “Dr Watson”—with 90% of oncology nurses reportedly following its recommendations at the time. The venture ultimately collapsed under the weight of unmet expectations and financial losses. In January 2022, IBM quietly sold the core assets of Watson Health to private equity firm Francisco Partners—reportedly for about $1 billion, a fraction of the estimated $4 billion it had invested—effectively signalling the death knell of its healthcare ambitions. The sale marked the end of Watson’s chapter as a medical innovator; the remaining assets were later rebranded under the name Merative, a standalone company focusing on data and analytics rather than AI‑powered diagnosis. Slate described the move as “sold for scraps,” characterising the downfall as a cautionary tale of over‑hyped technology failing to deliver on bold promises in complex fields like oncology.

Conclusion

Artificial intelligence algorithms are evolving rapidly, and while they offer significant benefits in fields like medicine, marketing, and data analysis, they also bring challenges. Data is not neutral: volume must be balanced with quality and contextual understanding. Tools such as Watson, Hadoop and Google Flu Trends underscore that human oversight remains indispensable. Ultimately, AI should augment human decision‑making rather than replace it—at least for now.


References

Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.

Ferrucci, D. A., Brown, E., Chu‑Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., … Welty, C. (2010). Building Watson: An overview of the DeepQA project. AI Magazine, 31(3), 59–79.

Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google Flu: traps in big data analysis. Science, 343(6176), 1203–1205.

Lohr, S. (2015). Data‑ism. HarperBusiness.

Kelly, J. E., III, & Hamm, S. (2013). Smart machines: IBM’s Watson and the era of cognitive computing. Columbia Business School Publishing.