Zen and the Art of Dissatisfaction – Part 27

From Red Envelopes to Smart Finance

In recent years China has accelerated the intertwining of state‑led surveillance, artificial‑intelligence‑driven finance and ubiquitous digital platforms. The country’s 2017 cyber‑security law introduced harsher penalties for the unlawful collection and sale of personal data, raising the perennial question of how much privacy is appropriate in an era of pervasive digitisation. This post examines the legislative backdrop, the role of pioneering technologists such as Kai‑Fu Lee, the meteoric growth of platforms like WeChat, and the emergence of AI‑powered financial services such as Smart Finance. It also reflects on the broader societal implications of a surveillance‑centric model that is increasingly being mirrored in Western contexts.

Originally published on Substack: https://substack.com/home/post/p-172666849

China began enforcing a new cyber‑security law in 2017. The legislation added tougher punishments for the illegal gathering or sale of user data. The central dilemma remains: how much privacy is the right amount in the age of digitalisation? There is no definitive answer to questions about the optimal level of social monitoring needed to balance convenience and safety, nor about the degree of anonymity citizens should enjoy when attending a theatre, dining in a restaurant, or travelling on the metro. Even if we trust current authorities, are we prepared to hand the tools for classification and surveillance over to future rulers?

Kai‑Fu Lee’s Perspective on China’s Data Openness

According to Taiwanese AI pioneer Kai‑Fu Lee (2018), China’s relative openness in collecting data in public spaces gives it a head start in deploying observation‑based AI algorithms. Lee’s background lends weight to his forecasts. His 1988 doctoral dissertation was a groundbreaking work on speech recognition, and from 1990 onward he worked at Apple, Microsoft and Google before becoming a venture‑capital investor in 2009. This openness (i.e., the lack of privacy protection) accelerates the digitalisation of urban environments and opens the door to new OMO (online‑merge‑offline) applications in retail, security and transport. Pushing AI into these sectors requires more than cameras and data; creating OMO environments in hospitals, cars and kitchens demands a diverse array of sensor‑enabled hardware to synchronise the physical and digital worlds.

One of China’s most successful companies in recent years has been Tencent, which became Asia’s most valuable firm in 2016. Its secret sauce is the messaging app WeChat, launched in January 2011 when Tencent already owned two other dominant social‑media platforms: its QQ instant‑messaging service and Qzone social network each boasted hundreds of millions of users.

WeChat initially allowed users to send photos, short voice recordings and text in Chinese characters, and it was built specifically for smartphones. As the user base grew, its functionalities expanded. By 2013 WeChat had 300 million users; by 2019 that figure had risen to 1.15 billion monthly active users. It introduced video calls and conference calls several years before the American WhatsApp (today owned by Meta). The app’s success rests on its “app‑within‑an‑app” principle, allowing businesses to create their own mini‑apps inside WeChat—effectively their own dedicated applications. Many firms have abandoned standalone apps and now operate entirely within the WeChat ecosystem.

Over the years, WeChat has captured users’ digital lives beyond smartphones, becoming an Asian “remote control” that governs everyday transactions: paying in restaurants, ordering taxis, renting city bikes, managing investments, booking medical appointments and even ordering prescription medication to the doorstep.

In honour of the Chinese New Year 2014, WeChat introduced digital red envelopes—cash‑filled gifts akin to Western Christmas presents. Users could link their bank accounts to WeChat Pay and send a digital red envelope, with the funds landing directly in the recipient’s WeChat wallet. The campaign prompted five million users to open a digital bank account within WeChat.

Competition from Alipay and the Rise of Cashless Payments

Another Chinese tech titan, Jack Ma, founder of Alibaba, launched the digital payment system Alipay back in 2004. Both Alipay and WeChat enabled users to request payments via simple, printable QR codes as early as 2016. This shift has made the smartphone the primary payment method in China, to the extent that homeless individuals now beg for money by displaying QR codes. In several Chinese cities, cash has effectively disappeared.

WeChat and Alipay closely monitor users’ spending habits, building detailed profiles of consumer behaviour. China has largely bypassed a transitional cash‑payment stage: millions moved straight from cash to mobile payments without ever owning a credit card. While both platforms allow users to withdraw cash from linked bank accounts, their core services do not extend credit.

Lee (2018) notes the emergence of a service called Smart Finance, an AI‑powered application that relies solely on algorithms to grant millions of micro‑loans. The algorithm requires only access to the borrower’s phone data, constructing a consumption profile from seemingly trivial signals—such as typing speed, battery level and birthdate—to predict repayment likelihood.

Smart Finance’s AI does not merely assess the amount of money in a WeChat wallet or bank statements; it harvests data points that appear irrelevant to humans. Using these algorithmically derived credit indicators, the system achieves finer granularity than traditional scoring methods. Although the opaque nature of the algorithm prevents public scrutiny, its unconventional metrics have proven highly profitable.
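Smart Finance’s actual model is proprietary and not publicly documented, but the general idea Lee describes—scoring repayment likelihood from many individually weak signals—can be sketched as a toy logistic‑regression scorer. The feature names, weights and threshold below are invented purely for illustration:

```python
import math

def predict_repayment(features: dict, weights: dict, bias: float) -> float:
    """Logistic-regression score: probability in (0, 1) that a loan is repaid.

    Each feature contributes weight * value to a linear sum, which is then
    squashed through the sigmoid function. Real systems of this kind learn
    the weights from millions of past loans; here they are hand-picked.
    """
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical signals of the kind the text describes, normalised to 0..1.
weights = {"typing_speed": 0.8, "battery_level": 0.4, "wallet_balance": 1.5}
borrower = {"typing_speed": 0.6, "battery_level": 0.9, "wallet_balance": 0.3}

score = predict_repayment(borrower, weights, bias=-1.0)
approved = score > 0.5
```

The opacity the text mentions follows directly from this structure: a borrower sees only the approve/reject outcome, never the weights, so there is no way to know which behaviour moved the score.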

As data volumes swell, these algorithms become ever more refined, allowing firms to extend credit to groups traditionally overlooked by banks—young people, migrant workers, and others. However, the lack of transparency means borrowers cannot improve their scores because the criteria remain hidden, raising fairness concerns.

Surveillance Society: Social Credit and Ethnic Monitoring

Lee reminds us that AI algorithms are reshaping society. From a Western viewpoint, contemporary China resembles a surveillance state where continuous monitoring and a social credit system are routine. Traffic violations can be punished through facial‑recognition algorithms, with fines deducted directly from a user’s WeChat account. WeChat itself tracks users’ movements, language and interactions, acting as a central hub for social eligibility monitoring.

A Guardian article by Johana Bhuiyan (2021) reported that Huawei filed a July 2018 patent for technology capable of distinguishing whether a person belongs to the Han majority or the persecuted Uyghur minority. State‑contracted Chinese firm Hikvision has developed similar facial‑recognition capabilities for use in re‑education camps and at the entrances of nearly a thousand mosques. China denies allegations of torture and sexual violence against Uyghurs; estimates suggest roughly one million detainees in these camps.

AI‑enabled surveillance is commonplace in China and is gaining traction elsewhere. Amazon offers its facial‑recognition service Rekognition to various clients, although Amazon suspended police use of the service in June 2020 amid protests against police racism and violence. Critics highlighted Rekognition’s difficulty in correctly identifying the gender of darker‑skinned individuals—a claim Amazon disputes.

Google’s image‑recognition technology also faced backlash after software engineer Jacky Alciné discovered in 2015 that Google Photos mislabelled his African‑American friends as “gorillas.” After public outcry, Google removed the offending categories (gorilla, chimpanzee, ape) from its taxonomy (Vincent 2018).

Limits of Current AI and Future Outlook

Present‑day AI algorithms primarily excel at inference tasks and object detection. General artificial intelligence—capable of autonomous, creative reasoning—remains a distant goal. Nonetheless, we are only beginning to grasp the possibilities and risks of AI‑driven algorithms.

Is the Chinese surveillance model something citizens truly reject? Within China, the social credit system may be viewed positively by ordinary citizens who can boost their scores by paying bills promptly, volunteering and obeying traffic rules. In Europe, a quieter acceptance of similar profiling is emerging: we are already classified—often without our knowledge—through the data we generate while browsing the web. This silent consent fuels targeted advertising for insurance, lingerie, holidays, television programmes and even political persuasion. As long as we are unwilling to pay for the privilege of using social‑media platforms, those platforms will continue exploiting our data as they see fit.

Summary

China’s 2017 cyber‑security law set the stage for an expansive data‑collection regime that underpins a sophisticated surveillance economy. Visionaries like Kai‑Fu Lee highlight how openness in public‑space data fuels AI development, while corporate giants such as Tencent and Alibaba have turned messaging apps into all‑purpose digital wallets and service hubs. AI‑driven financial products like Smart Finance illustrate both the power and opacity of algorithmic credit scoring. Simultaneously, state‑backed facial‑recognition technologies target ethnic minorities, and the social‑credit system normalises continuous monitoring of everyday behaviour. These trends echo beyond China, with Western firms and governments experimenting with comparable surveillance tools. Understanding the interplay between legislation, corporate strategy and AI is essential for navigating the privacy challenges of our increasingly digitised world.


References

Bhuiyan, J. (2021). Huawei files patent to identify Uyghurs. The Guardian.
Lee, K. F. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Harper Business.
Vincent, J. (2018). Google removes offensive labels from image‑search results. BBC.

Zen and the Art of Dissatisfaction – Part 21

Data: The Oil of the Digital Age

Data applications rely fundamentally on data—its extraction, collection, storage, interpretation, and monetisation—making them arguably the most significant feature of our contemporary world. Often referred to as “the new oil,” data is, from the perspective of persistent capitalists, a valuable resource capable of sustaining economic growth even after conventional natural reserves have been exhausted. This new form of capitalism has been termed surveillance capitalism (Zuboff 2019).

Originally published on Substack: https://substack.com/@mikkoijas

Data matters more than opinions. For developers of data applications, the key goal is that we browse online, click “like,” follow links, spend time on their platforms, and accept cookies. What we actually think matters less than what we do: the digital behavioural surplus—the trace we leave behind and our consent to tracking. That footprint has become immensely valuable—companies are willing to pay for it, and sometimes break laws to get it.

Cookies and Consumer Privacy in Europe

European legislation like the General Data Protection Regulation (GDPR) ensures some personal protection, but we still leave traces even if we refuse to share personal data. Websites are legally obligated to request our cookie consent, making privacy violations more visible. Rejecting cookies and clearing them out later becomes a time-consuming and frustrating chore.

In stark contrast, China’s data laws are much more relaxed, granting companies broader operational freedom. The more data a company gathers, the more fine‑tuned its predictive algorithms can be. It’s much like environmental regulation: European firms are restricted from drilling for oil in protected areas, which reduces profit but protects nature. Chinese firms, unrestrained by such limits, may harm ecosystems while driving profits. In the data realm, restrictive laws narrow the available datasets. Because Chinese firms can harvest data freely, they may gain a major competitive edge that could help them lead the global AI market.

Data for Good: Jeff Hammerbacher’s Vision

American data scientist Jeff Hammerbacher is one of the field’s most influential figures. As journalist Steve Lohr (2015) reports, Hammerbacher started on Wall Street and later helped build Facebook’s data infrastructure. Today, he directs data collection and interpretation toward improving human lives—an ethos he shares with much of the data industry. According to Hammerbacher, we must understand the current data landscape to predict the future. Practically, this means equipping everything we care about with sensors that collect data. His current focus? Transforming medicine by centring it on data. Data science is one of the most promising fields, where evidence trumps intuition.

Hammerbacher has been particularly interested in mental health and how data can improve psychological wellbeing. His close friend and former classmate, Steven Snyder, tragically died by suicide after struggling with bipolar disorder. This event, combined with Hammerbacher’s own breakdown at age 27—after being diagnosed with bipolar disorder and generalised anxiety disorder—led him to rethink his life. He notes that mental illness is a major cause of workforce dropout and ranks third among causes of early death. Researchers are now collecting neurobiological data from those with mental health conditions. Hammerbacher calls this “one of the most necessary and challenging data problems of our time.”

Pharmaceuticals haven’t solved the issue. Selective serotonin reuptake inhibitors (SSRIs), introduced in the 1980s, have failed to deliver a breakthrough for mood disorders. These remain a leading cause of death; roughly 90% of suicides involve untreated or poorly treated mood disorders, and about 50% of Western populations are affected at some point. The greater challenge lies in defining mental wellness—should people simply adapt to lives that feel unfit?

“Bullshit Jobs” and Social Systems

Investigative anthropologist David Graeber (2018) reported that 37–40% of Western workers view their jobs as “bullshit”—work they see as socially pointless. Thus, the problem isn’t merely psychological; our entire social structure normalises employment that values output over wellbeing.

Data should guide smarter decisions. Yet as our world digitises, data accumulates faster than our ability to interpret it. As Steve Lohr (2015) notes, a 20-bed intensive care unit can generate around 160,000 data points per second—a torrent demanding constant vigilance. Still, this data deluge offers positive outcomes: continuous patient monitoring enables proactive, personalised care.

Data-driven forecasting is set to reshape society, concentrating power and wealth. Not long ago, anyone could found a company; now a single corporation with superior data could dominate an entire sector. A case in point is the partnership between McKesson and IBM. In 2009, IBM researcher Kaan Katircioglu sought data for predictive modelling. He found it at McKesson—clean datasets recording medication inventory, prices, and logistics. IBM used this to build a predictive model, enabling McKesson to optimise its warehouse near Memphis and improve delivery accuracy from 90% to 99%.
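IBM’s actual model for McKesson is not public, but the general principle—forecasting demand from historical data and ordering ahead of it—can be sketched in a few lines. The moving‑average forecast and the safety‑stock rule below are invented simplifications, not the real system:

```python
def moving_average_forecast(demand_history: list, window: int = 3) -> float:
    """Forecast next period's demand as the mean of the last `window` periods."""
    recent = demand_history[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(demand_history: list, on_hand: int, safety_stock: int = 20) -> int:
    """Order enough stock to cover the forecast plus a safety buffer."""
    forecast = moving_average_forecast(demand_history)
    needed = int(round(forecast)) + safety_stock - on_hand
    return max(needed, 0)   # never place a negative order

weekly_demand = [120, 135, 128, 140, 150]   # units shipped per week (invented)
order = reorder_quantity(weekly_demand, on_hand=60)
```

Even this crude rule illustrates the point in the text: whoever holds the cleanest demand history makes the best ordering decisions, and the advantage compounds as data accumulates.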

At present, data-mining algorithms behave as clever tools. An algorithm is simply a set of steps for solving a problem—think of a cooking recipe or the programming of a coffee machine. Even novices can produce impressive outcomes by following a good set of instructions.
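The recipe analogy can be made concrete with one of the oldest algorithms on record, Euclid’s method for finding the greatest common divisor—a fixed sequence of steps that anyone, or any machine, can follow to the same result every time:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm as a recipe:
    Step 1: if b is zero, a is the answer.
    Step 2: otherwise replace (a, b) with (b, a mod b) and repeat.
    """
    while b != 0:
        a, b = b, a % b
    return a

result = gcd(48, 18)   # the same steps yield the same answer every time
```

Nothing in the recipe requires understanding why it works; faithfully executing the steps is enough—which is precisely what makes algorithms such powerful, and such impersonal, tools.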

Historian Yuval Noah Harari (2015) provocatively suggests we are ourselves algorithms. Unlike machines, our algorithms run through emotions, perceptions, and thoughts—biological processes shaped by evolution, environment, and culture.

Summary

Personal data is the new source of extraction and exploitation—vital for technological progress yet governed by uneven regulations that determine competitive advantage. Pioneers like Jeff Hammerbacher highlight its potential for social good, especially in mental health, while revealing our complex psychology. We collect data abundantly, yet face the challenge of interpreting it effectively. Predictive systems can drive efficiency, but they can also foster monopolies. Ultimately, whether data serves or subsumes us depends on navigating its ethical, legal, and societal implications.


References

Graeber, D. (2018). Bullshit Jobs: A Theory. New York: Simon & Schuster.
Hammerbacher, J. (n.d.). [Interview in Lohr 2015].
Harari, Y. N. (2015). Homo Deus: A History of Tomorrow. New York: Harper.
Lohr, S. (2015). Data-ism: The Revolution Transforming Decision Making, Consumer Behavior, and Almost Everything Else. New York: Harper Business.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.