Zen and the Art of Dissatisfaction – Part 27

From Red Envelopes to Smart Finance

In recent years China has accelerated the intertwining of state-led surveillance, artificial-intelligence-driven finance and ubiquitous digital platforms. The country’s 2017 cyber-security law introduced harsher penalties for the unlawful collection and sale of personal data, raising the perennial question of how much privacy is appropriate in an era of pervasive digitisation. This post examines the legislative backdrop, the role of pioneering technologists such as Kai-Fu Lee, the meteoric growth of platforms like WeChat, and the emergence of AI-powered financial services such as Smart Finance. It also reflects on the broader societal implications of a surveillance-centric model that is increasingly being mirrored in Western contexts.

Originally published on Substack: https://substack.com/home/post/p-172666849

China began enforcing a new cyber‑security law in 2017. The legislation added tougher punishments for the illegal gathering or sale of user data. The central dilemma remains: how much privacy is the right amount in the age of digitalisation? There is no definitive answer to questions about the optimal level of social monitoring needed to balance convenience and safety, nor about the degree of anonymity citizens should enjoy when attending a theatre, dining in a restaurant, or travelling on the metro. Even if we trust current authorities, are we prepared to hand the tools for classification and surveillance over to future rulers?

Kai‑Fu Lee’s Perspective on China’s Data Openness

According to Taiwanese AI pioneer Kai-Fu Lee (2018), China’s relative openness in collecting data in public spaces gives it a head start in deploying observation-based AI algorithms. Lee’s background lends weight to his forecasts. His 1988 doctoral dissertation was a groundbreaking work on speech recognition, and from 1990 onward he worked at Apple, Microsoft and Google before becoming a venture-capital investor in 2009. This openness (i.e., the lack of privacy protection) accelerates the digitalisation of urban environments and opens the door to new OMO (online-merge-offline) applications in retail, security and transport. Pushing AI into these sectors requires more than cameras and data; creating OMO environments in hospitals, cars and kitchens demands a diverse array of sensor-enabled hardware to synchronise the physical and digital worlds.

One of China’s most successful companies in recent years has been Tencent, which became Asia’s most valuable listed company in 2016. Its secret sauce is the messaging app WeChat, launched in January 2011, when Tencent already owned two other dominant social-media platforms: its QQ instant-messaging service and Qzone social network, each boasting hundreds of millions of users.

WeChat initially allowed users to send photos, short voice recordings and text in Chinese characters, and it was built specifically for smartphones. As the user base grew, its functionalities expanded. By 2013 WeChat had 300 million users; by 2019 the figure had risen to 1.15 billion monthly active users. It introduced video calls and conference calls several years before the American WhatsApp (today owned by Meta). The app’s success rests on its “app-within-an-app” principle, allowing businesses to create their own mini-apps inside WeChat, effectively their own dedicated applications. Many firms have abandoned standalone apps and now operate entirely within the WeChat ecosystem.

Over the years, WeChat has captured users’ digital lives beyond smartphones, becoming an Asian “remote control” that governs everyday transactions: paying in restaurants, ordering taxis, renting city bikes, managing investments, booking medical appointments and even ordering prescription medication to the doorstep.

In honour of the Chinese New Year 2014, WeChat introduced digital red envelopes—cash‑filled gifts akin to Western Christmas presents. Users could link their bank accounts to WeChat Pay and send a digital red envelope, with the funds landing directly in the recipient’s WeChat wallet. The campaign prompted five million users to open a digital bank account within WeChat.

Competition from Alipay and the Rise of Cashless Payments

Another Chinese tech titan, Jack Ma, founder of Alibaba, launched the digital payment system Alipay back in 2004. Both Alipay and WeChat enabled users to request payments via simple, printable QR codes as early as 2016. The shift has made the phone the primary payment instrument in China, to the extent that homeless individuals now beg by displaying QR codes, and in several Chinese cities cash has all but disappeared.
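
To get a sense of how low the technical barrier is, a static payment code is nothing more than a QR image wrapping an account reference that the payer scans. The sketch below uses the open-source Python qrcode package to generate such an image; the payload string is a made-up placeholder, not the actual WeChat Pay or Alipay format.

```python
import qrcode  # pip install "qrcode[pil]"

# Made-up payload standing in for a merchant's payment reference; the real
# WeChat Pay / Alipay payload formats are proprietary and not reproduced here.
payload = "example-pay://merchant/12345?ref=market-stall-7"

img = qrcode.make(payload)     # encode the payload as a QR matrix and render it
img.save("payment_code.png")   # print the image and tape it to the counter
```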

WeChat and Alipay closely monitor users’ spending habits, building detailed profiles of consumer behaviour. China has largely bypassed a transitional cash‑payment stage: millions moved straight from cash to mobile payments without ever owning a credit card. While both platforms allow users to withdraw cash from linked bank accounts, their core services do not extend credit.

Lee (2018) notes the emergence of a service called Smart Finance, an AI-powered application that relies solely on algorithms to grant millions of micro-loans. The algorithm requires only access to the borrower’s phone data, constructing a consumption profile from seemingly trivial signals, such as how quickly the applicant types in their date of birth or how much battery charge the phone has left, to predict the likelihood of repayment.

Smart Finance’s AI does not merely assess the amount of money in a WeChat wallet or bank statements; it harvests data points that appear irrelevant to humans. Using these algorithmically derived credit indicators, the system achieves finer granularity than traditional scoring methods. Although the opaque nature of the algorithm prevents public scrutiny, its unconventional metrics have proven highly profitable.
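
Lee does not disclose how Smart Finance’s model actually works, but the general mechanics of turning many weak behavioural signals into a single lending decision can be illustrated with a minimal logistic-regression sketch. The feature names and weights below are invented for illustration; a real lender would learn them from repayment histories.

```python
import math

# Invented features and weights standing in for the kind of weak signals Lee
# describes (typing speed, battery level, and so on). Purely illustrative.
WEIGHTS = {
    "typing_speed_chars_per_sec": 0.08,
    "battery_level_pct": 0.01,
    "hours_since_last_charge": -0.03,
    "contacts_count": 0.002,
}
BIAS = -1.5

def repayment_probability(signals: dict[str, float]) -> float:
    """Combine many individually weak signals into one score (logistic regression)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in signals.items())
    return 1 / (1 + math.exp(-z))  # squash the weighted sum into a 0..1 probability

applicant = {
    "typing_speed_chars_per_sec": 4.2,
    "battery_level_pct": 63,
    "hours_since_last_charge": 9,
    "contacts_count": 412,
}
print(f"estimated repayment probability: {repayment_probability(applicant):.2f}")
```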

As data volumes swell, these algorithms become ever more refined, allowing firms to extend credit to groups traditionally overlooked by banks—young people, migrant workers, and others. However, the lack of transparency means borrowers cannot improve their scores because the criteria remain hidden, raising fairness concerns.

Surveillance Society: Social Credit and Ethnic Monitoring

Lee reminds us that AI algorithms are reshaping society. From a Western viewpoint, contemporary China resembles a surveillance state where continuous monitoring and a social credit system are routine. Traffic violations can be punished through facial-recognition algorithms, with fines deducted directly from a user’s WeChat account. WeChat itself tracks users’ movements, language and interactions, acting as a central hub for monitoring social-credit standing.

A Guardian article by Johana Bhuiyan (2021) reported that Huawei filed a July 2018 patent for technology capable of distinguishing whether a person belongs to the Han majority or the persecuted Uyghur minority. State-contracted Chinese firm Hikvision has developed similar facial-recognition capabilities for use in re-education camps and at the entrances of nearly a thousand mosques. China denies allegations of torture and sexual violence against Uyghurs; estimates suggest roughly one million detainees in these camps.

AI-enabled surveillance is commonplace in China and is gaining traction elsewhere. Amazon offers its facial-recognition service Rekognition to various clients, although Amazon suspended police use of the service in June 2020 amid protests against police racism and violence. Critics highlighted Rekognition’s difficulty in correctly identifying the gender of darker-skinned individuals, a claim Amazon disputes.

Google faced a similar backlash after software engineer Jacky Alciné discovered in 2015 that Google Photos’ image-labelling feature had mislabelled his African-American friends as “gorillas.” After public outcry, Google removed the offending categories (gorilla, chimpanzee, ape) from its taxonomy (Vincent 2018).

Limits of Current AI and Future Outlook

Present-day AI algorithms excel primarily at narrow pattern-recognition tasks such as classification and object detection. General artificial intelligence, capable of autonomous, creative reasoning, remains a distant goal. Nonetheless, we are only beginning to grasp the possibilities and risks of AI-driven algorithms.

Is the Chinese surveillance model something citizens truly reject? Within China, the social credit system may be viewed positively by ordinary citizens who can boost their scores by paying bills promptly, volunteering and obeying traffic rules. In Europe, a quieter acceptance of similar profiling is emerging: we are already classified—often without our knowledge—through the data we generate while browsing the web. This silent consent fuels targeted advertising for insurance, lingerie, holidays, television programmes and even political persuasion. As long as we are unwilling to pay for the privilege of using social‑media platforms, those platforms will continue exploiting our data as they see fit.

Summary

China’s 2017 cyber‑security law set the stage for an expansive data‑collection regime that underpins a sophisticated surveillance economy. Visionaries like Kai‑Fu Lee highlight how openness in public‑space data fuels AI development, while corporate giants such as Tencent and Alibaba have turned messaging apps into all‑purpose digital wallets and service hubs. AI‑driven financial products like Smart Finance illustrate both the power and opacity of algorithmic credit scoring. Simultaneously, state‑backed facial‑recognition technologies target ethnic minorities, and the social‑credit system normalises continuous monitoring of everyday behaviour. These trends echo beyond China, with Western firms and governments experimenting with comparable surveillance tools. Understanding the interplay between legislation, corporate strategy and AI is essential for navigating the privacy challenges of our increasingly digitised world.


References

Bhuiyan, J. (2021). Huawei files patent to identify Uyghurs. The Guardian.
Lee, K. F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Harper Business.
Vincent, J. (2018). Google removes offensive labels from image-search results. BBC.

Zen and the Art of Dissatisfaction – Part 25

Exponential Futures

Throughout history, humanity has navigated the interplay between population growth, technological progress, and ethical responsibility. As automation, artificial intelligence, and biotechnology advance at exponential rates, philosophers, scientists, and entrepreneurs have raised profound questions: Are we heading towards liberation from biological limits, or into a new form of dependency on machines? Can we satisfy our dissatisfaction with more intelligent machines and unlimited growth? What would be enough? The following post explores these dilemmas, drawing from historical parables, the logic of Moore’s law, transhumanism, and the latest breakthroughs in artificial intelligence.

“The current explosive growth in population has frighteningly coincided with the development of technology, which, due to automation, makes large parts of the population ‘superfluous’, even as labour. Because of nuclear energy, this double threat can be tackled with means beside which Hitler’s gas chambers look like the malicious child’s play of an evil brat.”
– Hannah Arendt

Originally published on Substack: https://substack.com/inbox/post/171630771

Our technological development has been tied to Moore’s law. Named after Gordon Moore, a co-founder of Intel, one of the world’s largest semiconductor manufacturers, the law states that the number of transistors on a microchip doubles roughly every 18–24 months. As a result, chips become more powerful while their price falls. Moore’s prediction in 1965 has remained remarkably accurate, as innovation has kept the process alive long past the point when the laws of physics should have slowed it down. This type of growth is called exponential: seemingly slow initial development that then accelerates at an unexpected pace.
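
As a rough illustration of what a fixed doubling period implies, the sketch below projects transistor counts forward from the roughly 2,300 transistors of Intel’s first microprocessor in 1971, assuming an idealised two-year doubling; real chips only approximate this curve.

```python
# Idealised Moore's law: N(t) = N0 * 2 ** (elapsed_years / doubling_period).
# The 1971 baseline (~2,300 transistors) and the 2-year period are round-number
# assumptions for illustration, not precise industry data.
N0, START_YEAR, DOUBLING_YEARS = 2_300, 1971, 2.0

def transistors(year: int) -> float:
    return N0 * 2 ** ((year - START_YEAR) / DOUBLING_YEARS)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```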

A Parable of Exponential Growth

The Islamic scholar Ibn Khallikan described the logic of exponential growth in a tale from 1256. According to the story, chess originated in India during the 6th century. Its inventor travelled to Pataliputra and presented the game to the emperor. Impressed, the ruler offered him any reward. The inventor requested rice, calculated using the chessboard: one grain on the first square, two on the second, four on the third, doubling with each square.

Such exponential growth seems modest at first, but summed over all 64 squares the request comes to more than 18 quintillion grains, on the order of a trillion tonnes. By comparison, the world currently produces about 772 million tonnes of wheat annually; the inventor’s demand thus exceeded a full year’s wheat harvest more than a thousandfold. The crucial lesson lies not in the exact quantity but in the speed at which exponential processes accelerate.
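
The arithmetic is easy to check. The per-grain mass below is an assumption (a wheat-sized grain of about 65 milligrams, as in the classic telling of the tale), so the exact tonnage shifts with that figure, but the order of magnitude does not.

```python
# Chessboard parable: the grain count doubles on each of the 64 squares.
GRAIN_MASS_G = 0.065            # assumed mass of one grain, in grams
WHEAT_TONNES_PER_YEAR = 772e6   # approximate annual world wheat production

total_grains = sum(2 ** square for square in range(64))   # = 2**64 - 1
total_tonnes = total_grains * GRAIN_MASS_G / 1_000_000    # grams -> tonnes

print(f"total grains: {total_grains:,}")    # 18,446,744,073,709,551,615
print(f"total tonnes: {total_tonnes:.2e}")  # ~1.2e12, i.e. about a trillion tonnes
print(f"years of world wheat production: {total_tonnes / WHEAT_TONNES_PER_YEAR:,.0f}")
```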

The central question remains: at what stage of the chessboard are we today in terms of microchip development? According to Moore’s law, we are heading towards an increasingly technological future. Futurists such as Ray Kurzweil, a director of engineering at Google, believe that transhumanism is the only viable path for humanity to collaborate with AI. Kurzweil predicts that artificial intelligence will surpass human mental capabilities by 2045.

Transhumanism posits that the limits of the human biological body are a matter of choice. For transhumanists, ageing should be voluntary, and cognitive capacities should lie within individual control. Kurzweil forecasts that by 2035 nanobots will be implanted in our brains to connect with neurons, upgrading both mental and physical abilities. The aim is to prevent humans from becoming inferior to machines, preserving self-determination.

The Intelligence of Machines – Real or Illusion?

Yet artificial intelligence has not, until recently, been very intelligent. Algorithms can process data and make deductions, but image recognition, for example, has long struggled with tasks a child could solve instantly. A child, even after seeing a school bus once, can intuitively identify it; an algorithm, trained on millions of images, may still fail under slightly altered conditions. This gap between human intuition and machine logic underscores the challenge.

Nevertheless, AI is evolving rapidly. Vast financial resources drive competition over the future of intelligence and power.

The South African-born Elon Musk, a co-founder of Neuralink, has already demonstrated an implant that allows a monkey named Pager to play video games using only thought. Musk suggests such implants could treat depression, Alzheimer’s disease, and paralysis, and even restore sight to the blind.

Though such ideas may sound outlandish, history suggests that visionary predictions often materialise sooner than expected.

The Warnings of Tristan Harris

Tristan Harris, who leads the non-profit Center for Humane Technology, has been at the heart of Silicon Valley’s AI story, from Apple internships to Instagram development and work at Google. In 2023, alongside Aza Raskin, he warned of AI’s dangers. Their presentation demonstrated AI systems capable of cloning a human voice within seconds, or reconstructing mental images from fMRI brain scans.

AI models have begun to exhibit unexpected abilities. A system trained in English suddenly understands Persian. ChatGPT, launched by OpenAI, has independently learned advanced chemistry, even though it was never explicitly trained in the subject. Algorithms now improve themselves, rewriting code to double its speed, creating new training data, and exhibiting exponential capability growth. Experts foresee improvements at double-exponential rates, represented on a graph as a near-vertical line surging upwards.

Conclusion

The trajectory of human civilisation now intertwines with exponential technological growth. From the rice-on-the-chessboard parable to Moore’s law and the visions of Kurzweil, Musk, and Harris, the central issue remains: will humanity adapt, or will machines redefine what it means to be human? The pace of change is no longer linear, and as history shows, exponential processes accelerate suddenly, leaving little time to adjust.


References

Arendt, H. (1963). Eichmann in Jerusalem: A report on the banality of evil. Viking Press.
Harris, T., & Raskin, A. (2023). The AI dilemma [Presentation]. Center for Humane Technology.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking.
Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics, 38(8).

Zen and the Art of Dissatisfaction – Part 20

The Triple Crisis of Civilisation

“At the time I climbed the mountain or crossed the river, I existed, and the time should exist with me. Since I exist, the time should not pass away. […] The ‘three heads and eight arms’ pass as my ‘sometimes’; they seem to be over there, but they are now.”

Dōgen

Introduction

This blog post explores the intertwining of ecology, technology, politics and data collection through the lens of modern civilisation’s crises. It begins with a quote by the Japanese Zen master Dōgen, drawing attention to the temporal nature of human existence. From climate emergency to digital surveillance, from Brexit to barcodes, the post analyses how personal data has become the currency of influence and control.


Originally published on Substack: https://mikkoijas.substack.com/

The climate emergency currently faced by humanity is only one of the pressing concerns regarding the future of civilisation. A large-scale ecological crisis is an even greater problem—one that is also deeply intertwined with social injustice. A third major concern is the rapidly developing situation created by technology, which is also connected to problems related to nature and the environment.

Cracks in the System: Ecology, Injustice, and the Digital Realm

The COVID-19 pandemic revealed new dimensions of human interaction. We are dependent on technology-enabled applications to stay connected to the world through computers and smart devices. At the same time, large tech giants are generating immense profits while all of humanity struggles with unprecedented challenges.

Brexit finally came into effect at the start of 2021. On Epiphany of that same year, angry supporters of Donald Trump stormed the United States Capitol. Both Brexit and Trump are children of the AI era. Using algorithms developed by Cambridge Analytica, the Brexit campaign and Trump’s 2016 presidential campaign were able to identify voters who were unsure of their decisions. These individuals were then targeted via social media with marketing and curated news content to influence their opinions. While the data for this manipulation was gathered online, part of the campaigning also happened offline, as campaign offices knew where undecided voters lived and how to sway them.

I have no idea how much I am being manipulated when browsing content online or spending time on social media. As I move from one website to another, cookies accumulate, serving me personalised content and tailored ads. Algorithms working behind websites monitor every click and search term, and AI-based systems form their own opinion of who I am.

Surveillance and the New Marketplace

A 2013 study applied a statistical analysis algorithm to the likes of 58,000 Facebook users. The algorithm guessed users’ sexual orientation with 88% accuracy, skin colour with 95% accuracy, and political orientation with 85% accuracy. It also guessed with 75% accuracy whether a user was a smoker (Kosinski et al., 2013).

Companies like Google and Meta Platforms (which includes Facebook, Instagram, Messenger, Threads, and WhatsApp) compete for users’ attention and time. Their clients are not individuals like me, but advertisers: these companies operate under an advertising-based revenue model, and users like me are the ones whose attention and time are being competed for.

Facebook and other similar companies that collect data about users’ behaviour will presumably have a competitive edge in future AI markets. Data is the oil of the future. Steve Lohr, long-time technology journalist at the New York Times, wrote in 2015 that data-driven applications will transform our world and behaviour just as telescopes and microscopes changed our way of observing and measuring the universe. The main difference with data applications is that they will affect every possible field of action. Moreover, they will create entirely new fields that have not previously existed.

In computing, the word “data” refers to numbers, letters or images as such, without specific meaning. A data point is an individual unit of information; generally, any single fact can be considered a data point. In a statistical or analytical context, a data point is derived from a measurement or a study, and the term is often simply the singular form of data.

From Likes to Lives: How Behaviour Becomes Prediction

Decisions and interpretations are created from data points through a variety of processes and methods, enabling individual data points to form applicable information for some purpose. This process is known as data analysis, through which the aim is to derive interesting and comprehensible high-level information and models from collected data, allowing for various useful conclusions to be drawn.

A good example of a data point is a Facebook like. A single like is not much in itself and cannot yet support major interpretations. But if enough people like the same item, even a single like begins to mean something significant. The 2016 United States presidential election brought social media data to the forefront. The British data analytics firm Cambridge Analytica gained access to the profile data of millions of Facebook users.

The data analysts hired by Cambridge Analytica could make highly reliable stereotypical conclusions based on users’ online behaviour. For example, men who liked the cosmetics brand MAC were slightly more likely to be homosexual. One of the best indicators of heterosexuality was liking the hip-hop group Wu-Tang Clan. Followers of Lady Gaga were more likely to be extroverted. Each such data point is too weak on its own to support a reliable prediction. But when there are tens, hundreds or thousands of data points, reliable predictions about users’ thoughts can be made. Based on roughly 300 likes, social media knows a user about as well as their spouse does.
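
A minimal sketch of how such weak signals compound: each like nudges a probability estimate by a small likelihood ratio, and hundreds of nudges add up to near-certainty. The page names and ratios below are invented for illustration and are not taken from the Kosinski study or from Cambridge Analytica.

```python
import math

# Invented likelihood ratios for a hypothetical trait: values above 1 nudge the
# estimate up, values below 1 nudge it down. No single like is decisive.
LIKELIHOOD_RATIOS = {
    "MAC Cosmetics": 1.3,
    "Wu-Tang Clan": 0.7,
    "Lady Gaga": 1.2,
}

def trait_posterior(likes: list[str], prior: float = 0.5) -> float:
    """Naive-Bayes-style update: accumulate log likelihood ratios over all likes."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += sum(math.log(LIKELIHOOD_RATIOS.get(page, 1.0)) for page in likes)
    return 1 / (1 + math.exp(-log_odds))

print(round(trait_posterior(["MAC Cosmetics"]), 2))                    # one like: ~0.57, tells little
print(round(trait_posterior(["MAC Cosmetics", "Lady Gaga"] * 40), 2))  # dozens of likes: ~1.0
```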

The collection of data is a problem. Another issue is the indifference of users. A large portion of users claim to be concerned about their privacy, while simultaneously worrying about what others think of them on social platforms that routinely violate their privacy. This contradiction is referred to as the Privacy Paradox. Many people claim to value their privacy, yet are unwilling to pay for alternatives to services like Facebook or Google’s search engine. These platforms operate under an advertising-based revenue model, generating profits by collecting user data to build detailed behavioural profiles. While they do not sell these profiles directly, they monetise them by selling highly targeted access to users through complex ad systems—often to the highest bidder in real-time auctions. This system turns user attention into a commodity, and personal data into a tool of influence.

The Privacy Paradox and the Illusion of Choice

German psychologist Gerd Gigerenzer, who has studied the use of bounded rationality and heuristics in decision-making, writes in his excellent book How to Stay Smart in a Smart World (2022) that targeted ads usually do not even reach consumers, as most people find ads annoying. For example, eBay no longer pays Google for targeted keyword advertising because they found that 99.5% of their customers came to their site outside paid links.

Gigerenzer calculates that Facebook could charge users for its service. Facebook’s ad revenue in 2022 was about €103.04 billion. The platform had approximately 2.95 billion users. So, if each user paid €2.91 per month for using Facebook, their income would match what they currently earn from ads. In fact, they would make significantly more profit because they would no longer need to hire staff to sell ad space, collect user data, or develop new analysis tools for ad targeting.
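
Gigerenzer’s back-of-the-envelope calculation can be reproduced directly from the figures quoted above:

```python
# Break-even subscription fee implied by the figures cited in the text.
ad_revenue_eur = 103.04e9   # Facebook's 2022 ad revenue, as quoted
users = 2.95e9              # approximate number of users, as quoted

monthly_fee = ad_revenue_eur / users / 12
print(f"break-even fee: ~{monthly_fee:.2f} EUR per user per month")  # ~2.91
```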

According to Gigerenzer’s study, 75% of people would prefer that Meta Platforms’ services remain free, despite privacy violations, targeted ads, and related risks. Of those surveyed, 18% would be willing to pay a maximum of €5 per month, 5% would be willing to pay €6–10, and only 2% would be willing to pay more than €10 per month.

But perhaps the question is not about money in the sense that Facebook would forgo ad targeting in exchange for a subscription fee. Perhaps data is being collected for another reason. Perhaps the primary purpose isn’t targeted advertising. Maybe it is just one step toward something more troubling.

From Barcodes to Control Codes: The Birth of Modern Data

But how did we end up here? Today, data is collected everywhere. A good everyday example of our digital world is the barcode. In 1948, Bernard Silver, a technology student in Philadelphia, overheard a local grocery store manager asking his professors whether they could develop a system that would allow purchases to be scanned automatically at checkout. Silver and his friend Norman Joseph Woodland began developing a visual code based on Morse code that could be read with a light-based scanner. Their research only became standardised as the current barcode system in the early 1970s. Barcodes have enabled a new form of logistics and more efficient distribution of products. Products have become data, whose location, packaging date, expiry date, and many other attributes can be tracked and managed by computers in large volumes.
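
The arithmetic hidden inside every modern barcode is modest but telling: the EAN-13 codes printed on most products today end in a check digit that a scanner recomputes on every read, turning each printed label into self-verifying data. The sketch below shows that standard calculation for an arbitrary sample code.

```python
def ean13_check_digit(first12: str) -> int:
    """Check digit for an EAN-13 barcode: weight digits alternately by 1 and 3."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

code = "4006381333931"  # a sample EAN-13 number; the last digit is the check digit
assert ean13_check_digit(code[:12]) == int(code[-1])  # the validity test a scanner performs
print("valid barcode:", code)
```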

Conclusion

We are living in a certain place in time, as Dōgen described—an existence with a past and a future. Today, that future is increasingly built on data: on clicks, likes, and digital traces left behind.

As ecological, technological, and political threats converge, it is critical that we understand the tools and structures shaping our lives. Data is no longer neutral or static—it has become currency, a lens, and a lever of power.


References

Gigerenzer, G. (2022). How to stay smart in a smart world: Why human intelligence still beats algorithms. Penguin.

Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behaviour. Proceedings of the National Academy of Sciences, 110(15), 5802–5805. https://doi.org/10.1073/pnas.1218772110

Lohr, S. (2015). Data-ism: The revolution transforming decision making, consumer behavior, and almost everything else. HarperBusiness.

Dōgen / Sōtō Zen Text Project. (2023). Treasury of the True Dharma Eye: Dōgen’s Shōbōgenzō (Vols. I–VII, Annotated trans.). Sōtōshū Shūmuchō, Administrative Headquarters of Sōtō Zen Buddhism.