Written by Brian Weatherby, Policy Researcher.
For over a decade, the Chinese Communist Party (CCP) has been developing a system for monitoring and managing citizen behaviour known as ‘social credit’. The social credit system draws on mass data collection and analysis to assign citizens a credit score reflecting their “trustworthiness” and then allocate corresponding rewards and punishments. A similar program of technological surveillance and control currently operating in western China has been used to wage a campaign of genocide against the country’s Muslim minorities, raising urgent questions about data protection and the right to privacy in the Information Age.
What is China’s social credit system?
For over a decade, the Chinese Communist Party (CCP) has been developing ‘social credit’, a system for monitoring and managing citizen behaviour. It draws on mass data collection and analysis to assign citizens, businesses, and government officials a credit score reflecting their trustworthiness as members of society. Citizens with good credit scores are eligible for benefits such as priority admission to schools, faster career advancement, and tax breaks, while those with bad credit face punishments such as travel restrictions, ineligibility for certain jobs, and reduced access to social services.
While this system is still in its infancy, there are already grim portents of the ways in which it might be used to silence dissent, suppress human rights, and facilitate crimes against humanity. In Xinjiang, the government has used the Uyghur population and other Muslim minority groups as test subjects for a distinct, though not dissimilar, surveillance and control program that has facilitated the mass incarceration, sterilization, and exploitation of well over a million people and raised urgent questions about how the right to privacy can be protected in the Information Age.
How did this system come about?
Lack of trust – in businesses, in government, in each other – has long been a problem in modern Chinese society. Repeated scandals involving tainted food, financial fraud, and corrupt officials have eroded public confidence in societal institutions – a situation that Chinese academics and Party officials say undermines governance and acts as a drag on the economy. The favoured solution to this crisis of confidence has been the implementation of a rating system loosely modelled on financial credit systems in the West. However, this system wouldn’t simply assess one’s suitability for a loan; it would seek to measure one’s ‘trustworthiness’ as a member of society.
The idea for such a social credit system has been around since the 1990s, but it wasn’t until the release of a State Council planning outline in 2014 that the idea crystallized into policy. In that document, the Party outlined its vision for a unified national credit system by 2020. While much progress has been made in the intervening years, the reality falls somewhat short of a fully functional nationwide system. Rather, the system in its current form is a network of pilot programs run by local governments and private enterprises. These pilot programs measure different sets of data, use different rating schemes, and impose different consequences. This diversity of systems has made assessing social credit a fraught undertaking and contributed to many misconceptions in the media over how an eventual national system will function. For our purposes, we will focus primarily on the social credit system for individuals, drawing on some of the most well-established programs (such as the regional system in Rongcheng) to form our assumptions.
How is credit built and lost?
In the social credit system, credit is built through behaviour that demonstrates your trustworthiness, or ‘creditworthiness’, as a citizen. While this includes financial behaviour, such as paying your bills on time, it also encompasses a range of legal, social, and political behaviours. Behaviours that increase your credit include maintaining a good financial history, engaging in charity work, donating blood, caring for elderly family members, and praising the government. Behaviours that decrease your credit range from the fairly innocuous, such as traffic offences and cheating in online games, to the ominous, such as illegally protesting against the authorities, spreading religion, posting anti-government messages on social media, and “spreading rumours”.
What are the consequences?
The consequences of one’s credit score touch on nearly all facets of life, providing the worthy with opportunities and the unworthy with restrictions. A worthy citizen might be provided with access to better schools and universities, faster promotions at work, even shorter wait times at the hospital. Conversely, the unworthy might find themselves restricted from purchasing plane and train tickets, hindered in finding employment and education, and publicly shamed on electronic billboards or with dial tones identifying them as ‘untrustworthy’. Evidence from some private sector pilot programs suggests that a person’s credit score may not be limited to their own behaviour but could be affected by the scores of their social network, thereby encouraging people to distance themselves from those with bad credit, lest they too be deemed untrustworthy.
How is social credit data collected and analyzed?
China has long been a surveillance state, from the watchful eyes of neighbours and party cadres in the time of Mao to modern ‘grid policing’ under Hu Jintao. These forms of surveillance relied largely on human intelligence – neighbours, friends, and family reporting on each other’s behaviour. With the evolution of new technologies and the migration of daily life to the internet, the CCP’s information gathering capabilities have been vastly expanded and refined. Now, in addition to traditional sources of information gathered through human intelligence, tax records, financial transactions, school and employment histories, and various government agencies, the CCP works in close concert with internet service providers and technology companies to monitor citizens’ online behaviour. This includes tracking and analyzing users’ search histories, social media activity, and online shopping habits, as well as the vast swathes of information gathered by the rapidly expanding Internet of Things. In the near future, as China leads the development of “smart cities” (urban areas that integrate the Internet of Things to optimize the efficiency of city operations) the social credit system will increasingly draw on technology embedded in everyday life. This will include omnipresent networks of chips, sensors, and cameras that monitor and analyze citizens’ movements, interactions, and consumption habits.
However, it is not only the volume and variety of data collected that is cause for concern but the manner in which it is used. Whereas much of this data used to exist in silos – as individual threads of information confined to an organization – a major goal of the social credit system is to knit these threads together into a web that connects all members of society, all aspects of life, and all regions of the country. The spinning of this web is entrusted to artificial intelligence platforms built on complex algorithms that can process a person’s intimate details, build highly accurate behavioural models, compute their trustworthiness, and assign a score that shapes their future. As this system trains itself on larger volumes of data and grows more sophisticated, it will be better able to identify and regulate behaviours that signal ‘untrustworthiness’.
What are the implications for human rights?
Clearly, a system designed in such a way poses stark and troubling implications for human rights. The right to privacy is non-existent. The rights to speech, expression, and association are severely handicapped. For ethnic minorities in China, the burden is heavier still. In a system designed to produce ‘model citizens’, the critical question is: in whose image is the model citizen modelled? Social credit, designed to automate the process of assimilation into a culture defined by the CCP, doesn’t allow for diversity of thought or cultural expression and exerts its powers of coercion against those who don’t conform. In the worst-case scenario, this system has the potential to facilitate genocide and other crimes against humanity. If that sounds alarmist, look no further than Xinjiang, the autonomous region in western China where a similar, though distinct, platform for mass data collection and analysis is currently being used as a tool of oppression against its 12 million Uyghurs.
Since 2015, the Chinese government has waged a genocidal campaign against the Uyghurs and other Muslim ethnic minorities in western China. It has involved executions, enforced disappearances, mass sterilizations, the forced transfer of over 500,000 Uyghur children to state boarding schools, and the imprisonment of over a million men and women in concentration camps – the largest internment of an ethnic group since the Holocaust. It is the most technologically sophisticated genocide the world has ever seen, and it is made possible largely through the use of big-data collection and analysis.
The Integrated Joint Operations Platform
The core technology enabling this abuse is the Integrated Joint Operations Platform (IJOP) – an intelligence-sharing and data analysis tool that has been used extensively by the CCP to identify and categorize ‘threats’ among targeted minority populations in Western China. It works by harvesting vast amounts of data (from human intelligence, online surveillance, facial recognition cameras, and more), consolidating it on a single platform, and algorithmically determining and flagging untrustworthy behaviour. Untrustworthy people then face a rising scale of punishments based on their threat classification. If this sounds familiar, it should. The IJOP provides a glimpse of how a future social credit system may be weaponized against minority groups.
This system relies on massive amounts of data to function, and collecting it requires the participation of the entire government. According to a recent Human Rights Watch report, these data sources include “national identification documents, Xinjiang’s countless checkpoints, closed-circuit cameras with facial recognition, spyware that the police force Uighurs to install on their phones, ‘Wi-Fi sniffers,’ which collect identifying information on smartphones and computers, and package delivery,” as well as existing government records, “such as one’s vehicle ownership, health, family planning, banking, and legal records.” It also includes the invasive “home stay” program, in which 1.1 million Party cadres are mobilized every two months to spend five days in a Uyghur household. While there, the officials collect and update data on the families, observe and report on any “unusual situations”, and teach the benefits of Xi Jinping Thought and the dangers of Islam. These measures conspire to make the Uyghurs the most heavily monitored people on earth.
All of this data is fed into the IJOP system, where it is collated and linked to a resident’s national ID card number. Predictive policing algorithms are trained to pick up on clues that indicate untrustworthiness and to flag irregular patterns of behaviour. Once suspicious activity has been flagged, it is pushed to a mobile app installed on security force devices. Local officials are required to act immediately on these push notifications, investigating suspicious persons and taking the appropriate measures based on their threat classification. Leaked internal bulletins from the Xinjiang security bureau detail how, in a one-week period in 2017, the IJOP system flagged 24,412 suspicious persons. One bulletin goes on to note, “after conducting verification and handling work, 706 were criminally detained…15,683 were sent to education and training…2,096 were put under preventative surveillance.” Bear in mind that those numbers represent a single week’s worth of activity in an operation that has been running since 2015.
What behaviours raise red flags?
While the exact workings of the IJOP’s algorithm are shrouded in secrecy, we do know some of the behaviours that trigger red flags thanks to the release of the Karakax List and other document leaks. In the Karakax List, the most common reason cited for internment in a re-education camp was violation of the two-child policy – a move clearly calculated to reduce Uyghur fertility. Many ‘untrustworthy’ behaviours are in fact cultural practices, such as whether a man wears a beard or a woman a veil, how often a person prays, and whether they observe certain burial rites. Still other behaviours are seemingly trivial: not socializing with neighbours; avoiding using the front door; storing large amounts of food; using more electricity than normal; and being in possession of too many books without an explanation.
The consequences for those flagged as untrustworthy vary in severity and are determined by the threat rating generated by the IJOP system. These consequences can include criminal imprisonment, internment in a re-education camp, house arrest, additional surveillance, and restrictions on travel inside and outside the country. For those sent to an internment camp, the consequences don’t end at the camp gates. Internment is frequently treated as proof of untrustworthiness and used to justify the removal of children to boarding schools, as grounds for sterilization, and as the basis for placement in ‘work release’ programs (forced labour) following a sentence.
A look under the hood of the IJOP system lays bare the problems of predictive policing. At best, many of the behaviours flagged are arbitrary and ineffectual if the government’s goal is to prevent terrorism. At worst, they are deeply racist and have enabled a calculated genocide to be perpetrated against the Uyghurs and other minorities. By turning cultural markers, such as dress, language, and prayer, into criminal markers, the IJOP system is effectively criminalizing Uyghur culture. When culture is defined as a problem of security, the tools of security become the only logical response, with devastating effects.
It seems evident that a social credit system is incompatible with fundamental human rights and opens the door to a host of abuses. Such a system tolerates no diversity of thought, expression, or culture and is outright hostile to those whose lifestyles don’t conform to the Party’s definition of a model citizen. Privacy necessarily ceases to exist, as the system is predicated on maintaining a comprehensive set of data on every citizen across the full spectrum of their activity. In the era of big data, this underscores the right to privacy as a defining frontier in the fight for human rights.