How Data Trusts can help us fight the pandemic and fix surveillance capitalism

George Zarkadakis
4 min read · Sep 19, 2020

Many governments, in order to deal with a likely resurgence of the pandemic in the fall, are launching track-and-trace apps. Citizen data are to be collected, analysed and used to identify, or predict, clusters and upticks in Covid-19 cases, and thus help isolate the infected and limit the scope and size of future lockdowns. The utility of such apps is enormous. Nevertheless, collecting such sensitive data from citizens poses a huge political dilemma for liberal democracies, which are meant to protect individual privacy as a fundamental civil right. Because of the nature of the challenge, it does not make sense to anonymize or de-identify the data in order to protect individual privacy; indeed, the opposite is necessary: the government needs to know exactly who may be infected, as well as whom they may have come into contact with, in order to take appropriate action. At the core of this political dilemma is the apparent tension between utility and privacy, but that tension exists only because the government both collects the data and uses it. What if we separated the two, the collection from the use?

Authoritarian governments can use face recognition to control citizen behaviour and mindset

What if there were an independent organization that collected citizen data and had fiduciary responsibility for governing its proper use? Such an organization could be run by “trustees” who would have to adhere to strict policies about the collection and storage of citizen data, and about access to it by any third party, be it the government or a private corporation. Moreover, citizens would have a direct say in how the organization is governed. They would elect the trustees, audit their performance, and vote on the organization’s constitution and policies. Such model organizations exist, and they are called Data Trusts. They are essentially a legal construct that enables the ethical and compliant collection and sharing of sensitive data by introducing the concept of fiduciary responsibility. This legal concept, applied to data, addresses a multitude of current problems in data governance, the most important of which is the power asymmetry between individual citizens and governments or corporations. For, although there are laws that enshrine the need for user consent to the use of their data, in practice refusing consent can lead to social exclusion (for example, not participating in social networks), denial of service (by not accepting the Terms and Conditions of a useful app), or joblessness (by not sharing employment data, credit history, and so on).

The key elements of a Data Trust (Credit: Sightline Innovation)
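To make these elements concrete, here is a minimal, hypothetical sketch (in Python) of how a Data Trust separates collection from use: the trust holds the data, trustees set access policies, and a third party such as a public-health agency only receives the fields that those policies allow. The class names, policy, and data fields are illustrative assumptions, not an existing system or API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch only: names, fields and policies are illustrative.

@dataclass
class AccessRequest:
    requester: str        # e.g. "public_health_agency"
    purpose: str          # declared purpose, e.g. "contact_tracing"
    fields: List[str]     # which data fields are requested

@dataclass
class DataTrust:
    # Trustee-approved policies: every policy must approve a request.
    policies: List[Callable[[AccessRequest], bool]] = field(default_factory=list)
    _store: Dict[str, dict] = field(default_factory=dict)  # citizen data held by the trust

    def collect(self, citizen_id: str, record: dict) -> None:
        """Collection: data flows into the trust, not directly to the data user."""
        self._store[citizen_id] = record

    def grant_access(self, request: AccessRequest) -> Dict[str, dict]:
        """Use: a third party sees data only if every trustee policy allows it,
        and then only the fields it asked for."""
        if not all(policy(request) for policy in self.policies):
            raise PermissionError(f"Request by {request.requester} denied by trust policy")
        return {
            cid: {k: v for k, v in rec.items() if k in request.fields}
            for cid, rec in self._store.items()
        }

# One policy a board of trustees might adopt: health purposes only,
# and never raw location history.
def health_purpose_only(req: AccessRequest) -> bool:
    return req.purpose == "contact_tracing" and "location_history" not in req.fields

trust = DataTrust(policies=[health_purpose_only])
trust.collect("citizen-001", {"exposure_contacts": ["citizen-042"], "location_history": ["..."]})
granted = trust.grant_access(
    AccessRequest(requester="public_health_agency", purpose="contact_tracing",
                  fields=["exposure_contacts"])
)
print(granted)  # {'citizen-001': {'exposure_contacts': ['citizen-042']}}
```

The point of the sketch is the separation of concerns: the collector (the trust) never hands over its full store, and the user (the agency) never touches data that the citizens’ elected trustees have not sanctioned.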

Data Trusts are still in their infancy. Before the pandemic, the City of Toronto was considering a Data Trust whose purpose was to collect data from a multitude of sensors in smart-city projects. The Open Data Institute in the UK has already piloted a Data Trust in the Borough of Greenwich. In the context of a smart city, the stakes are very high when it comes to the collection and use of personal data. There is an ongoing debate about what Shoshana Zuboff calls “surveillance capitalism”, whereby our individual data are collected by big tech and used for profit, often without our consent, and in any case without us gaining any of the enormous value that our data create for the few in the emerging AI economy. Beyond surveillance there is also social injustice. Facial recognition systems, for example, are biased against minorities because of the way data are used to train their algorithms. One cannot expect private companies to take fiduciary responsibility for the proper use of our data. However sincere their intentions may be, their fiduciary responsibility will always be to their shareholders. That is why it is important to separate the collection of data from its use, and why a legal construct like a Data Trust can empower citizens to have more control over their “digital avatars”, as well as resolve the political dilemma of privacy versus utility.

However, there may be an even greater prize in rethinking how we govern personal data through the use of Data Trusts. As our economies are dramatically transformed by AI and data, “business as usual” can only lead to greater income and wealth inequality. Such inequality will be compounded by work automation driven by AI, which will render millions of people permanently unemployed in the name of economic efficiency and corporate profits. It is doubtful that liberal democracies can survive this double whammy of huge economic inequality and social injustice. Data Trusts could provide at least part of the solution for greater economic equality in an AI economy powered by data, the so-called “new oil”, by capturing some of the economic value that data create and redistributing it back to the data owners. As an example, think of a Data Trust that collects the personal data of communities living in a smart city, and then makes those data available, for a licence fee, to a third party that needs them to train its AI algorithms and develop its digital products. In such a scenario, a Data Trust is not just an intermediary that resolves the privacy-versus-utility dilemma but, more importantly, a significant player in the emerging data and AI ecosystem of the Fourth Industrial Revolution. We can therefore imagine democratically governed co-operatives of Data Trusts that administer the data commons of smart cities, patient groups, gig workers, or industry associations, and transform the future AI economy from a zero-sum game that works only for the few into a positive-sum game where everyone wins.
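As a back-of-the-envelope illustration of that redistribution, here is a short, hypothetical Python sketch in which a trust charges a licence fee for policy-approved access to its data commons and splits the proceeds equally among its members. The fee amount, the member list, and the equal split are assumptions for the example only, not a proposal for how a real trust would set its economics.

```python
from collections import defaultdict

# Hypothetical sketch: the numbers and the equal split are illustrative only.

class DataCommonsLicensing:
    def __init__(self, members, licence_fee):
        self.members = list(members)
        self.licence_fee = licence_fee        # fee per licence, set by the trustees
        self.balances = defaultdict(float)    # each member's accrued share of value

    def license_to(self, licensee: str) -> str:
        """Grant a policy-approved licence and split the fee among the members."""
        share = self.licence_fee / len(self.members)
        for member in self.members:
            self.balances[member] += share
        return f"licence-for-{licensee}"

commons = DataCommonsLicensing(members=["resident-001", "resident-002", "resident-003"],
                               licence_fee=300.0)
commons.license_to("mobility_startup")     # e.g. training a routing model on smart-city data
print(commons.balances["resident-001"])    # 100.0: value flows back to the data owners
```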


George Zarkadakis

PhD in AI, author of “Cyber Republic: reinventing democracy in the age of intelligent machines” (MIT Press, 2020), CEO at Voxiberate @zarkadakis