
"Surveillance Capitalism" Is Not a Fatality

BLOG - 24 April 2019

Earlier this year, Shoshana Zuboff published her book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, at a time when the digital world had been facing fierce criticism over the use of personal data by its best-known actors. The Cambridge Analytica scandal, for instance, had highlighted the risk of political manipulation through data exploitation, and security breaches had exposed millions of users, weakening trust in platforms. In Europe, the General Data Protection Regulation (GDPR), which came into force on 25 May 2018, made the continent a pioneer in the protection of personal data.

It is in this context that Shoshana Zuboff's book revealed the political, administrative, legal, material and communicative strategies used by major digital platforms to extract data from user behaviour and sell it for advertising purposes. According to Zuboff, whereas 20th century capitalism was based on mass production and the growth of the middle class's purchasing power, 21st century capitalism is based on surveillance, that is, the extraction of personal data without the knowledge of the users who generate it.

the exploitation of data extracted from people's past behaviours [...] allows increasingly accurate predictions of their future behaviours.

The author demonstrates that the value of large digital companies (including Facebook and Google) comes from the exploitation of data she calls "behavioural surplus": data that is not generated by the user via the main service of a platform. For example, the value created by a search engine comes not so much from the words searched as from the exploitation of cookies (small pieces of data containing information about the user, stored in his or her browser by the websites he or she visits). It is these that enable fine-grained profiling of Internet users without their knowledge. Thus, the exploitation of data extracted from people's past behaviours (online, but also and increasingly in the physical world, with the development of mobility platforms, payment services, smart cities, etc.) allows increasingly accurate predictions of their future behaviours.

As a result, according to Shoshana Zuboff, the risk for our societies is the following: it becomes possible to encourage individuals to act in a certain way, without their knowledge, and therefore to control them. For example, McDonald's and Starbucks signed contracts with Pokémon Go to attract players into their stores. Some commentators (Evgeny Morozov, for example, in a long review of the book) considered Zuboff's analysis unnecessarily dramatic and negative. Regardless of the validity of her arguments, her book highlights one of the issues at the root of many debates on technology: technology makes control invisible; it reduces the freedom of individuals without their necessarily being aware of it.

In some cases, these are situations of control that we in Europe overwhelmingly oppose, such as the Chinese government's use of facial recognition tools to monitor the Uighur population. In other cases, however, it is more difficult for us to reach a clear and shared opinion. For example, as we show in the latest Institut Montaigne report on platform workers, Travailleurs des plateformes : liberté oui, protection aussi ("Platform Workers: Freedom Yes, Protection Too"), the algorithms of on-demand work platforms nudge workers toward certain behaviours: take a shift at a certain time, pick up a customer in a certain area, be polite in order to keep one's rating as high as possible, etc. In practice, no one gives orders, but the technology and its creators exert invisible, automated control. Hence the question regulators ask themselves: does this control justify reclassifying these workers as employees?

Another example is the circulation of misinformation. By understanding the logic by which content gains visibility on a given social network, it becomes possible to reach millions of citizens by creating bots (fake accounts managed by automated systems) that relay emotionally charged content. The argument behind most initiatives to regulate misinformation is that the opinions of the populations most affected by false content are controlled by actors making a specific use of technology. Does this control warrant censorship of certain content on the Internet?

One of the reasons why we are uncomfortable with these forms of control is that people using the technology (whether platform workers or Facebook users) do not have visibility into the mechanisms that govern it.

One of the reasons why we are uncomfortable with these forms of control is that the people using the technology (whether platform workers or Facebook users) have no visibility into the mechanisms that govern it (which criteria are used for algorithmic management on the one hand, and for content recommendation on the other). Indeed, the invisible dimension of technology is at the heart of its creators' ideology. As Mark Weiser wrote in his 1991 article "The Computer for the 21st Century": "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it."

Hence, a first step for public policy is to increase the visibility that individuals have into the mechanisms governing the technologies they use. In the recent Institut Montaigne report on platform workers, we recommend that on-demand work platforms disclose their terms of use, the parameters taken into account by their algorithms, and the effects of those parameters (for example, the rating criteria and the impact of these ratings on access to customers). This step is essential for regulating the technology, but it is not sufficient on its own. Change is the result of negotiations between stakeholders, which is why we also recommend that platforms encourage and facilitate the grouping of workers into associations, allowing collective grievances and interests to be expressed.

While the surveillance Shoshana Zuboff describes is a reality, it is not inherent in technological tools. The world she describes is therefore not a fatality (she says so herself). If public authorities are to play a more active role in regulating digital platforms, as Mark Zuckerberg himself calls for, it is essential that they gain more visibility into platforms' practices, so as to give individuals access to the information that concerns them. This is an essential first step to understanding and shaping the structure of "surveillance capitalism".
 

 
