A supervised society – will the West overtake China?

(from left) Anna Turner, David Lyon, Katarzyna Szymielewicz at the 5th Geopolitical Forum 2023, photo: Piotr Skubisz

We talk to Katarzyna Szymielewicz and Professor David Lyon about how we are invigilated by the state and corporations, the details of monitoring systems in China and India, the situation in Poland, and the role of civil society in the fight for freedom.

(The text is an edited and abridged transcript, translated from English, of the debate held on 30 September 2023 in Warsaw during the 5th Geopolitical Forum organised by the Institute of Civil Affairs.)

Anna Turner – Assistant Professor at the Institute of Philosophy and Sociology, Polish Academy of Sciences, where she works on issues related to the impact of new technologies on society. She conducts international research on public attitudes to surveillance and privacy. She is a member of the teams of two large research projects: the Polish Panel Survey POLPAN, where she deals with topics related to threats, and the International Social Survey Programme, where she participates as an Advisory Expert in the development of the Digital Societies module. She is the author of publications, a participant in scientific conferences, and vice-chair of the Digital Sociology Section of the Polish Sociological Association.

Anna Turner: Poland’s history provides a fascinating context for discussions on surveillance. Older people have memories of the times of government surveillance during the totalitarian period, while the younger generation experiences varied forms of control, including monitoring and data processing by multinational corporations. Today, surveillance technologies affect virtually everyone, especially internet users. Let’s start our conversation by asking about your understanding of the surveillance society: can you identify significant moments that have shaped it in recent years?

David Lyon: It’s an important question and there are various answers to it. I think of surveillance in relation to the data that affects us – such a broad understanding has become central to many issues.

The notion of surveillance, which is certainly familiar in Poland, refers primarily to state surveillance. Nowadays, in many if not most countries in the world, the state is strongly linked to commercial activities, to corporations, and so these two entities work closely together. The state is often found to be using data from the commercial sphere. During the pandemic, the Canadian Federal Government bought data from a telecommunications provider (it was mobile phone data) and used it to try to monitor the spread of the virus. This is an obvious example of how the state can rely on data from commercial companies. It is now a much more complicated issue.

So, when I talk about surveillance, I mean an interest in every action, personal life, every activity that reveals to others some information about us, which can then be collected in any way. It’s not just literal visual surveillance, but today it’s primarily surveillance through mobile devices.

Katarzyna Szymielewicz: I will refer to how the mission and scope of the Panoptykon Foundation has evolved over almost 15 years. As David noted, the original purpose of our activities was to analyse surveillance practices affecting citizens by the state. In a situation where we were dealing with the international war on terror and the undermining of any protection of human rights, it became clear to us that we had crossed the line drawn by philosophers such as Foucault and Agamben, beyond which no one is safe from the terror of the state. If we, as a civilisation, as communities, accept (we never did, but politicians presented it as if we did) the killing of someone in defence of our society – this also means that we ourselves can be killed in defence of a society from which we have been excluded. Looking at this dynamic of state surveillance and deciding who fits in and who is outside society – who deserves to survive and who deserves to die – was the first big idea behind our work 15 years ago.

The more we explored the issue, the more we discovered the complex power dynamics mentioned by David, in which the market and the state work hand in hand to justify the need for surveillance of citizens, and to produce the tools and infrastructure to enable it – and that it is essentially the same ecosystem. The turning point in this discussion was the Edward Snowden revelations, which proved beyond doubt that this was the case: that data collected by commercial companies previously associated with freedom, access to information and a reputation as 'the coolest companies in the world’, such as Google and Facebook, was an active part of the surveillance apparatus. It then became clear to us that the main focus of our work should be to look at the practices of these companies. This is not to say that state surveillance is no longer dangerous. It is, but we understood that this is how the world works.

We will probably not be able to replace a surveillance state for citizens with a non-surveillance state, so it is better to create some kind of regulation to protect citizens’ rights. To some extent it is necessary for the state to fulfil its functions, to defend us when we really need defending and to organise public services when we need them, but in the market for online services surveillance should not be part of the package.

Over the past five years, much of our work has been about just such companies and their regulation, not least because surveillance has literally become invisible and elusive for customers. When we encounter state surveillance, we at least know that someone is exercising power over us. We may feel intimidated, threatened, uncomfortable. It is completely different in a commercial environment, where surveillance is sold to us as a convenience: 'Do nothing. Don’t decide. We’ll do it for you.’

I think this new wave of surveillance, based on comfort tools, is linked to people becoming more passive and 'happy’ – they don’t consciously choose it, but they feel happy, withdrawing from active choices and simply giving in to suggestions, recommendations, targeted advertising, watching how companies shape their lives. This is probably much more dangerous for society than surveillance by the state, because it’s much harder for us to stop – to notice what’s going on and try to question it. This is what I see as the main problem with surveillance at the moment.

A.T.: As a researcher, I am fascinated by comparative analysis, especially in terms of what societies have in common and what makes them different. Research in Western countries shows that we have a largely negative view of surveillance and the use of our data without our consent. However, it is worth noting that acceptance of surveillance practices increases when these actions are motivated by security concerns. In other words, I do not tolerate my data being used without my knowledge, but I change my mind when I am convinced that it is necessary for security.

A country with a very different attitude to surveillance from ours is China. I will refer to a study whose findings come from a fascinating book published a few weeks ago by Ariane Ollier-Malaterre, entitled „Living with Digital Surveillance in China: Citizens’ Narratives on Technology, Privacy and Governance”. In the conclusion, the author writes: „Chinese respondents to the survey believe that the development of technology will restore China to its former glory, solving all Chinese problems. They accept surveillance techniques because they see the government as a trusted guardian, almost a parent, who is needed when 'moral quality’ is lacking. In other words, respondents say it is a form of discipline needed to counter chaos in such a huge country”.

We can therefore see how perceptions of surveillance techniques in China differ significantly, both in comparison with studies conducted in Western countries and in the context of the narrative presented by the Western media, in which the Chinese Social Trust System is portrayed as an example of Machiavellian, totalitarian control. What is your opinion on this and is there anything we can learn from the Chinese?

D.L.: The situation in China is fascinating and very different from what we experience in Canada and, I understand, what you experience here. The cultural differences make it impossible, in my opinion, to make simple comparisons between these cultures – as Westerners we are very different from the Chinese, shaped by a heritage of Confucianism. The US in particular, but also many other Western countries, tend to see China as some kind of avoidable dystopia, leading to growing tensions in bilateral relations, with very little understanding from the West of what is actually happening in China. I agree that Ariane’s book is a successful attempt to dispel some of the stereotypes and glaring errors in our thinking about China.

You have highlighted a different perception of state surveillance. This has to do with the national humiliation that the Chinese have experienced in many ways over the last century. The Japanese invasion of the 1930s, for example, is still a source of tension in China. This humiliation affects the way the Chinese think about the relationship between citizen and state. Here we have a fundamental difference with Western countries. When it comes to the Social Trust System, the Chinese are afraid of being shamed because of a low trust score – this shame is, in my opinion, much stronger than in Western societies. We are talking about similar phenomena, but experienced differently due to cultural backgrounds.

What we are dealing with in China is not so much surveillance capitalism – referring to the title of Shoshana Zuboff’s 2019 book – as state surveillance capitalism. We cannot simply apply the diagnoses posed by Zuboff to describe the Chinese reality, as the state factor plays a much greater role there than in Poland, Canada, the UK or many other Western countries. Therefore, before delving into specific issues, let us be careful not to hastily extrapolate Western realities to Chinese realities and not to make erroneous assumptions either about the motivations that drive the behaviour of Chinese citizens or about the goals that guide the actions of the Chinese government. Personally, I am not enthusiastic about Xi Jinping’s rule – but I try to understand his motivations and assumptions taking into account the cultural context in which he operates.

K.S.: I fully agree with David and, like him, I avoid comparing us with China. Such attempts seem downright ridiculous to me. This is not a criticism of your question – I know you are asking it because that is the media narrative, as you yourself mention. China has no particular intentions towards us; that is, if they have any plans, they are not focused on Poland but involve the whole world and big games with much more powerful players. China has become a kind of smokescreen behind which we hide the problems we face in the West, and the statement that „we are not China after all” is intended to shut down a discussion that should be going on. I think this is a fundamentally flawed approach. We should keep a close eye on China. Like David, I am not a fan of their practices as such, but the consistency, precision and prudence with which they are implemented is impressive.

Let me give two examples that I did not know about until I started asking myself questions about China. According to the researchers I spoke to during my own study of the issue, the Social Trust System was designed specifically for social inclusion. So it is the same situation as in other underdeveloped countries, such as India, where people have no identity documents, half the population is uncounted, unidentified and citizens have no identity in relation to the state. These are very different realities from ours because we are counted, identified and monitored. So countries like China and India are creating systems to socially integrate residents so that they can take out loans, travel or receive benefits. We, on the other hand, put our cognitive filters on these processes and criticise the acquisition of information about citizens by the governments there, without being aware of what their starting point was and what challenges these programmes are responding to. This was one of the reflections that came to me while discussing China’s Social Trust System with someone who knows Chinese society much better than I do.

Another example is large companies such as TikTok, which have come in for a great deal of criticism in Europe – and with good reason. Yet in the West we don’t really understand how these companies operate in China: from my observations, they are controlled by the state, whose policies set the direction of their operations. To reiterate, I’m not a big fan of these solutions. But if we have a strong state that is able to control what the monitoring companies do, so that they cannot cross certain red lines – for example, they cannot offer children there what they offer children here: making them addicted to technology, serving them content that children should never watch, manipulating their minds (and clearly this is not the case in China, because the relationship between these companies and the state looks different) – then maybe this is something we could learn from the Chinese.

D.L.: I will refer to the point Kasia has just made. In the West, we often cite the example of China as a kind of dystopia that can be avoided. It’s a path we don’t want to follow because we make a lot of assumptions about it – for example, it is suggested, or even stated, that the Chinese Social Trust System was developed by the Chinese state to track and control all citizens. Yes, the social trust systems that exist in China are primarily run by the government, but they mainly assess the activities of corporations. They do not collect data on individual citizens or consumers.

There is no unified system called Social Trust in China. Since 2014, there has been a plan to develop some aspects of the Social Trust System, which was supposed to be realised in 2020. This has not happened. Across the country, many people have objected to specific parts of it, and in many cities corporations are rejecting certain elements of the Social Trust System because they feel it is inadequate to the task for which the government set it up. So let us not imagine that China has a unified system of top-down, totalitarian control, exercised by the authorities in Beijing. It is far more complex, far more fluid and far more open to contestation. Over time, corporations have opposed, changed and also withdrawn some elements of it.

Kasia also referred to India. India’s population will soon surpass that of China. If we are looking for a uniform system that covers every citizen of a country, why are we not looking at India – which is never mentioned as a country whose solutions we want to emulate or avoid? India has one central, state-organised biometric enrolment system [Aadhaar – editor’s note] – the Indian Prime Minister invited the head of India’s largest technology corporation, Infosys, to set it up. There are currently 1.4 billion people registered with it. Technologically and administratively, it is a staggering system, built at a pace that is hard to believe and based on iris scans (as well as facial photos and fingerprints). The scans of 1.4 billion human irises are in one unified, comprehensive system in India. It fulfils an extremely important role in Indian politics, but is of course also the subject of much controversy. Nevertheless, what we have here is a biometric system that was initiated by the state, but realised with the support of a large corporation, and which effectively covers all citizens.

K.S.: We can also mention companies like Meta or Alphabet, which operate identity-based systems used by billions of people and serve as online identity providers. Perhaps in the future – I hope not – their services will be accessed using fingerprints, which will become the default way to log in. It is just a matter of programming the devices, the mobile phones and tablets, that we use to access these services. This solution makes sense because it is fast, and people like the speed, the reliability, the intuitiveness, the promise that they won’t be hacked – and they don’t even think about handing over their data to private companies, although they are opposed to the state acquiring it. Of course the state will get that data too. It already has it! We are fingerprinted at airports in the European Union and there is nothing we can do about it. My point is that such a future awaits us here too, so it is better to concentrate on the practices of the authorities here and now, instead of making colonial claims to teach others how to protect the privacy of their citizens.

A.T.: I have to admit that I have also noticed the patronising tone of some of the comments in the Western media, indicating rather a complete lack of understanding of local conditions. Research by Ariane Ollier-Malaterre shows that many Chinese citizens are not even aware of the existence of the Social Trust System, and that digital surveillance programmes are not something that concerns them too much.

Emerging at a dizzying pace, innovations are being implemented rapidly and on a large scale. How serious are surveillance practices in the West and is enough being done to control and regulate them from a legal point of view? Is it at all possible for changes in the law, which is usually quite slow, to keep pace and keep up with such rapid developments in technology?

D.L.: We need to think about what is actually happening in our societies. I think Shoshana Zuboff’s work on surveillance capitalism is very helpful in this regard. I disagree with some of the author’s conclusions, but we respect each other. I think Shoshana puts her finger on some really important aspects of today’s surveillance practices, noting that corporations are heavily involved in the collection of personal data and that this data is extracted from our everyday behaviour. So what we are dealing with here is not the action of some alien force uncovering and aggregating information about us. Rather, the point is that it is we, through our online activities associated with the use of digital devices, who are constantly producing the data that is then collected. This data is very valuable and can be monetised. This is how the big digital corporations make money, manipulating the data with algorithms to use it for their own purposes and resell it to others.

I have previously given the example of the Canadian Telus corporation, which during the pandemic sold mobile phone usage and mobile traffic data to the Public Health Agency of Canada. None of the 33 million people whose data was sold to the government knew that such a transaction was taking place. The pandemic was used as an excuse for a quick data grab, and I imagine the people at the Public Health Agency of Canada didn’t even think anyone would ask them questions about it. But they did. I think there needs to be a serious rethink of how citizens’ data is currently captured and collected. Today, even data about our relationship with the state is often collected in the commercial sphere. Of course, there are still government security agencies that have access to and use our data.

Today, however, the key question is about the commercial use of personal data, which can also be used by government bodies – from the police (who love to access data provided by corporations) to public health agencies, security agencies, and a variety of government institutions that often depend on the information. Sometimes data about us is collected by these institutions, but nowadays it is increasingly obtained from commercial agencies and corporations.

K.S.: Let me come back to the question of whether it is possible for the law to keep up with the development of technology: I believe it should not even try. After all, the law should never arise before problems are defined – otherwise we would see it as authoritarian, despotic and dystopian. It would be as if the state knew better than us how to prevent problems before we have defined them. A very good example of a law that was drafted on the basis of sound problem definition is the General Data Protection Regulation (GDPR), as it is built on assumptions that have existed since the 1970s.

We are therefore talking about more than fifty years during which the idea that no data should be collected about a person without a valid reason specified in the law has worked well in our reality. This reason could be the best interest of that person or their consent. But it could also be a policy implemented by a state that is able to justify the necessity of such an action. We would probably agree that this is a very good principle. However, it creates a reality in which the state could implement a hypothetical policy whereby citizens are required to smile when crossing the border – and introduce facial scanning to ensure that more of them smile. We, however, can then say: no, this is illegal. And take up the fight on this basis.

If we as citizens know what we are defending – if we are motivated to defend our freedom – we can win. If we don’t – that is, if we actually succumb to narratives that offer us the false promise of security in exchange for our freedom – even the best assumptions won’t help us, because we won’t defend them in the specific case where our freedom is violated. Or worse, we will allow ourselves to be persuaded that this is a situation in which our consent should not matter.

So much for the state, but let’s talk about a market that has been regulated by data protection laws for decades, and yet companies such as Alphabet (formerly known as Google) and Meta (formerly known as Facebook) have developed an astonishing surveillance apparatus that even China would not have been able to build on its own, without the involvement of commercial companies (as they do now). How was this possible? For one thing, these companies were created in a country where there was no regulation, namely the United States. There is currently a fierce discussion around this issue in the US, with questions being asked about how this situation could have happened. I constantly hear from US politicians and scientists how much they regret it. But there were reasons why these regulations were not put in place – those reasons were economic development and a specific approach to freedom as a default value as long as the harm to society is not obvious enough to limit the freedom of corporations to act. We fail to see how much corporations do to protect this freedom of action of theirs – and the American public has chosen this narrative. It has accepted the actions of digital giants because it has been seduced by promises of comfort, growth, free, attractive services, and so on.

That’s how it started. And then these processes reached Europe and – even though we had our laws and regulations – these big market players managed to circumvent them in many ways, mainly because of their incredible ability to create a narrative. It took a long time for our courts, the European Commission and even NGOs representing citizens to effectively counter them by formulating a narrative to the contrary. Shoshana Zuboff’s book played a key role in this process, so regardless of what I think of her argument, I like the overall way she presents the problem of surveillance capitalism and holds companies responsible for creating this system and circumventing many of the safeguards. Zuboff’s book was one of many warning signs, such as the Cambridge Analytica scandal and the Edward Snowden revelations. Decision-makers understood what they were dealing with – and that the crux of the problem was not shoe advertisements displayed to consumers with their consent on services such as Facebook. That is not the issue here. The problem is behavioural surplus, as defined by Zuboff, and which we have also discussed today. It is about data about us, about our behaviour, our choices, our preferences, which is collected and used without our consent or even without our awareness.

For a long time, companies avoided the consequences – although there were regulations covering exactly these issues – by arguing that it was not personal data. This shows what the real problem with technologies is. Very often we do not understand them enough to put in place adequate regulations or rules. If we had a different narrative and a better understanding of what happens on the other side of our screens, we could have more effectively applied the regulations we already had in place before Facebook was created and stopped these practices. However, this was beyond the reach of societies.

People like us – NGOs, university lecturers, hackers, groups like the Chaos Computer Club in Germany and the Electronic Frontier Foundation in the US – warned that these processes were happening, but it was a niche, avant-garde effort, not fully understood until big names like Zuboff or popular films like The Social Dilemma on Netflix came along and changed the narrative. It took our societies two decades to understand what was happening. Therein lies the problem. I wouldn’t put all the blame on the law, and I would never encourage lawyers to move faster, even before the problems are defined. For too long we presented the issues surrounding the use of services such as Facebook to the public as a problem of individual choice rather than a huge social problem. Now that has changed: we have new legislation, and social harm rather than individual failure is being discussed – but it has taken us two decades. The question is: can we speed up? Can we analyse the phenomena generated by new technologies faster? If we work at the current pace and need two more decades to understand how new services work, this is not a recipe for maintaining freedom.

D.L.: I agree with you, Kasia. It is important to place what we are discussing in the right context. Just as the role of cultural factors in China or India is important, in the West I notice two aspects that I think play a key role in this situation. One is the belief that technology is the key to solving all our problems, which fits with the idea of technological solutionism. This is just a myth, but companies very much want us to believe in it. The second element is our perception of our own actions through the lens of convenience. Convenience has been elevated to the highest value, although in my opinion it should not occupy that place. So when we are sold an iPhone or other such device, the main argument for buying it is usually convenience (not to mention the fact that we pay a few hundred dollars more for that convenience). The idea of convenience has been very effectively implanted in our minds as consumers.

I fully agree with Kasia that cultural factors have been instrumental in the failure of state institutions to introduce regulations that restrict the activities of digital giants. However, the fact that these companies operate as if they are accountable to no one in particular is due to the cultural background of believing that we have the technological answers to all challenges – and the belief that convenience is an inherent human value.

A.T.: When you talk about this, I am reminded of an article you co-wrote with Zygmunt Bauman and others, „After Snowden: Rethinking the Impact of Surveillance”, in which you define three factors that influence the acceptance of surveillance practices: fear, entertainment and familiarity. I have spoken about fear before – it is often invoked by government agencies who argue for the need to monitor data and information to keep citizens safe. Entertainment has evolved from the fairly simple mechanisms that social media initially relied on, such as contact with long-lost friends, to convenience, which has become a key value. Familiarity with surveillance techniques, in turn, is nothing more than the ubiquity of surveillance, surrounding us in so many ways and in so many places that we no longer notice it – yet some of us still try to take steps to protect our online privacy.

This brings me to my next question about accountability. I will refer here to a Eurobarometer survey in which respondents were asked who they thought should ensure that the personal data they provide on the Internet is collected, stored and transferred securely: the government, Internet companies, or themselves? In most countries, respondents said that they themselves were responsible. Doesn’t it seem surprising that people feel they have some control over the processing of their data, even though in reality – knowing the rules of surveillance capitalism – there is little they can do?

K.S.: It’s not about how they feel about it, but what they’ve been told. I see a parallel here to the narratives associated with environmentalism, when it became quite clear to the world’s biggest companies that the problems had been noticed and they would no longer get away with polluting our planet (this was some 20 years ago). The change in narrative – funded by these companies – was often done in a sham way, so that the viewer could get the impression that they were dealing with citizen campaigns to, for example, reduce plastic consumption or reduce air travel. It’s great when consumer behaviour changes to be more responsible, but it’s the last piece of the puzzle, because the real power is always on the side of those who create trends, produce goods and then sell them to us. For big companies, changing the way they produce, e.g. reducing their use of plastic, is a matter of a single decision – whereas consumers are constrained on so many levels by time, economic pressures or a lack of access to other goods that an attempt by companies to shift responsibility for, for example, the climate crisis onto them is simply unfair, and we should definitely fight this kind of narrative.

At the same time, I believe that there are steps that each person can take to protect their privacy. For example: don’t always take your phone with you. Or think twice before installing anything on it, and don’t allow the device to use your location unless absolutely necessary. So there are things we can do, and very expensive devices such as iPhones help us make these choices – because we pay the manufacturer to have more protection. But is this an option available to the average consumer? Not at all. It is a luxury service for the few, but marketed as your choice: 'You want to be protected? Buy an even more expensive device. Think twice before doing something’. This is not fair.

We must attack those who have the power to change the ecosystem, to change the logic of the services and the business models behind the services – the behavioural surplus so accurately described by Zuboff. We should never allow companies to collect and exploit our 'behavioural surplus’ – to them it’s just data, but to us it’s our lives, digitised traces of our lives that should never become part of a service. And that is why the responsibility lies with companies, because as individuals we cannot remove the traces of our lives from the devices and services we use as part of those lives. This is not feasible. I can opt out of sharing my location or receiving notifications, but I can’t opt out of sharing my behavioural data with Google because their services run on that data to some extent. This needs to change and we need to keep up the pressure and demand accountability from the digital giants.

D.L.: I agree that as individuals we could be more careful. Maybe not as careful as I try to be. I don’t have a mobile phone, which is a real inconvenience for people who want to contact me. So, by not sharing the belief that convenience is the highest value known to human beings, I become an inconvenience to others. But that is another story.

The problems we are talking about are not individual problems. We may experience them as individuals, but they are social in nature. How we are perceived by digital corporations is not just based on the data stream coming from us, but also depends on those we are connected to and interact with. It is membership in groups that builds our profiles. When you are online in any way, your profile is built from your contacts – both business and personal.

No one pretends that it is just about us as individuals – and it is really important that we realise this. This is an area where civil society has a key role to play. It is civil society organisations, such as the Panoptykon Foundation, that take up these issues and propose solutions to the problems they diagnose. In the US, computer scientist and activist Joy Buolamwini founded the Algorithmic Justice League to help programmers who create algorithms understand that social justice issues are built into the way algorithms are made, and that algorithms themselves can be grossly unfair and discriminatory. Linnet Taylor addresses questions of data justice by thinking specifically about those who are economically disadvantaged and who tend to be disproportionately harmed not only by the position they already hold, but also by corporate profiling. Civil society action takes us away from thinking in terms of the individual as opposed to the state. Civil society groups are always looking for ways to alert government authorities – who have a responsibility to citizens – that a particular type of technology is having a negative impact on certain groups in society, on their life chances and certainly on their development as human beings.

A.T.: My last question is to Katarzyna Szymielewicz and concerns the situation in Poland. Does the Polish state surveil its citizens without their knowledge and consent, and if so, to what extent? And how does this relate to the recently proposed changes to the Electronic Communications Law?

K.S.: The Polish context is not unique in our view, and we have been studying it for more than a decade. The scale of state surveillance in Poland is not shocking. However, it should be noted that we have less and less clarity on this issue. When we started our activities as the Panoptykon Foundation, we obtained information about the scale of surveillance by filing requests under the law on access to public information. Later, a law came into force requiring the state to publish statistics on how, and to what extent, state services use surveillance tools. This solution was in place for a long time and was only recently changed under the current government [at the time, the ruling party was Law and Justice – editor’s note]. The information we have, which is not very detailed, points to a large number: 1.8 million data points on citizens, based on data retained by telecom companies.

So it’s not about listening in on phone calls or reading text messages or emails; it’s about locating devices: who has been talking to whom, which phones are travelling together, and so on. Such large numbers are usually due to the way mobile phone base stations (so-called BTSs) work. If the police want to check whether a particular device was in a particular location at a particular time – or which devices were present there together – they usually have to collect data from the entire location, and thus access huge amounts of data. I am just giving an example of how government services use data; I am not arguing that 1.8 million data points is fine. I don’t know whether it is.

In our view, the issue is not how many times the services checked someone or how many data points they technically obtained, but how the data was used, whether the scale of the action was fit for purpose and whether data that was irrelevant to the case was immediately deleted without any other consequences. However, if we consider another scenario, in which the police use as a pretext a bomb alarm or other event that can be easily generated to capture data from a single BTS in the centre of Warsaw – thus creating a pool of data to then use operationally – then here we already have a worrying situation. To summarise: the scale of data acquisition by state investigative and intelligence services is not very worrying to me, if I understand how it is then used – which we do not know.

In Poland, we are currently facing the problem of a lack of effective oversight in this area. We have courts which decide on telephone tapping, but this is on a completely different scale – thousands, not millions, per year. In practice, the courts receive applications that are poorly justified and too lacking in detail for the court to review the case responsibly. And because decisions have to be made very quickly, the result is that 98% of applications are accepted – meaning the process is almost automatic, and as such it is criticised by members of the judiciary themselves. Judges are under pressure and – in practice lacking the tools to thoroughly review a request – tend to accept it, reasoning that if the services misused the data, this will be verifiable while the case is pending, as data obtained under the control of the court becomes part of the case file.

We can therefore assume that if data is misused, this should become apparent to the judge as the case develops – and when the case is closed, the data acquired by the services for the investigation should be destroyed. Is this happening? Probably not. In Poland, we had a big scandal over the use of Pegasus spyware by the services against, among others, opposition politicians. In addition to tracking, eavesdropping on and observing a smartphone user in real time, this software allows access to all information stored on the device, as well as staging provocations: planting compromising content and creating content that never existed (e.g. emails in the user’s email account). In our opinion, the use of this type of software should not be put on the same level as phone tapping. I think we would win this argument in court, but so what? These things do happen.

Therefore, as I have already mentioned, the fundamental problem is not the scale, but the possibility of holding the perpetrators of abuse accountable, which does not function in our country. The Panoptykon Foundation has brought a case on behalf of myself, my colleague from the foundation – Wojciech Klicki – and several other lawyers who have reason to believe that we have been under surveillance for some time. We argue that we should have been informed of this when the investigations were closed. We hope that the European Court of Human Rights in Strasbourg will confirm that this standard should apply in every European country – and that we will have a law in Poland that obliges the services to notify those under surveillance when investigations are closed, so as to increase the accountability and transparency of these activities. This is one example of the legal safeguards we do not have.

As I mentioned earlier, we also lack effective oversight of digital surveillance: access to data stored by telecom companies is obtained remotely, without any involvement of the judiciary. The authorities also attempted to change Polish law to make it even more lenient and flexible for law enforcement agencies – however, this was stopped as a result of public protests in which we, as the Panoptykon Foundation, participated. The aim of that bill was to extend the existing data retention mechanism to online services, which would certainly have increased the pool of data available for remote access without any oversight.

So the government services would gain access not only to my phone data from telecoms operators, but also to the data held by all internet service providers – which would reach much deeper into our lives, allowing them to see the logs of potentially every activity undertaken online: every email sent, every chat conversation, every instant messenger message, and so on. At the moment, people who are concerned about being surveilled through data retained by telecoms companies can use more secure chat apps such as Signal or Telegram. These are controlled by neither telecoms companies nor Big Tech, and their users feel that there is at least this one space of secure communication left to them. If these services were subject to the same data retention obligations, we would lose them. So the fight is still on, and for the time being we have stopped this attempt in Poland.

A.T.: Thank you for the discussion.
