Do you expect the machine to solve the problems? In this wide-ranging interview with the Director of the Open Rights Group, members of Open Democracy discuss bulk collection, state bureaucracies, the pre-crime era and trust.
Rosemary Bechler (RB): Few of us understood the full import of what Ken Macdonald QC, former Director of Public Prosecutions, was saying at the Convention of Modern Liberty in 2009 when he referred to the then just published paper by Sir David Omand on the effect of modern data mining and processing techniques on intelligence work.
Omand had stated that public trust in the organs of the state was going to be crucial, because from then on, “Finding out other people’s secrets is going to mean breaking everyday rules of morality.” This was Ken Macdonald’s response:
Now, what the paper completely fails to address is how that precondition, that essential public trust, could possibly survive a system under which the security services were empowered by law to routinely trawl through the private communications data of vast numbers of citizens suspected of no crime, simply in order, as Sir David Omand puts it, ‘to identify patterns of interest for further investigation’. How would the public regard their security services in that world?
Of course, such a world would change the relationship between the state and its citizens in the most fundamental and, I believe, dangerous ways. In all probability, it would tend to recast all of us as subservient and unworthy of autonomy. It would destroy accountability and it would destroy trust.
This is for one very simple reason: because to abolish the distinction between suspects and those suspected of nothing, to place them entirely in the same category in the eyes of the state, is a clear hallmark of authoritarianism.
How do you respond to that, Jim, and what do you think has happened between 2009 and now to the response of people across the political spectrum at the Convention who nodded to that sentiment, maybe, as I say, without realising its full import?
Jim Killock (JK): It is much easier to oppose something when it hasn’t apparently happened: to anticipate the problems and say, “We don’t want this kind of power to exist”.
As soon as you’ve materialised that power, and that is what has happened under ‘bulk warrants, bulk collections’, it is much harder to say, “Well actually the billions of pounds that you have invested in this system, the integration with the NSA that you have done for strategic reasons – that must stop. I wish to oppose this, to dismantle it, and essentially wish you to turn your back on the investment you have put in.”
It’s also harder because, when the system exists, rather than just posing the question of harm, you have to prove that harm is taking place. And because people’s lives don’t appear to have fundamentally altered – there doesn’t seem to have been some seismic shift – it becomes harder for people to imagine that a real abuse is occurring.
If you think that there is a problem with bulk collection, and I do, it makes it harder to pose that question in mainstream politics and change that behaviour.
RB: Do you think the vast majority of politicians have been rather craven when it came, for example, to the rushing through of the emergency DRIP Bill with cross-party support in July 2014?
JK: On a very pragmatic level, politicians have to understand whether any of these systems work and are really solving problems, to the extent justified by the money being spent on them. We have a budget for GCHQ that probably runs well over a billion pounds a year, taking the lion’s share of the security budget – probably more than 80% – we are not allowed to know more precisely than that.
That poses a simple enough question. Is that actually delivering results, and if so, what results? Is it saving lives, as people claim? Look at the cases we have had of actual terrorist atrocities in the UK, France and Denmark: in each case the suspects were already known to the authorities. They had come to their attention in different circumstances, usually several times, for different kinds of crime – not only terrorism, but drug offences for example. If that is the case, is it more data that is needed, or more investment in human intelligence?
And since human intelligence is costly compared to data mining, and not receiving the same level of resources by a long way – is it actually our politicians who have been putting us at risk? Have they been misjudging the risks and exposing us to terrorist atrocities through this miscalculation?
Cameron has said, as is often said, that he didn’t want to be the Prime Minister who didn’t give security agencies sufficient tools to prevent terrorist atrocities. But if he doesn’t know for sure whether that money is being spent in the right places – then potentially he is that Prime Minister who didn’t deliver the necessary tools.
RB: In the interview that Anthony Barnett conducted on oD with William Binney, former Technical Director of the NSA, Binney says that fifteen years earlier, at the NSA, he and his colleagues were already arguing: “there’s no point in collecting it all. It swamps your capacity to identify what you need to know. And it is unconstitutional in its consequences. What they are really doing is stacking up information about everyone for later law-enforcement; it is not primarily about terrorism.” He blew the whistle on it before resigning, because he thought that the choice to go into the bulk collection of foreign data “deprived the NSA of understanding what it was monitoring and this permitted the planning of 9/11 to escape them.” He resigned because he thought the only rationale behind this was totalitarianism.
JK: This leads us to another important question. What is the nature of the risk that the security agencies are trying to combat? There are many kinds of risks, and they only talk about some of them. They tell us about the extreme terrorist who is going to kill innocent people. Actually there are other kinds of foreign policy risks they may be managing, and that they are given strategic powers to deal with – economic risk, threats to our national economy, which in practice may mean something more akin to industrial espionage, as certainly seems to be the case in the USA. Then of course there are various kinds of political surveillance. We know about climate negotiations, for example, where attempts have been made to break into the phones of negotiators and listen in on what other people are trying to do. There is a question mark over whether that is legitimate or fair or reasonable. Of course people are going to expect governments to play these sorts of dirty tricks – that doesn’t make it ethical.
On a wider level the agencies simply see absence of information as risk. It is the Donald Rumsfeld school of intelligence: there are the known knowns, the known unknowns, and then the unknown unknowns. The agencies say to themselves, “If there are things we don’t have information on, so that we are running risks we can’t quantify, then the answer is to gather the information.” Sometimes they want to say, “Well, let’s gather all of the information all of the time in order to reduce and eliminate these risks.” That of course leads you inexorably towards bulk collection of everything, breaking into all the technologies, compromising whatever you can. The attempt to know everything about everybody on a global level is a totalitarian power, whether or not you think that power is being exercised in order to reduce or to limit democracy. Even if you don’t think that, you certainly have to say that the agencies at this point have a desire for zero risk, and are intent on eliminating it at whatever cost, by removing as much privacy as they have to.
Guy Aitchison (GA): Apart from the logics at work with state bureaucracies and the security agencies whereby they wish to accrue as much power as they can – is there another logic at work in the relationship between the state and technology companies? Is there a revolving door of the kind we see with health and private healthcare companies?
JK: I am no expert on relations between the companies and GCHQ, but it’s well known that these relations exist. Snowden, of course, was a contractor, and you have companies like Lockheed Martin and the Deticas of this world essentially making money out of government contracts. Their business model is to sell technologies to governments who pay very well. The defence and security sectors – particularly in the UK – do lack proper oversight. Parliamentary oversight focuses exclusively on the legality of what GCHQ are doing.
Imagine if the Parliamentary Health Committee only ever asked, “Is the NHS breaking the law?” Imagine the lack of debate which would ensue: “Well, they are not breaking the law, so we don’t have to worry if people are dying, or if NHS money is being misspent or if companies are providing inappropriate services. They are not breaking the law, so why are you worried?”
The same abuses can occur in relation to the security agencies: corruption, problems with contractors oversupplying or overselling or exaggerating their technical capabilities, or even just happening to work in the same building and suggesting that maybe their company has got a solution to the latest problem people are encountering. All of that can go on, but the oversight mechanism, the ISC – never mind whether it is equipped to do so – is not even attempting to ask those questions of efficacy or rationality at all. It is just not part of their equation.
Our oversight model needs to ask first of all: what are their risk models? And what is the cost-benefit analysis? When you start asking questions about cost-benefit analysis, you start to uncover whether people are overselling, promoting the wrong technologies or the wrong methods. Why are you spending one hundred million pounds on this project? What was the justification for that? It is only when you start understanding those two core concepts that you are going to get to these questions. And our system is not even slightly equipped to deal with them.
The NSA Strategy called this the ‘golden age of SIGINT’ (signals intelligence). What was SIGINT? Initially it was telegraphs being sent, usually between governments or very high-powered individuals. Now, SIGINT covers your mobile phone, and the way that you talk to your mates to arrange to go to the pub. It is all of that, plus now it is also information about your heart rate and your state of health, plus your smart meter telling people whether your fridge is on or off and whether you are in or out. SIGINT is every detail of your life more or less. So it’s no wonder that they are calling it the ‘golden age’.
At the same time these agencies are telling us all about the internet going dark and ‘loss of capacity’. So, there is a tangible kind of greed and fear of not knowing. Despite the huge glut of information, the many sources and multiple access points for that information – whether it is a network or a computer, another person’s computer, or a company that holds the data – what they want is to be able to access all the data constantly and at all points.
GA: With the vast amount of information available, we almost have the technological capability to mind-read, don’t we? People look at developments in neuroscience and ask themselves when we will have that capability. But no, we have it already. It’s called the Cloud. You can access people’s innermost thoughts and you may be able to predict their behaviour and their ideas better than they can themselves. That kind of omniscience strikes one as a new god. It is even worse than classic totalitarianism, because you are not going to be able to know where it goes next, are you? That radically changes the power relations between people and the authorities.
JK: It is incredibly tempting for politicians. We are entering an era of pre-crime, more or less, where you can say, “Let’s work out who the criminals are before they have even thought about acting”.
Bulk collection of data is justified in the following way. We are told that it is not really bulk collection – “we just have to get everything off the network, and collect and combine in order to reassemble and get the bits we need.” That has a small degree of plausibility to it. But a legitimate question that ought to be asked is: is it a cost worth paying, given the huge cost of collecting everything? Secondly, are there better ways of collecting the information – through selection, rather than collecting everything and holding it for seven days and all the rest?
But then when you look at the real purposes to which GCHQ are putting this process, you find that they are literally fishing – they are running multiple queries over days to get information out of the data set that they have accumulated. Whatever is in their three-day buffer, they repeatedly run queries to get certain information out of it. Then they retain the whole buffer for a further thirty days so that they can run more queries on the metadata. That means essentially profiling, delving, finding one contact, moving out and finding more contacts, seeing whether one social group might be mapped onto another social group, combining and recombining their queries.
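To make that ‘delving’ concrete, here is a minimal sketch of what contact chaining over retained metadata amounts to: a breadth-first walk outward from one seed identifier. The records, names and hop limit are all invented for illustration – this is not a description of GCHQ’s actual systems.

```python
from collections import defaultdict, deque

# Hypothetical metadata records: (sender, recipient) pairs of the
# "who contacted whom" kind that a buffer might retain.
METADATA = [
    ("alice", "bob"), ("bob", "carol"), ("carol", "dave"),
    ("eve", "bob"), ("dave", "frank"),
]

def build_contact_graph(records):
    """Index each identifier against everyone it has been in contact with."""
    graph = defaultdict(set)
    for sender, recipient in records:
        graph[sender].add(recipient)
        graph[recipient].add(sender)  # treat contact as bidirectional
    return graph

def contact_chain(graph, seed, hops):
    """Breadth-first expansion: every identifier within `hops` of `seed`."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        person, depth = frontier.popleft()
        if depth == hops:
            continue
        for contact in graph[person]:
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, depth + 1))
    return seen - {seed}

graph = build_contact_graph(METADATA)
# Two hops out from a single seed already sweeps in most of this network,
# which is why repeated querying amounts to profiling whole social groups.
print(contact_chain(graph, "bob", hops=2))
```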
So at this point, it becomes extremely tempting to politicians, because suddenly you can track down more criminals and anticipate new constituencies to investigate over time.
But at the same time, you are subjecting everybody to surveillance, profiling anybody and everybody, and sharing all of that data with other people who have even fewer constraints on what they do with it than we do.
Tempting as those options are for politicians, the question really is, why on earth didn’t we have that debate about just how valid those points were that Ken Macdonald was raising back in 2009? Should we have contemplated this power, shared it out between the agencies, without actually having the debate first? There is a huge failure of democracy here and in the States over this. Surely this was something we could have expected parliamentarians to vote on. It isn’t just something a society should go in for without a serious debate.
RB: Think about the irony of the Murdoch press hacking scandal in the context of the bulk surveillance that we are discussing now, and the way that a public reckoning about that completely unacceptable intrusion into people’s lives was neatly deflected into a fracas over whether or not regulation would curb the freedom of the press. Or think of RIPA and DRIP. This isn’t just an absence of debate we have gone through, is it? One gets the impression of a system – whole systems internationally – working rather astutely to make sure that questions don’t arise, or are booted into touch pretty sharply if they do. Does this accord with your sense of what has happened?
JK: In relation to the Snowden revelations I think there has been a lot of ‘management’ going on. It’s partly a result of how the establishment talks to itself. I suspect that there were plenty of people briefing editors and saying, “This is a really strategic concern. It’s extremely damaging and is going to create opportunities for terrorists. It will make the internet go dark.” I’m sure those arguments were put, without talking about D-notices and the effects they may have had, or taking into account such factors as rivalry with The Guardian – which I’m sure also played its part. The BBC, I’m sure, has reasons to be cautious. If you look at where the lack of caution was, it was The Guardian, maybe the Independent, Channel 4 – the liberal opposition and not the right-wing press, which in other circumstances is often very concerned about state intrusion. Here, they weren’t.
GA: In the States, there seems to have been more mainstream debate, relatively speaking, compared to the UK. They’ve even had talk shows talking to Snowden…
JK: But the Americans complain about their debate too, as do the Germans. And it’s true, all these debates have been lacking. It’s very hard for any society to face up to an existential threat in its relations with an apparently democratic state. It is counterintuitive, for one thing: why should a democratic state really be endangering itself in this way? Can we really accept that these threats are as profound as they appear? Are we just worrying too much? So there are lots of reasons why it is very difficult for people to approach the problem.
The biggest problem in the UK is that we have a press and a political class that manages itself. In the UK, the power of the Executive is very strong – and in this particular case, the Liberal Democrats were relatively quiet, disinclined as part of government to create a huge chasm over something their government had effectively already signed up to; the Conservatives are in power and of course are going to support their own government; Labour don’t want to question this because they actually authorised some of it to begin with.
Then, you have to say that our institutions, GCHQ in particular, are bound up with the NSA. GCHQ shares the same technology. It is even paid to a certain extent by the NSA to develop technologies. They implement programmes together to their mutual benefit, like Operation Socialist, where they took over the networks of Belgacom, and the attack on Gemalto’s systems to steal the encryption keys for SIM chips. You can’t separate, therefore, the UK’s infrastructure, programmes and specific acts from the NSA’s. And if that is the case, UK politicians don’t really want to ask too many questions, because then you are going to be faced with realpolitik. Do you really want to smash up the special relationship and our strategic arrangements with the US? If not, you are just going to follow the US lead. It’s very difficult. In the US, at least they are your own programmes and your own government, so you have more leeway; and then, of course, there is the whole aspect of the relation the citizen has with the state under the US Constitution for Americans to draw on. It’s a big country and the state has a lot of power. There is more scepticism about the overweening state in the US. In some ways that’s quite healthy.
GA: Do you see any reasons for optimism in the way the election landscape is shaping up in the UK?
JK: Yes, because one of the ways in which the security agencies have got away with minimal regulation for years in the UK is thanks to the two-party system. It’s worth remembering that the agencies were only put on a legal footing in 1989 – really recently. The codification of investigatory powers was only drawn up around the year 2000. Democratic and legal control through proper constraints in any sense is barely thirty years old. That is extraordinary in a democracy. How was that situation allowed to go on for so long? If you only have a couple of ministers to persuade within a cabinet, within a strong Executive, within a Parliament where they dominate, it is easy to keep control and make sure the questions never really arise. If instead you have to persuade several parties that have to decide together and consult with their backbenchers to keep the whole thing rolling, you have to make sure that safeguards are in place – and you have to be more careful. For now, the relatively weak executives seemingly in the pipeline are a good thing.
GA: We have touched on freedom of speech, privacy, political organising, dissent. But there are broader questions to do with how big data can be used to manage populations. Evgeny Morozov, well known for his pessimism with respect to the internet, has an interesting concept of ‘algorithmic regulation’ – whereby Silicon Valley tech companies offer technical solutions to political problems, essentially using data to try to shape individual behaviours and create certain incentives in order to prod people in certain directions. So, he argues, the whole idea of democratic disagreement is being replaced by this kind of nudge theory and tailored service provision in areas such as crime or health policy.
JK: This is another huge area. The idea of tailored services is again beguiling for state actors, but probably won’t lead you in the right direction. Because, what are you going to say? Let’s profile people who look potentially troublesome, and then we’ll send social workers in to interview them, or maybe set them up with some compulsory or semi-compulsory programme of courses they must attend. The idea that you can use data to profile people who are in some way a risk, then target things at them that will deliver you better outcomes, sounds great! But actually you are talking about people. And the way that you get people to behave better is by changing their circumstances and dealing with them as individuals, so that they can get themselves out of their own problems. This is just another way of saying, “Let’s have targeted benefits, means testing” and so on.
GA: I was just thinking in terms of the health system of the future, someone turns up at the hospital and is told, “Well look here, we have just checked, and you haven’t been recording the right amount of miles on your treadmill or you smoked ten years ago…”
JK: Or we can tell from your bus data that you aren’t walking enough… A lot of this is very wrongheaded, because ultimately social change does not arise at a micro-level when initiated by the state. It might work when initiated by people. But not through state intervention.
GA: What are the rights and claims that are important in this context?
JK: Privacy is hugely important.
GA: What about the right to be forgotten? The European Court talked about this right recently.
JK: This was a very controversial judgment and I’m not sure it was as important as people made out. But it touches on something wider and very important: digital appears to mean no forgetting at all.
It appears to mean permanent records of everything, to a level of intensity never previously anticipated. And of course the ability of societies and individuals to forget is very important for social norms of all sorts. If one had a perfect record of everything one’s partner had done or said in some conversation, it would be impossible, wouldn’t it, to have human relationships? So the idea that the state is going to remember everything you did in order to hold it against you – or for you – is very, very disturbing. It sums up a world in which people don’t have second chances and are discriminated against.
GA: That kind of judgmentalism already seems to be in preparation in the digital world. Digital cultures – Twitter plays a role in this, it seems to me – are very intolerant of the fact that human beings make mistakes.
JK: In human culture you have to hope that people start to adjust to this and get to a point where you say, “Look, we can’t hold people responsible for something they did 20 years ago. If we do, we end up with politicians who are hermetically sealed until they are old enough to be elected, so that they don’t make mistakes for 20 years. This reduces them until they become grey and bland and have never been touched by sunlight!” You hope that people understand that real life can’t be like that.
But when a machine is involved in making these judgments, it is going to have many, many potential points of reference, and algorithms create a situation where you don’t really know precisely what is being judged or what has been put into the system. That is partly what is disturbing about government by algorithm.
Take two examples. Government ultimately has to use algorithms to model climate in order to understand climate patterns. We want them to do that. But the critical thing is to know what the algorithm is predicting: how is it justified; what model of climate change is built into it; what risks of error are built into the algorithm that is predicting climate change? This climate model, therefore, has to be exposed to the public, and in the UK it is. There is a scientific debate about how that algorithm is constructed and what the model is doing. That’s good, because you are able to query it and decide whether it is right or wrong. You are saying: the model says that we could run the risk at two degrees of climate change, and these are the reasons why we might do that; or you could run it at 1%, and these are the political implications. You are essentially deciding whether the political context would allow you to do one thing or another. Maybe politics isn’t up to this kind of decision-making. But ultimately this is a political decision. You are not just trusting to the machine. Nobody in their right mind would argue that on climate change we are leaving the decisions to the machine.
But apply the same logic to road transport. We find politicians saying, “The machine says we are going to have to deal with more cars, so we have to build the roads.” In the case of road transport you have models where the algorithms may be proprietary and not available, but they are being made to determine government policy nevertheless, because it is convenient for politicians to say, “The reason we are building these roads is that the computer said so”, rather than, “I decided that we would have more roads because I want to see more roads.”
These examples show that the interaction between modelling, algorithms, transparency and politics can lead to two very different kinds of discussion. The key thing to learn with data is that it is very possible for people to push the decisions well away from you. And if algorithms are involved, that makes it even harder to deal with. The solution is to make the data and the modelling as transparent as possible.
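The difference transparency makes can be seen in a toy sketch: if a projection model’s assumptions are published as named parameters rather than buried in a proprietary system, anyone can re-run it with different inputs and see how much of the ‘computer said so’ conclusion rests on one contestable number. All the figures below are invented for illustration and do not come from any department’s actual model.

```python
# A toy road-traffic projection with its assumptions exposed as named
# parameters instead of hidden inside a black box. Numbers are invented.
ASSUMPTIONS = {
    "current_daily_vehicles": 40_000,
    "annual_growth_rate": 0.02,   # the contestable political input
    "years_ahead": 20,
}

def projected_traffic(current_daily_vehicles, annual_growth_rate, years_ahead):
    """Compound-growth projection: the entire 'algorithm', made inspectable."""
    return current_daily_vehicles * (1 + annual_growth_rate) ** years_ahead

baseline = projected_traffic(**ASSUMPTIONS)
# Because the growth assumption is published, anyone can vary it and see
# how strongly the "we need more roads" conclusion depends on it.
halved = projected_traffic(40_000, 0.01, 20)
print(f"baseline: {baseline:,.0f} vehicles/day, halved growth: {halved:,.0f}")
```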
When you have personal data involved, of course, a whole set of other questions arise, as we have seen recently with Care.data. You can’t simply make it available to lots of people. Questions of trust and ethics are just as important as the data and the outcomes, because you can’t have any of this without all those things.
You have to have trust and ethical procedures, and people must be able to have autonomy. If you try to take any of those things away, you end up with the whole project collapsing and people refusing to cooperate. You end up not getting the data because nobody is prepared for you to use it in that sort of way. So we must differentiate between cases where personal data is the core issue and cases which are more systemic and performance-related – about things rather than people.
We were told back in 2010 that open data was going to make government really transparent and improve the way that government was done. Government did do a lot to open up the data it found easy to publish. What it hasn’t done, in my view, is follow up with the more important questions: first, what data do people really need in order to understand the outcomes that government is creating? Secondly, if we are processing this data through algorithms and models, how do we open those up to scrutiny so that people really understand what government is saying?
GA: What about the way in which corporations use big data to micromanage their employees?
JK: This has been omnipresent for a number of years. There are lots and lots of tools for employers to see who is lazy, inefficient, bad at their job. You have different levels at which these problems exist. At JPMorgan you are talking about highly salaried people who are risky assets for companies because of the knowledge they hold. Then you have people at the other end of the social scale who are being RFID-chipped to check that they are stacking the shelves quickly enough, not taking long toilet breaks, and that their fag breaks are short and they return to work. Why is this being done? Well, this is rather different. This is being done because they don’t want to spend the money on training supervisors. They don’t want to have to trust their individual employees. Staff turnover is high, so high degrees of monitoring essentially do the management without investing the resources in treating people like human beings. So it is a very different set of reasons for monitoring.
In both cases you have an underlying lack of trust. That is the ultimate message that all surveillance sends. We don’t trust you. And secondly, please change your behaviour or feel intimidated into behaving in the way in which we do want you to behave.
You have to ask, is that a good way to build a society?
RB: So, we are back where we started with the issue of trust, and as you also said, you have to have a certain level of trust for this kind of data extraction to work. Isn’t there rather a paradox here?
JK: Yes. GCHQ, MI5 and MI6 do rely on certain degrees of trust. They need the trust of the whole of society to allow them to get on with their work. Secondly, they need the trust of some of the groups who are right in the problematic front line to report people to them, and to decide that they should do so. One of the big problems in Northern Ireland occurred because the security agencies very quickly lost the trust of the nationalist or Catholic part of the population; that community was not prepared to report people to the authorities, and effectively preferred to shelter criminals because they thought that was the ethical choice. It didn’t matter if they were dangerous and criminal: it was also wrong to report them to the authorities because of the sorts of things the authorities might do to those people, whether innocent or guilty, whether the suspicions about them were justified or mistaken. So you just didn’t do it.
If you undermine that trust you stop policing by consent basically. You end up with worse outcomes. So everybody has to be incredibly careful about maintaining trust – that is how society is built.
GA: Work itself never seems to stop, thanks to digital. Do we need employee rights to disconnect, to free time, to not be switched on all the time? There was some story about a new French law enabling certain workers not to receive e-mails from their workplace after a certain hour.
JK: Another property of digital is the ability to connect all of the time, the connectivity, the ease and cheapness of communications. People shouldn’t feel obliged to check their emails all the time. Of course some people will want to and it is hard to legislate against it, but at the same time responsible employers should say, “I don’t want to work my employees to death. I want them to be in place in five years’ time, and to have a fruitful rounded life so that they can contribute to my business or organisation effectively”, as opposed to just wanting to squeeze everything out of them.
This is again about attitudes and the question of what kind of society we want. What we don’t want is just to be led by the nose by technology. A lot of the applications and social media that we are using are driven by a desire to attract your attention and to make you continue to engage with the product by constantly asking for your attention. That incentive ultimately helps sell advertising. Similarly with profiling where you have to know more about the person in order to sell more advertising.
The overcollection of data is related to both of these phenomena. They are dynamics partly of the technology, but also of profit-driven systems coming together with ones where people are ‘choosing free’, and therefore paying through their use of the product rather than through a subscription or whatever.
Obviously we are not going to get rid of technology, since technology offers us lots of possibilities – so we should have a genuine debate about the society we want, first.
GA: It has been said that technology in itself is neither good nor bad, but neutral. And yet when introduced into unequal societies, its logic is to empower the already powerful. They are going to use it for their own purposes first, even if the internet is also empowering people to challenge them. But when we come to think about questions of what kind of society we want, to what extent, broadly speaking, do you think we should be relying on the language of rights as a defence against these types of abuse?
Is it enough? Or do we need a more positive collective project? Tim Berners-Lee with his Web We Want project – good name, it’s catchy – is clearly inviting us to reimagine the internet and the role it has in our society, and not just to be left fighting on the back foot and on the defensive.
JK: You have to say that technology makes things more efficient – you can see that on all sides. But the question is: do you make things more efficient in order to reduce work? Or to maximise work, or maximise profit? You have a social choice about how you want to see the benefits distributed.
Rights must be a starting point, I think. You do need those very legitimate questions to be raised about where you want the benefits in society to occur. How do you want these things to work, where do you place the limits, who do you place the limits on? Sometimes rights are very important because they explain to you how you might create a strategy around deciding where power lies.

The right to privacy, for example, implies informational self-determination, leading to the ability to control information flows and decide who knows what about you. In the digital age, somebody knows a lot about me. The question is: can I get that information back? Can I use that information? Can I get it deleted? Can I take it somewhere else? Can I trust how it is used in the future? Where does a copy of this go, and who else gets it? Rights of informational self-determination – the ability to decide on these things – become important. They are implied by the interaction of privacy and digital, but they need an intellectual thinking-through in order to get from A to B. You can’t just assume that because we have a right to privacy you are going to get extra things like the right to delete or the right to get information back.

At the moment our Data Protection Act says that for £10 you can get a paper copy of the data. That is obviously insufficient in the current digital age. But if you did have those rights simply to get everything back or everything deleted – well, that does give you power. It allows you to move from company to company or service to service and to choose who is going to supply you with products. You start to determine the sorts of relationships you are going to have.
Once you begin to see the direction in which they might go, how do you then get those rights? That takes us back to political will. In the EU there was some political will for some of this. But now the nation states have started trying to block any of these changes in data protection. They don’t want it. They want less data protection and they want the companies to have all of the benefits because that allegedly benefits innovation!
Well, it benefits a particular sort of innovation. It allows the companies to decide how the data is going to be used, and it minimises all the trouble that they might have if they have to go back to the people whose data it is, asking them how this data ought to be used. But is that the kind of society we want?
As to the really interesting question of whether digital technology is neutral or not: as an idea or a system, it may be neutral. But when technology is implemented – as in the technologies that GCHQ is employing to amass data from pinch points on the internet, or in the sense of Facebook or Twitter – clearly digital technology is not neutral. Facebook is clearly there essentially for Facebook’s benefit. And it defines the uses that we can make of it in order to benefit itself. So in that sense, you have to say that Facebook as a technology is not neutral. If you are a developer using Facebook you will know that it is not neutral.
Is your mobile phone running Apple’s iOS or Google’s Android neutral? I think you’d have to say, well no, they are not. But in that case does that matter?
The question of who is benefiting is really important. Why? What is our relationship? Is it fair? Are we in control of the technology? Are we able to decide how to redeploy it or to alter it? Because if you haven’t got those rights, the technology is certainly not behaving in a very neutral way, and you are likely to be having to make some quite big compromises around it.
All of this is profoundly philosophical and it needs a great deal of attention, and people are not giving it the attention that is necessary. The question is: Do you expect the machine to solve the problems? I’m not sure that is a good idea. Do you expect law enforcement to deal with it? It is probably a large part of the answer, but this is also a challenge that implies the need for social change.
There are many different facets to these problems. But the critical thing, whatever the problem, is not to react at a knee-jerk level. Do think it through from a principled point of view, and do also appreciate the key affordances of the technology – the key factors of how it behaves – in order to have a good critical analysis. This is one of the big issues of our age.
This post is part of our Great Charter Convention series, hosted in collaboration with Open Democracy, IPPR and the University of Southampton. This post originally appeared on Open Democracy.