Acting responsibly in Cyber Space – with Marcus Willett CB OBE (Ex GCHQ Director of Cyber)

In July 2021, our Managing Director, Paul Harris, caught up with Marcus Willett to discuss some of the biggest issues affecting cyber space, especially around state-led cyber operations. Marcus led national cyber programmes during his career at GCHQ and is now, amongst other things, the Senior Adviser for Cyber at the International Institute for Strategic Studies. His recent publications include an article for Survival magazine on the SolarWinds hack, a methodology for assessing state cyber power and an accompanying net assessment of state cyber capabilities and national power.

PH: Firstly, Marcus, many thanks for taking the time to speak to us about the fascinating world of state cyber operations and state capabilities. It’s rare to be able to talk to a subject matter expert such as yourself who has first-hand experience of this often-secretive world.

MW: Thank you for having me and I am very happy to try and shed some light on a subject I too find fascinating.

PH: So, let’s start with the current geopolitical landscape. In your opinion, has the escalating volume of cyber operations and grey zone activity reached ‘Cold War’ status, or perhaps a cyber cold war status? Particularly when you consider the ongoing cyberattacks against Government institutions (SolarWinds), Critical National Infrastructure (Industroyer, BlackEnergy) and influence operations (US presidential election, DC Leaks).

MW: In terms of widespread state-on-state espionage, the use of cyber space has taken it to a different level. In some ways it is now more extensive than during the original cold war. Spying in cyber space allows states to conduct covert operations in a standoff way, allowing them to get into targets quicker, get to places they couldn’t previously reach and pull back information at greater volume, all from behind a computer screen, with little chance of being physically caught. So, it feels like the original cold war spying mentality is still there, but the spying is now enacted faster, further and at higher volume than ever before.

Spying is one thing. Cold-war-like covert action is another. Here too, states have realised they can use cyber space to influence others in ways previously unimagined. At the moment, some states, in particular Russia, seem to be operating within what has become known as a ‘grey zone’, the space between peace and war, using their strategic cyber war-fighting capabilities to have an effect that is deliberately calibrated, they hope, below a threshold of war. If you think about it, the Russians have been quite thoroughly called out for their actions over the last few years. Like trying to interfere in elections. It’s not like they tried it, nobody noticed, and they got away with it. They tried it, they got caught and it has been called out. But, sure enough, the activity was judged not to have crossed a threshold requiring a more robust response than the sorts of things raised by President Biden with Putin recently.

The big risk here is that someone miscalculates. They think they are operating within the grey zone, but the outcome of the operation isn’t quite perceived that way, or the operation goes wrong, leading to escalation. Imagine a scenario where a state wants to signal its cyber capabilities by having a pop at someone’s Critical National Infrastructure (CNI) in a mild way, maybe someone’s water supply. What happens if it goes wrong, the malware malfunctions, you end up poisoning the water supply and you kill people? It was never the intention, you tried to operate in the grey zone, but lo and behold, it doesn’t turn out that way. I think this is the biggest risk with state-on-state cyber operations, and why people are comparing it to the cold war: states are trying to operate at a threshold below war, and there is a potentially dangerous scenario in which it goes wrong. I’m not surprised that President Biden said recently that if the US finds itself in a real shooting war, it is most likely to have been the result of a serious cyber incident.

PH: According to reports, at their recent face-to-face meeting in Geneva, US President Joe Biden told Russian President Vladimir Putin that Critical National Infrastructure should be “off-limits” to cyber operations. Should there be international cyber warfare agreements for CNI? Or perhaps specifically for CNI attacks that result in the loss of human life?

MW: It’s an interesting question and I would be surprised if Biden did suggest that all areas of CNI should be made off-limits for all cyber operations. I’ve previously argued that putting all CNI off-limits for offensive cyber operations would be unrealistic, as all states would consider certain aspects of CNI viable targets during conflict. As we know, you can’t go from a standing start on these sorts of targets. You have to do your reconnaissance and prepositioning, which generally needs to be done in peacetime. What’s more, I would be very surprised if the US attempted to put CNI off-limits for, specifically, cyber espionage. The US has world-leading capabilities in this area and when you know you have an advantage, you’re not going to sign that away in a hurry.

In terms of international law, if death, injury, or serious damage were caused by a state attack against your CNI, then that would constitute a use of force and you would be justified in using force back. Liberal democracies would argue that existing international laws are sufficient and it’s all about the correct application. This is something I’ve argued in the past, as have the UK Foreign Office and the US State Department. However, I’m starting to believe that we might need to think slightly differently. Recent examples of cyber operations have started to make some international laws look a little tired, and I wonder if it’s time to re-evaluate these existing laws, or consider new ones, to reflect the modern nature of operations.

When it comes to CNI, I don’t think it’s sensible to put all CNI off-limits in a legal sense, as I don’t think that’s realistically what states will do, and we would all agonise over differing definitions of CNI. I would be keener to come to an agreement on the things that should obviously be off the table, the things that nobody can argue are viable or sensible targets, like hospitals, emergency services, nuclear command and control, etc., rather than all CNI. For CNI in its broadest sense, I would argue that we must accept that it will always be a target and therefore concentrate on its defence and resilience.

PH: You mention that hospitals and emergency services should not be considered viable targets for cyber operations. If we look at WannaCry (a worming ransomware attack attributed to the North Korea-based Lazarus Group that infected 200,000 computers across 150 countries), this state-backed operation impacted up to 70,000 devices in the UK’s NHS, including MRI scanners and theatre equipment, seemingly without any repercussions. Was the lack of action due to a difficulty in accurate attribution, or is it the challenge of having sanctions or diplomatic measures that work against countries such as the DPRK?

MW: I don’t think attribution was difficult in that case. In fact, depending on your risk appetite, attribution is not as problematic as many believe. The biggest problem is being able to speak publicly about how you know, without jeopardising ongoing intelligence operations. This is a key point that adversary states need to understand: if a threshold is crossed, governments will be more inclined to show the world how they know who the perpetrators are, even at some risk to their intelligence capabilities. And of course, close co-operation between governments and the private sector has led to a burgeoning cyber security industry capable of confidently attributing state cyber-attacks itself, as happened with the attribution of SolarWinds to Russian intelligence agencies.

In the WannaCry instance, it was North Korea, and they were called out. In terms of any retaliatory action being taken, it is crucial to understand that the intention behind the North Korean operation wasn’t to bring down the NHS or hospitals. Rather, the North Koreans lost control of an operation and of their malware, which consequently hit a swathe of unintended victims. It was reckless, but there was not the serious intention to harm health services that might, of itself, warrant more robust and collective action.

What this does demonstrate though, is how easily things could escalate beyond control, especially when state-led operations are being conducted irresponsibly.

PH: With the potential dangers of grey zone activity going wrong due to states acting with an irresponsible lack of command and control around their cyber operations, is there a need for new international laws that attempt to prevent escalation?

MW: I think the thing that needs addressing in terms of international agreements is defining what behaving responsibly in cyber space really means for a state. I would argue there are several dimensions to this. One is about how cyber operations are run: are they surgical, with ring-fenced targeting, careful command and control, kill switches and the like; or are they reckless and indiscriminate, using global IT vulnerabilities willy-nilly and/or employing uncontrollable worms?

Another is about defining a small number of targets that really must be off-limits, like hospitals and emergency services – not this amorphous thing called ‘critical national infrastructure’, however well you try to define it, for the reasons I have elaborated.

And finally, it is about how a state deals with cyber criminality known to be emanating from its territory. This last one is a good place for the international community to start, I would argue, and was presumably at the heart of the Biden/Putin conversation. The degree to which any of this is enshrined in new international agreements, new law, or left to non-binding norms of behaviour and the like, is a big question to which, for the present, I don’t think we know the answer.   

PH: Focusing specifically on those targets that we believe should be considered off-limits, if a nation were to breach its obligations in this respect, do you believe a more significant response than sanctions and diplomacy is appropriate – disruptive or destructive cyber operations, for example, or perhaps even a kinetic response, as we witnessed with the Israeli missile strike targeting the building Hamas cyber operatives had been working from?

MW: It’s equivalent to what would happen if a state dropped a bomb on a hospital. Taking it out through cyber means should be no different and it requires a similar response. That’s at the extreme end of the spectrum and to a certain extent it’s easy to see how the international community would quickly come together at the UN and think about sanctions, or other responses, when such action is taken.

The more difficult area is the one that Biden has had to confront recently, when a state is caught all over your networks, such as with SolarWinds. Maybe they’ve got to your CNI, maybe their intention is espionage. But as we all know, a bit of malware put down for spying can be easily turned into something else. What is an appropriate response in that instance?

The US has used all sorts of measures in the past, such as diplomatic demarches synchronised with allies, economic sanctions and indicting the perpetrators in the courts, when they know who they are. To a certain extent that last one might seem ineffective, as the perpetrators are unlikely to be prosecuted in their own country’s jurisdiction, but it means they can’t travel abroad, or at least not to anywhere that has an extradition treaty with the US, so it’s quite a powerful tool, not to mention a great way of publicly shaming states and individuals. The US might also respond with its own cyber operations and, ultimately, if a threshold were crossed, a state might respond with a use of force. So, I think the international community has an array of actions it could take, and it would come together to act if someone was seriously irresponsible in cyber space.

In terms of when an armed response might be justified, it is interesting to consider NATO’s Article 5, whereby, if one member is the victim of an ‘attack’, it is deemed an ‘attack’ on all. The Estonians argued that the cyber-attack they experienced in 2007 could have been called an Article 5 attack, but in reality its consequences were not the equivalent of what NATO means by the word ‘attack’. Ultimately, NATO has realised that it would know when a cyber-attack might justify an Article 5 response, because it would have caused loss of life and damage equivalent to a real-world attack. In other words, it would feel like the most recent time Article 5 has been invoked, which was the attack on the Twin Towers. This is a key point: the responses to cyber operations must be judged according to the real-world effect of those operations. Someone hacking into the power generators of the Twin Towers would not be Article 5; somebody using the hack to destroy the Twin Towers would be Article 5.

This raises a crucial question: can I strike back just because I’ve found a nation state in a position where it could potentially cause loss of life and damage? In other words, my right to pre-emption. But in this situation, when not in conflict, there is a whole range of other things I would do to make my point to them, which could be about sanctions, diplomatic actions and so on. But as I alluded to above, the uncertainties around all this add to the risk of potential miscalculation.

Generally, of course, semantics really matter in this space, and we must be particularly careful when using the word ‘attack’. There’s one understanding in international law of what an armed attack is, and then there are people conflating that meaning with the specialist use of the term ‘cyber-attack’, by which they mean a technical attack, when someone has ‘attacked’ your code or data. Unfortunately, it is that conflation that can lead to all sorts of problems when judging appropriate responses.

PH: Stuxnet, allegedly a joint US/Israeli cyber operation designed to destroy centrifuges at Iran’s Natanz nuclear enrichment facility, is a good example of how state cyber operations can be used to deal with a threat before it potentially escalates, buying time for diplomacy to do its job. Is this approach something you would advocate?

MW: Yes, this is one of the things we’ve been working on with the International Committee of the Red Cross. There might be a tendency to view cyber operations as just some new type of indiscriminate weapon of warfare, with, for example, NGOs and humanitarian organisations potentially caught in the crossfire. Whereas another way of looking at cyber operations is that, if used responsibly, i.e. surgically and with proper command and control, in a time of conflict they can be used to disrupt an adversary’s infrastructure without destroying it physically and without loss of life. To a certain extent, Stuxnet is an example of this, where carefully targeted and controlled cyber operations were used as an alternative to more warlike and destructive solutions, thus allowing the space for international diplomacy and agreement.  

As another example, imagine the need to disrupt an air defence unit during a conflict. You could use cyber means to disrupt its ability to operate by interrupting its communications infrastructure, or you could use missiles to blow that communications infrastructure to pieces, with resulting collateral damage. In theory, with a cyber operation, you can remove that infrastructure just for the duration of your attack and then you switch it back on when it’s over. Damage to human life is minimised and, when it comes to the reconstruction phase after the conflict, you haven’t turned important infrastructure into a heap of twisted metal that needs rebuilding. Of course, commanders will need assurance that the cyber operation will work and an understanding of what the collateral damage from disrupting an IT network might be, or, put another way, they will need to understand cyber operations the way they currently understand missile strikes – we shouldn’t underestimate the difficulties in relying on cyber operations for a particular effect during a conflict. Nevertheless, effective cyber operations can be seen as a more humane alternative to physical attack, and they could even prevent conflicts from escalating.

PH: There is no clear equivalent of the Geneva Convention for cyber space, which means that cyber warriors are operating without the restrictions or protections that such international law provides. NATO only applies pre-cyber-era international law to cyber operations, both those conducted by and those directed against states. The Tallinn Manual was a step in this direction, but without endorsement from Russia, China, the DPRK or Iran, for example, is a legal framework for acceptable, responsible behaviour in cyber space worth pursuing?

MW: States such as Russia and China aren’t against new international law; they want it. In fact, the Russians tabled something at the UN designed to outlaw the military use of cyber operations, which is quite interesting: why would they want to do that? I suspect they know the US has an advantage over them in this space, and the easiest way to level the playing field is to take that advantage away under international law.

This is the problem with tabling a new convention: no state is going to sign up to something that is not in its national interest, and it’s interesting to see where states come from, especially given that liberal democracies tend to abide by international law and their authoritarian adversaries do not. Despite how it can seem, Russia and China see themselves at a disadvantage because cyber space isn’t their friend; it’s where liberal democratic ideas spread, where the ideas of Adams, Jefferson and Lincoln can get into China if the CCP isn’t incredibly careful. Which is why they like the idea of cyber sovereignty, where you control your own bit of cyber space. This is anathema to liberal democracies, who view the internet as a place where ideas are generated and spread, as a fulcrum for innovation, and who espouse the principle of ‘internet freedom’.

This clash of ideologies is at the heart of what people want international law to look like. The Chinese and the Russians didn’t think previous attempts, such as the Tallinn Manual, went far enough, and they want international law to say a state can and should control – we might say, surveil – its own bit of cyber space. The liberal democracies disagree, preferring a multistakeholder and freer approach to how the internet is governed, believing that existing international law can be successfully applied to cyber space, as indeed I have argued when it comes to the use of force, Article 5, etc.

That’s all well and good, until we start worrying about criminal and terrorist use of encryption, when even the liberal democracies talk about new laws to force the internet companies to make content available when required. That might sound a bit like cyber sovereignty rather than internet freedom. So, it’s an interesting area ripe for debate, and I can see several reasons why international law might need to be updated with cyber space in mind. But it’s not that Russia and China don’t want new international law; they do, and thus far they’ve been the main pushers for it. It may be that the liberal democracies ought to seize the initiative on this.

PH: In terms of responsibility, intelligence agencies around the world are developing sophisticated cyber weapons which include exploits for 0-day (previously unknown) vulnerabilities. Can this be considered responsible when it requires that those vulnerabilities remain 0-days and are therefore not disclosed to vendors, putting personal and corporate infrastructure at risk?

MW: This is a big question for intelligence agencies and one they have grappled with for a while. There’s a recent bit of Chinese law that says any Chinese individual who discovers a vulnerability, such as a 0-day, must disclose it to the state. They worry that there is a danger vulnerabilities will be sold off to the highest bidder and therefore compromise China’s own security.

You can look up the process used in the UK and the US for how equity decisions are made around vulnerabilities. The default position is that you disclose them, but in exceptional circumstances agencies can withhold them, provided they are certain there is no national security risk to their own infrastructure and their own companies in doing so, which obviously takes in those of allied states, and which is therefore fundamentally about the fabric of the internet. Contrary to popular belief, the vast majority are handed over – we’re talking thousands – and it really is the absolute exception for them not to be disclosed, a decision that takes very careful, painstaking judgement. I don’t think that’s what people think happens; they think agencies are stockpiling these vulnerabilities, ready to deploy them for espionage, influence or warfare purposes come the moment… but that’s not what’s happening.

‘Sophisticated’ is a term that gets bandied about when describing cyber-attacks, but widespread attacks are often unsophisticated in nature. You can breach hundreds of organisations and get into all sorts of places in an unsophisticated way. Which leads me back to the theme of acting responsibly. To me, sophistication means that an attack is very carefully targeted. You only want to breach your desired target if possible, and controls need to be put in place to prevent wider damage from happening once an operation is deployed.

Stuxnet is a perfect example of this sophisticated approach. It was very controlled, with kill switches and command and control measures, as well as being extremely targeted at the specific kit controlling the centrifuges. Meaning that if it did spread beyond the target, which it did, it wasn’t going to do further harm.

Contrast that with operations which used uncontrolled worms in combination with global IT vulnerabilities, like the Russian use of NotPetya, the North Korean use of WannaCry, and the recent Chinese hack of Microsoft Exchange, where the Chinese effectively left a set of 0-day vulnerabilities wide open for cyber criminals to abuse.

You’ll see the NSA being far more public now about when it reveals vulnerabilities. In the past it was all very quiet; now they are talking much more openly, in part to try and land the point about their default position. The really frightening thing is that, even when the vulnerabilities are revealed, all the world’s cyber criminals know they have a window of opportunity, potentially months, from patch release to people implementing it, which is why so many were the unintended victims of WannaCry – they hadn’t patched a known vulnerability.
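[Editor’s note] To make Marcus’s distinction between reckless and surgical operations a little more concrete, the sketch below illustrates, in deliberately simplified Python, the general ‘ring-fenced targeting’ and kill-switch pattern he describes: a payload that acts only against an explicitly allow-listed target and stands down if an operator-controlled kill-switch signal is present. Every name here is hypothetical and illustrative; this is not code from the interview or from any real operation.

```python
# Illustrative sketch only: ring-fenced targeting plus a kill switch.
# All hostnames and domains below are hypothetical placeholders.

import socket

# Hypothetical allow-list: the only environment the payload is meant to touch.
ALLOWED_TARGETS = {"example-target-host.internal"}

# Hypothetical kill-switch domain: if it resolves, the operator has stood the operation down.
KILL_SWITCH_DOMAIN = "killswitch.example.invalid"


def kill_switch_active() -> bool:
    """Return True if the operator's kill-switch signal is present (domain resolves)."""
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True
    except socket.gaierror:
        return False


def in_scope(hostname: str) -> bool:
    """Ring-fenced targeting: only the explicitly named target is in scope."""
    return hostname in ALLOWED_TARGETS


def run(hostname: str) -> None:
    # Do nothing at all if the kill switch has been pulled or the host is out of scope.
    if kill_switch_active() or not in_scope(hostname):
        return
    print(f"(illustrative) acting only against {hostname}")


if __name__ == "__main__":
    run(socket.gethostname())
```

The point of the sketch is the control structure, not the payload: a ‘sophisticated’ operation in Marcus’s sense refuses to act outside its ring-fence, whereas an uncontrolled worm has no such checks and will keep spreading to unintended victims.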

PH: It is apparent that supply chains are becoming more of a target in state-led operations, with the likes of MEDoc (NotPetya), SolarWinds and most recently Kaseya (REvil ransomware) hitting the headlines. Is the supply chain therefore fair game in a world of responsible cyber operations?

MW: Supply chain operations in cyber space are the equivalent of the real-world, in-person operations that have been going on for as long as we can remember. Just like in the real world, cyber supply chains can often provide a necessary route into a high-security target. So, if you were sitting in the space I used to occupy and you were presented with a legitimate target, one there was a national security reason for going after, as well as political and legal authorisation for doing so, and the only way in was through a supply chain operation, then that’s what you’re going to use.

As with CNI, states will view most supply chains as legitimate targets for reconnaissance and espionage activities. So, the onus must be on customers ensuring – demanding – better security and resilience in their supply chains, and not signing contracts until they can get that reassurance.

PH: Criminal hacking teams are also increasingly exploiting the financial opportunity presented by supply chain attacks. Certain states appear to overlook these criminal operations when they are focused on targets outside their own nation, or when the activity aligns with the interests of the state. What do you believe should be done to address this behaviour?

MW: Kaseya is an interesting example of this: a criminal group using a 0-day supply chain operation really is a step up. But they’re a criminal group; you can write as much international law as you like, they are not going to abide by it, so it won’t work. And ditto when it comes to going after them with cyber operations: they’re just going to shut down, change their name and then pop up somewhere else. It becomes a game of whack-a-mole.

So, what can we do about it? All states are victims of cyber criminality and have a vested interest in coming together to find a workable solution, yet some states are simply turning a blind eye to the activity happening within their own territories. I think there is a real opportunity, springing from the Colonial Pipeline incident, Kaseya and the ransomware situation, to start a dialogue: to formalise what it means to be told, indisputably, that there’s a cyber-criminal gang operating from your territory, and to agree what action can, and should, be taken.

At their recent meeting in Geneva, I can envisage Biden saying to Putin, ‘if you don’t sort these Russian gangs out, then we will’, in an attempt to get the Russian government to do something about it. The only trouble is, if Putin calls that bluff, then the US must act.

PH: Because of corporate influence in cyber space and corporate control of the underlying infrastructure of the web, should any cyber treaty actually include, or be led by, a non-government body? Is something like the Paris Call for Trust and Security in Cyberspace, which is endorsed by 703 companies and private sector entities, as well as 79 states, a more appropriate direction to pursue?

MW: It needs to be a multistakeholder approach. So much is about international law, conventions, and agreement, but if states are the only parties involved in the decisions, then there will be loopholes. We need to involve the big companies, the ones that own the infrastructure that underpins the internet and those that build the applications that run over it, they need to be part of the conversation on how we deal with these issues.

The SolarWinds attack, a state-level espionage operation that was caught and attributed by the private sector, shows the vital role private corporations can play in protecting cyber space. Some people in the States would question what they pay the NSA for – shouldn’t they have uncovered this? Yet nobody thinks it would be a good idea to have the NSA crawling all over the big US cloud providers. Having a private organisation like FireEye expose the operation just shows the strength of the cyber security ecosystem in the US. Of course, not every corporation is going to have the capabilities to expose a state-level operation, but everyone has a responsibility to protect themselves against attacks. People may argue over the figures, but if you get the cyber security basics right, you protect yourself against 90% of the attacks out there. It will never be 100%, but that doesn’t mean it’s not worth doing, and companies need to take their responsibilities towards cyber security seriously. When it comes to the other 10%, the sophisticated, state-level attacks, SolarWinds has put a real spotlight on the need for companies to plan for resilience and redundancy: you must assume something is going to get through and plan for when it does.

One of the other lessons from these recent examples is that security is also the responsibility of the ultimate customer for that supply chain. Do you know what the security is like within your own supply chain? I know it’s an easy thing to say and a difficult thing to do. But which of the many companies that bought a service from SolarWinds asked basic questions about its security practices before signing up to that service? Questions such as: have you had an external security audit within the last year, who did it and what did they find? If someone had asked that question, they would probably have got a very blank answer. It’s all too easy for the final customer to say ‘it wasn’t me, it was an issue in my supply chain’ when they haven’t asked their suppliers some obvious security questions. Insurers are also starting to be more sophisticated in the questions they ask before they insure someone, but customers also need to develop a list of questions and should require evidence before they sign up to such services.

PH: How does spyware fit into a responsible toolset? It is used, for example, by the police in certain countries with the ‘intent’ of law enforcement, but it is also misused to violate the human rights of dissidents, opposition figures and journalists. NSO Group’s controversial Pegasus spyware is one notable example.

MW: Well, there can of course be legitimate, legal, necessary and proportionate uses of spyware – for example, a state trying to bust an international terrorist cell or a paedophile ring. And, as you say, it can be misused. I think the issue here is one of proliferation – the proliferation of sophisticated state-developed capabilities in an irresponsible way, meaning that the capabilities end up in the hands of states or non-state actors with no qualms about misusing the kit to violate human rights. The onus must therefore be on responsible states to put in place checks and balances to prevent such proliferation, as perhaps the Israeli government should have done with Pegasus and the NSO Group. We should perhaps add this to our definition of a responsible state cyber actor.

 
