A New Form of Warfare? Implications of the Cyber Attack on Sony

‘I fear I’ve let you all down. Not my intention. I apologize,’ lamented George Clooney in an email released to the public as part of a massive cyber-attack on Sony. ‘I’ve just lost touch. Who knew?’

While the attack caused a stir in the entertainment industry, it also has significant implications for the cyber-security landscape. Groups such as Anonymous have long been hacking government agencies, media companies and other targets of their choosing, but the Sony attack is notable for having reportedly been carried out by a state. This raises questions about the capacity of states to launch attacks through cyberspace, as well as about retaliation, proportionality and the absence of rules of engagement.

Although the attack was ostensibly conducted by a group calling itself the Guardians of Peace, the FBI has declared that it believes with ‘high confidence’ that North Korea is ultimately behind the November 2014 breach. Although this remains contested, North Korea has indeed devoted substantial resources to enhancing its capacity to operate in cyberspace. According to the testimony of defectors, the North Korean military sent a select group of budding hackers to Beijing in the mid-1990s to be schooled in the art of cyber-war. Soon after they returned, a specialist military intelligence unit dedicated to cyber-operations, Bureau 121, was established; it has since swelled to almost 6,000 people.

Of course, it is not just North Korea that has been pouring resources into improving its cyber capacity. As the United States’ signals intelligence agency, the much-maligned National Security Agency (NSA) is responsible for improving the nation’s defensive and offensive cyber-capabilities. Much of this work is undertaken by the Office of Tailored Access Operations (TAO), a highly secretive unit described by a historian as ‘akin to the wunderkind of the US intelligence community.’

[Slide: TAO outline]

Analysts, developers and operators within TAO work together in ‘Mission Aligned Cells’ (MACs), directing their combined expertise towards specific projects. These include traditional intelligence concerns such as ‘Counterterrorism’ and ‘Cyber Counterintelligence’ as well as regional cells focusing on areas of interest such as Iran, Russia, and, of course, China/North Korea. Indeed, the most convincing and as yet unreleased evidence linking North Korea to the Sony attacks almost certainly emerged from an NSA operation targeting the country back in 2010.

[Slide: Mission Aligned Cells]

The war on terabytes

In the aftermath of the Sony attack, there has been some discussion as to how to classify it. While some, most notably Republican Senator and former presidential candidate John McCain, have decried the attack as ‘a new form of warfare’, most agree that it is a ‘despicable, criminal act’ or, in President Obama’s phrasing, simply ‘cyber-vandalism’. The lines are becoming increasingly blurred, however. Mr Obama noted that if ‘a dictator in another country… disrupt[s]… a company’s distribution chain or its products’, the US will ‘respond proportionately.’

The principle of proportionality is an accepted concept in the laws of war, defined in the Geneva Conventions and elsewhere. But there are no treaties, conventions, precedents or established rules of engagement which govern attacks in cyberspace. This legal vacuum will prove problematic when cyber-weapons not only cause substantial disruption, destruction, and damage but also have lethal effects.

Such a scenario is not difficult to imagine. In our modern, hyper-connected world, where even our toasters are connected to the ‘internet of things’, critical national infrastructure such as power grids, transport networks and financial systems is spectacularly vulnerable to cyber-attack. A successful attack on the City of London or Wall Street could have devastating effects on the nation’s economy.

Perhaps the best-known and most destructive cyber-attack to date was Stuxnet, a highly sophisticated cyber-weapon which attacked Iran’s Natanz nuclear facility and is widely believed to have been designed and unleashed by the US and Israeli governments. As well as destroying up to a fifth of the centrifuges – over 1,000 – in the Natanz facility, the worm also unintentionally infected over 60,000 computer systems in the US, UK, Australia, Germany and elsewhere.

If the Sony attack was only cyber-vandalism, Stuxnet is a step closer to an act of war. While the US response to North Korea comprised further economic sanctions targeting the country’s remaining links to the international financial system, Washington is prepared to use force in response to cyber-attacks. The White House International Strategy for Cyberspace notes that ‘the United States will respond to hostile acts in cyberspace as we would to any other threat to our country’ through ‘all necessary means’, whether ‘diplomatic, informational, military, [or] economic’.

No cyber-weapon as yet has caused lethal damage, and force has never been used in response. But this is unlikely to remain the case forever. The Sony attack, if nothing else, serves as a timely reminder that there is a real threat that information or even infrastructure may be compromised as part of a cyber-attack. It would be prudent to establish international conventions governing the rules of engagement in cyberspace before such a threat is realised.

For further discussion around this issue, listen to our podcast.

Post by Kit Weaver, Research Assistant, International Centre for Security Analysis (ICSA). Originally posted on ICSA’s blog – 22 January 2015.

The Sony Pictures hacking: lessons for policymakers and security specialists

Dr Tim Stevens is a Teaching Fellow in the Department of War Studies at King’s College London

The story of how the US government identified North Korea as the source of massive data theft from Sony Pictures at the end of 2014 continues to unfold. As it does so, it provides an opportunity to consider the interplay of government intelligence practices and open source intelligence (OSINT), neither of which emerges with great honour from this particular episode. Importantly, it highlights the need for policymakers facing similar situations to capitalise on the knowledge of individual experts and communities for the public good.

The FBI was quick to assert North Korea’s guilt, but its statements were long on certainty and short on detail. Attribution of cyber security breaches to specific sources is notoriously difficult, and researchers were understandably sceptical of the FBI’s claims. Subsequent FBI statements that attempted to provide more technical detail did little to assuage researchers’ concerns, which were twofold. First, how was it possible to establish attribution so quickly? Second, if the FBI’s attribution was incorrect, it would set a dangerous precedent: the first public identification of a state as the source of a major cyber-attack would eventually be shown to have been the work of a presently unknown third party, and the US to have been duped.

Experienced OSINT researchers quickly pulled apart the FBI’s case. Linguistic analysis showed the hackers’ native language was probably Russian, not Korean. The allegedly North Korean internet addresses identified by the FBI could have been ‘spoofed’ and were not proof of the attackers’ geographical location. Internet security company Norse claimed the attacks were an inside job by disgruntled Sony employees rather than any shadowy North Korean military unit.

Security researchers subsequently demanded more evidence from policymakers, though they realised an intelligence agency would not reveal its ‘sources and methods’, for valid technical and political reasons. One OSINT researcher even started an online petition, which has attracted just 127 signatures, calling on the White House to release its evidence for independent review. None was forthcoming, and the FBI continued to press its case, as did the president, who imposed sanctions on Pyongyang.

One leading cyber security researcher, Thomas Rid of King’s College London, responded to the FBI’s claims in mid-December by tweeting that its attribution was ‘as good as it gets’. Whether this was damning with faint praise or a genuine endorsement of the FBI’s analysis was unclear, but Rid looks to have been right. In January it became clear that the FBI case had a longer analytical pedigree than security specialists or the public had been led to believe.

The US government’s case against North Korea was strengthened further when The New York Times reported that the National Security Agency had been active in North Korean networks since at least 2010. The NSA built up a picture of North Korean capabilities and intentions, enabling it to attribute the Sony hack to individuals and units it knew plenty about.

As this episode and the Snowden revelations have shown, US intelligence agencies are no strangers to infiltrating foreign computer networks, but there is an irony here. American accusations of foreign attacks on American assets have seemingly only been corroborated by information concerning American attacks on foreign assets. This is clearly a tricky question of policy for the US government, even if it does signal the reach of American capabilities in this field. More worryingly for the US, few people, especially cyber security experts, believed either its initial claims or its subsequent explanations. The fiercest criticism of the FBI came from the OSINT community, which perhaps says a lot about its integrity and its desire not to create precedents deleterious to future cyber security.

As a case study, the Sony hacking demonstrates how difficult technical attribution of cyber-attacks can be and how incomplete evidence can lead to incorrect conclusions or, at least, multiple interpretations. There are no easy resolutions to this situation. OSINT researchers operate in the absence of classified information and in environments of incomplete knowledge. Government agencies face institutional straitjackets that limit their capacity to make the conceptual leaps necessary to entertain alternative explanations. It is the task of public policy to capitalise on the strengths of both communities for the public good. We can only hope that recent events will encourage governmental efforts to do just that.

Follow Dr Stevens on Twitter: @tcstvns

A call to arms: open source intelligence and evidence-based policymaking

Post by Mick Endsor, Research Assistant and Dr Bill Peace, Visiting Senior Research Fellow at the International Centre for Security Analysis

Policymakers have access to a wealth of open source information that has yet to be incorporated into the policymaking process. As Eliot Higgins argued in a previous post on this blog, social media provides a crucial yet largely untapped source of evidence which can underpin effective policymaking in relation to conflict zones. We wholeheartedly agree with this – indeed, it can be taken much further. Policymaking based on information acquired through open source intelligence techniques can provide a template for decision-making grounded in rigorous evidence and a verifiable methodology. In the business world, for example, OSINT is now common analytical practice. But it has yet to gain anything like the same traction in the policy sphere, where there is a genuine lack of evidence-based policymaking built on open source intelligence.

From our perspective, this is a worrying state of affairs. There is a powerful case for incorporating OSINT approaches into evidence-based policymaking. In the first place, evidence produced by OSINT methods can be both robust and rigorous, not least because it can be underpinned by extensive datasets. And in the second, it has the potential to be both transparent and verifiable; all open source evidence is, by definition, based on data that is publicly (and often freely) available.

A powerful case indeed, then – so why is open source evidence rarely used to inform policymaking? At the heart of the problem is the fact that OSINT approaches are still relatively ‘young’ and, all too often in our experience, lack the rigour and reliability needed to underpin effective policymaking.

For us, then, we need a call to arms – or, more accurately, three calls to arms. First, those who specialise in OSINT need to develop a much stronger, more reliable set of approaches for collecting open source data and, perhaps more importantly, for analysing that data. All too often, analyses rest on a set of assumptions that remain untested. One of the most obvious and frequent of these assumptions is that Twitter and other online populations are representative of wider public opinion, and that software can accurately assess the sentiment of those populations. We should challenge such assumptions critically, as Nick Halstead, CTO of the sentiment analysis provider DataSift, did when he bluntly stated that any company claiming better than 70% accuracy in tracking sentiment is ‘lying’. It is also imperative that we address the real issue of the availability and reliability of information, particularly as organisations are often reluctant to share that information with researchers and academics. This is a policy issue in its own right that could be addressed by developing public-private-academic partnerships, and by broadening the evidence base for policymaking through open data initiatives.
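To illustrate why such accuracy claims deserve scrutiny, here is a minimal Python sketch of lexicon-based sentiment scoring, the simplest form of the technique. The word lists and example sentences are invented, and the code does not represent DataSift’s or any other vendor’s actual system; it simply shows how negation and sarcasm defeat naive word counting.

```python
# A deliberately naive, lexicon-based sentiment scorer. The word lists and
# examples are invented for illustration; this is not any vendor's method.

POSITIVE = {"good", "great", "love", "excellent", "support"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "oppose"}

def naive_sentiment(text: str) -> str:
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Straightforward statements are scored plausibly...
print(naive_sentiment("I love this policy"))          # -> positive
# ...but negation and sarcasm defeat simple word counting.
print(naive_sentiment("I do not love this policy"))   # -> positive (wrong)
print(naive_sentiment("Great, another power cut."))   # -> positive (sarcasm missed)
```

Production systems are of course more sophisticated than this, but the underlying difficulty of reading intent from short, informal text is one reason Halstead’s 70% figure rings true.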

In the second place, those who champion open source intelligence approaches must communicate the advantages of those approaches. Compelling arguments can be made, but rarely are, and as it stands policymakers lack a real understanding of the potential for OSINT to inform policymaking. To overcome this challenge we need to be far more proactive, not only in establishing more effective partnerships and information-sharing practices, but also in engaging with policymakers to highlight the policy-relevant benefits of open source research and analysis. OSINT has real potential to add significant value to policy debates: it has the power to identify knowledge gaps, analyse the efficacy of existing policies and, to stress our key point again, provide new, rigorous evidence bases to support the development of effective policy.

The third point is that policymakers need to be receptive to the enormous advantages that OSINT as an evidence base for policymaking can provide. The big data phenomenon has filtered through to OSINT and opened up new, and as yet underused, avenues of research. As open source approaches become more rigorous, and researchers become better at communicating their applicability to the major policy issues of the day, policymakers should capitalise on this opportunity to improve – significantly – the quality of policymaking.

Social media and conflict zones: the new evidence base for policymaking

Post by Eliot Higgins, citizen journalist and founder of the website Bellingcat

In recent years, content shared via social media from conflict zones has allowed us to gain a far deeper understanding of the on-the-ground realities of specific conflicts than was previously possible. This presents a real opportunity for providing robust evidence which can underpin foreign and security policymaking about emerging, or rapidly escalating, conflict zones. Despite this opportunity, however, policymakers have generally been slow to adapt to the volume of information disseminated on social media by various armed groups in Syria, and slower still to use it as an evidence base for policy generation.

In the case of the Syrian conflict, for example, far more information in the public domain comes not from journalists on the ground (who, it goes without saying, face extreme danger in attempting to report from many parts of Syria) but from social media and user-generated content online.

In 2012, the video streaming site Bambuser was adopted by groups opposing the Syrian government to livestream videos of protests and violence from across Syria. The situation changed as violence escalated and internet access in opposition-controlled areas became increasingly limited. Today, this means that the vast majority of social media accounts being used by the Syrian opposition are associated with specific locations, media centres, or armed groups. This provides researchers with a real opportunity to understand the actors and events taking place on the ground.

This approach has its advantages: because comparatively few people in opposition areas are using social media, and they tend to be associated with specific groups, it is possible to collect and organise the majority of the social media accounts in use there. These accounts can then be monitored for activity, capturing most of the information coming out of opposition-controlled areas via social media. This can provide a detailed and dynamic picture of the conflict, as well as a detailed understanding of the various armed groups.
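As a rough illustration of what collecting and organising such a watchlist might look like, the Python sketch below tags each account with its platform, affiliation and usual location, then groups the list geographically. The handles, groups and locations are invented placeholders; this is not Bellingcat’s actual tooling or data.

```python
# A hypothetical watchlist of opposition-linked accounts. All handles,
# affiliations and locations below are invented placeholders.
from dataclasses import dataclass

@dataclass
class MonitoredAccount:
    handle: str    # social media handle
    platform: str  # e.g. YouTube, Twitter, Facebook
    group: str     # armed group or media centre the account is tied to
    location: str  # town or area it usually reports from

ACCOUNTS = [
    MonitoredAccount("example_media_office", "YouTube", "local media centre", "Aleppo"),
    MonitoredAccount("example_brigade_feed", "Twitter", "armed group A", "Idlib"),
    MonitoredAccount("example_activist", "Facebook", "activist network B", "Aleppo"),
]

def by_location(accounts):
    """Group the watchlist geographically so that new posts can be mapped
    onto a picture of who is reporting from where."""
    grouped = {}
    for account in accounts:
        grouped.setdefault(account.location, []).append(account)
    return grouped

for location, accounts in by_location(ACCOUNTS).items():
    print(location, [a.handle for a in accounts])
```

Even a simple structure like this, maintained over time, lets a researcher notice when a location goes quiet or a new account appears alongside a known group.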

There are challenges in this approach, however. The most obvious, in the Syrian context, is that researchers are limited to what those groups post online – and perhaps more importantly, that this often represents a narrow perspective on the conflict in opposition areas. A second problem is neatly illustrated by the Bellingcat research on the current conflict in Ukraine. Here, the opposite is true: because internet access is not limited, anyone can post anything they like online and they can post to a wider variety of social media sites. Granted, this vast amount of information posted from a huge variety of sources provides a richer picture, but it is also one that is harder to access through the ‘noise’ of other material.

Open source intelligence analysis has the potential not only to inform us about various actors and events on the ground, but also to allow us to piece together disparate pieces of information into a wider, more investigative, piece of research. In Bellingcat’s MH17 investigation, the research team compiled information from dozens of different sources, discovered by searching through thousands of social media accounts from Ukraine and Russia, on a wide variety of social media sites. Using this information, it was possible to track the movements of a Buk missile launcher on July 17th through separatist-controlled territory to the likely launch site of the missile that downed MH17. The team was also able to find videos from multiple sources of exactly the same missile launcher travelling as part of a convoy through Russia towards the Ukrainian border a few weeks before MH17 was downed. The evidence was sufficiently robust to allow us to conclude that the Buk seen in Ukraine on July 17th originated from the Russian military.
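To make the cross-referencing pattern concrete, the Python sketch below orders independently geolocated sightings into a rough route. The timestamps, coordinates and sources are invented for illustration and are not the actual MH17 evidence; the point is the method of assembling fragments into a timeline.

```python
# Assembling scattered, independently sourced sightings into a timeline.
# All timestamps, coordinates and source labels are invented placeholders.
from dataclasses import dataclass

@dataclass
class Sighting:
    timestamp: str  # best estimate of when the photo or video was taken
    lat: float      # geolocated latitude of the camera position
    lon: float      # geolocated longitude
    source: str     # platform or account the material came from

sightings = [
    Sighting("2014-07-17T11:40", 48.01, 37.80, "photo, social media account A"),
    Sighting("2014-07-17T10:45", 48.00, 37.76, "dashcam video, account B"),
    Sighting("2014-07-17T13:10", 48.05, 38.10, "video, account C"),
]

# Sorting by time turns independent fragments into a candidate route that
# can then be checked against maps, satellite imagery and other sources.
for s in sorted(sightings, key=lambda s: s.timestamp):
    print(s.timestamp, (s.lat, s.lon), "-", s.source)
```

In practice the hard work lies in the verification behind each entry, establishing when and where a piece of footage was really taken, rather than in the compilation itself.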

Our research on the Buk missile launcher demonstrates not only that there is a wealth of largely untapped information available online, especially on social media, but also that a relatively small team of analysts is able to derive a rich picture of a conflict zone. Clearly, research of this kind must be underpinned by an understanding of the way in which content is being produced, who is sharing it and, crucially, how to verify it – and these are methodological challenges which need to be addressed systematically. Nonetheless, the overarching point is that there is a real opportunity for open source intelligence analysis to provide the kind of evidence base that can underpin effective and successful foreign and security policymaking. It is an opportunity that policymakers should seize.