Why Is Communication Important?
We saw in the previous chapters that understanding the threats, vulnerabilities, and risks associated with cybersecurity is complicated. Part of this complication is that while for non-cyber risks most companies have data about previous cases where things went wrong (in other words, there is data about threats which materialized into harmful events in the past), there is no comparable data for cybersecurity risks. For example, we know quite well the probability of being injured in a car crash or the probability of being robbed in different parts of a city because we have historical information about the number of car crashes and robberies. This data is collected from verifiable sources and recorded officially by reputable public or private entities. Furthermore, anyone who wants to know these numbers can obtain them in a few seconds from governmental databases or the databases of international organizations (such as the OECD).
Unfortunately, the nature of cybersecurity events is such that collecting publicly available and verifiable datasets on cyber risks is a highly complex task. Information about possible threats and vulnerabilities is isolated in different private and public sources and this “siloed” information is not accumulated, not cross-checked, and not verified. Even if an adverse event is confirmed and reported by a particular organization, there is often no way of knowing how harmful it was and it might not be possible to exactly identify the victims. Again, the uncertainty with regard to the victims comes from the fact that (i) harm from an adverse cyber event is not always certain; (ii) victims need to spot the harm to be able to report it; and (iii) even when victims can identify the harm, they often do not have an incentive to report it.
In other words, the probability and level of harm from an adverse cyber event are not easy to measure. This is why quantitative risk measurement tools as we know them, which serve us well in physical spaces, usually cannot be applied to cybersecurity issues (at least not directly, without being significantly modified). This mostly occurs because information-sharing mechanisms fail at several levels simultaneously: at the business-to-regulator, business-to-consumer, within-business, and business-to-business levels.
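To make this concrete, consider the classic annualized loss expectancy (ALE) calculation used in traditional quantitative risk assessment. The sketch below is illustrative only: the asset value, exposure factor, and incident frequency are hypothetical, and it is precisely the frequency term that the missing historical data for cyber events rarely supports.

```python
# Classic quantitative risk formula: ALE = SLE * ARO,
# where SLE (single loss expectancy) = asset value * exposure factor
# and ARO (annualized rate of occurrence) comes from historical frequency data.
# All input values below are hypothetical.

asset_value = 2_000_000          # e.g., value of a customer database (hypothetical)
exposure_factor = 0.3            # fraction of value lost per incident (hypothetical)
annual_rate_of_occurrence = 0.5  # expected incidents per year -- the term that
                                 # requires exactly the historical data cyber risks lack

single_loss_expectancy = asset_value * exposure_factor
annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence

print(f"SLE: {single_loss_expectancy:,.0f}")
print(f"ALE: {annualized_loss_expectancy:,.0f}")
```

For car crashes or burglaries, the occurrence rate can be read off public statistics; for cyber incidents it usually has to be guessed, which is why formulas of this kind break down in practice.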
Whom Do You Call If You Are Compromised?
Imagine that you log in to your computer and suddenly spot something weird. For example, you notice a batch of emails to unknown addressees that were sent from your email address. Does it mean that you are under attack? And if so, what are you going to do about it? Whom do you call?
We have conducted a series of questionnaires with representative samples of the US, UK, and German populations and discovered a very alarming trend. It turns out that the majority of people do not even realize they have experienced an attack. Specifically, using a sample of 1234 people from the USA (450 individuals), the UK (450 individuals), and Germany (334 individuals), we first asked a very simple question: “Have you been a victim of a cyberattack in the last 12 months?” If a respondent replied “Yes”, we asked them to briefly describe the event. If they answered “No”, we asked a series of questions to test whether they could have been subjected to an adversarial action without their knowledge. In our survey, 28% of US, 27% of British, and 29% of German respondents said they had been a victim of a cyberattack. So, over 70% of individuals from each country reported, in response to our direct question, that they had not experienced a cyberattack. Yet, in the subsequent questionnaire, it became obvious that the majority of those who replied “No” to our first question (over half in each country) actually were victims of some type of attack without realizing it. Furthermore, over 55% of people in all countries did not know whom to call and where to report cybersecurity issues; and about 30% in each country said they would first call their Internet provider, irrespective of the issue.
Unfortunately, the situation with businesses is not much better. We have recently been in contact with a company where the CEO noticed that several things had gone wrong with his laptop login. He did not pay much attention to this and simply recounted the problem as an anecdote to a friend, who happened to be a police officer. Luckily, this friend advised the CEO to call the cyber police unit as soon as possible, and they helped the company to uncover a major cybersecurity threat and prevent a cyberattack. Though this story ended well, it is very characteristic of the current state of affairs in the private sector. Even when an attack, or the potential start of an attack, is spotted, businesses often do not know how to react and whom to call.
Why Do Information Sharing and Communication Fail?
First of all, while information sharing in the business environment is only possible among trusted parties, adversaries operate a community system in which anonymity allows them to openly transfer and obtain information. Businesses often share information through proprietary platform channels (accessible to trusted parties only), whereas adversaries can share information through the open sources of the Dark Web. Businesses have to operate openly and, therefore, can always be identified or traced when they share information, whereas adversaries hide behind aliases.
While businesses often face attacks individually and can only count on their own expertise and resources when dealing with a threat, adversaries work collaboratively. This stems from the fact that adversaries have a common goal while going after the digital valuables of certain actors, whereas businesses have competing goals, or at least perceive their goals to be competing, when operating in cyber spaces.
Businesses also often operate proprietary technological systems, while cybercriminals openly share technological know-how and intellectual property. Even when businesses share technological tools or human-centered methodologies, they do so within a circle of partners, whereas adversaries operate in a multisided market environment where they share information and expertise, and where any actor, no matter how small or insignificant, can get in direct contact with any other actor (even when that actor is large and powerful). In the business world, innovations are subject to ownership and patenting, while adversaries effectively share innovations. As we saw earlier in this book, cybersecurity expertise in businesses is often outsourced, whereas adversaries possess unique (“elite”) skillsets which allow them to directly engage in attacks at different levels.
Cybersecurity operations within businesses are either outsourced or centralized, depending on the level of technological and strategic maturity of the business, while adversaries often operate decentralized systems (centralized operations are also possible in the case of organized criminal groups). Finally, as we have discussed earlier, while the average level of technical and attack-recognition training in businesses is very limited, most adversaries have an advanced understanding of the field.
Academic research shows that the main reason fueling deficiencies in information sharing and communication is a common view among the business community that the barriers to information sharing outweigh its benefits [1–3]. Usually, eight groups of barriers are identified [3]: (1) legal barriers are associated with potential disclosure of private information; (2) technological barriers reflect a lack of synergies and comparability between the sharing systems of different businesses; (3) informational barriers include the availability of excessive, irrelevant, or even misleading information; (4) collaborative barriers refer to a lack of trust between businesses; (5) managerial barriers comprise risk aversion due to concerns about being subjected to uncontrolled risk, disagreement about trusted channels through which information should be accumulated and shared, etc.; (6) organizational barriers reflect a lack of operational capability and, sometimes, a lack of expertise to process cybersecurity information; (7) performance barriers are associated with the potential reputational costs and loss of profit should undesirable information surface in the public domain; finally, (8) cost barriers reflect the investment needed to increase information-processing capabilities and to create the associated systems.
At the business-to-consumer level, communication and information sharing fails primarily due to businesses' inability to effectively deliver information to the targeted audiences. In circumstances where the overwhelming majority of attacks start with a phishing email, it is very important to equip customers with useful information that allows them to spot cybersecurity threats. Many businesses devise information campaigns to warn their customers of potential risks. However, like many social marketing campaigns, these are typically “one-size-fits-all” tools. Yet, we know that people differ in their propensity to detect cybersecurity risks and to engage in risky activities in cyberspace.
Within businesses, we often observe either (i) no investment in cybersecurity training or (ii) the supply of too much “one-size-fits-all” information to staff about cybersecurity issues. The former is equivalent to buying the best defense systems to fight a war but not training the population to use those systems. The latter is equivalent to providing the same information about the flu to a highly anxious, nervous person and to a laid-back person. Clearly, both of these strategies are suboptimal.
At the business-to-business and business-to-regulator levels, the information-sharing channels are also broken, although recently the trend towards establishing better systems of communication has started to pick up in many industries. For example, the MISP platform (https://www.misp-project.org) positions itself as “a threat intelligence platform for gathering, sharing, storing and correlating Indicators of Compromise of targeted attacks, threat intelligence, financial fraud information, vulnerability information or even counter-terrorism information,” and seems to be a step in the right direction. There are also interesting moves in the same direction within particular industries, where businesses understand that if a particular threat hits their competitor today, it might hit them tomorrow as well. Yet, again, very often information sharing within industries happens among a circle of partner organizations, fueled by trusted personal relationships between CEOs, CIOs, and cybersecurity architects, rather than within an open and multisided platform.
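To illustrate what sharing through such a platform can look like in practice, here is a minimal sketch using the open-source PyMISP client to publish a phishing-related indicator of compromise to a MISP instance. The server URL, API key, and indicator values are placeholders, and the exact fields an organization would share depend on its own policies.

```python
# Minimal sketch: publishing an indicator of compromise to a MISP instance
# via the open-source PyMISP client. URL, API key, and indicators are placeholders.
from pymisp import PyMISP, MISPEvent

misp = PyMISP("https://misp.example.org", "YOUR_API_KEY", ssl=True)

event = MISPEvent()
event.info = "Phishing campaign observed against finance staff"
event.add_attribute("email-src", "invoice-update@attacker.example")  # sender address
event.add_attribute("url", "https://attacker.example/fake-login")    # credential-harvesting page

created = misp.add_event(event, pythonify=True)
print(f"Shared event id: {created.id}")
```

Other organizations connected to the same instance (or to synchronized instances) can then correlate these attributes against their own telemetry, which is exactly the kind of cross-checking that siloed, bilateral sharing makes difficult.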
How Can We Aid Information Sharing?
There are many ways in which information sharing could be improved, especially if we consider the fact that it is highly dependent on establishing trustworthy relationships between various actors within and outside organizations. For example, there is a significant amount of work in cybersecurity which offers framework solutions to these issues [4]; these framework solutions often have information technology, operations management, or strategy concepts at their core and overlook behavioral aspects of information sharing. Yet, understanding organizational and human behavior in cyberspace may offer significant benefits and suggest ways in which existing barriers to information sharing can be alleviated or even eliminated [5]. Despite the multiplicity of tools which could potentially be applied here, we would like to highlight one method as a good contender for solving the issue of information sharing: one that ensures (i) that historical data for various types of risks can be accumulated, and (ii) that traditional quantitative risk management methodology can be applied to cybersecurity issues.
Specifically, behavioral science can contribute to the improvement of information sharing by: understanding information flows through the prism of behavioral theories; modeling the risk associated with information sharing; and developing algorithmic solutions for information sharing rooted in behavioral science models. It can also provide behavioral segmentation tools for solving information-sharing problems. Behavioral segmentation, in particular, is a simple approach used to group individuals or organizations into “types” according to a menu of common behavioral characteristics, and then use these “types” to (i) predict, (ii) understand, and (iii) influence a wide range of behavioral outcomes. Behavioral segmentation can contribute to optimizing information-sharing behavior at the business-to-consumer, within-business, business-to-business, and business-to-regulator layers in the following ways.
Behavioral segmentation for business-to-consumer information sharing:
Behavioral segments of the population according to CyberDoSpeRT in the USA and the UK as elicited by Kharlamov et al. (2018a) [6]
| Behavioral type | USA (%) | UK (%) |
|---|---|---|
| Relaxed | 29 | 16 |
| Anxious | 34 | 45 |
| Opportunistic | 12 | 17 |
| Ignorant | 25 | 23 |
CyberDoSpeRT suggests that consumers are likely to be of different types when it comes to cybersecurity risk attitudes. Since consumers of different types comprehend and process information about cybersecurity risks in different ways, we need to design different campaigns for each group instead of using the same campaign for all groups.
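As a purely illustrative sketch of how such segmentation could drive targeted campaigns, the snippet below assigns individuals to the four CyberDoSpeRT-style types from hypothetical risk-perception and risk-taking scores and maps each type to a different messaging strategy. The thresholds, score names, and campaign labels are our assumptions for illustration, not the published CyberDoSpeRT scoring procedure.

```python
# Illustrative only: assigns a CyberDoSpeRT-style type from two hypothetical
# survey scores and picks a campaign variant per type. The scoring rule and
# thresholds are assumptions, not the published instrument.
from dataclasses import dataclass

@dataclass
class Respondent:
    risk_perception: float  # 0-10, how risky cyber activities are perceived to be
    risk_taking: float      # 0-10, willingness to engage in risky cyber behavior

def segment(r: Respondent, threshold: float = 5.0) -> str:
    high_perception = r.risk_perception >= threshold
    high_taking = r.risk_taking >= threshold
    if high_perception and not high_taking:
        return "Anxious"        # sees the risks, avoids them
    if high_perception and high_taking:
        return "Opportunistic"  # sees the risks, takes them anyway
    if not high_perception and not high_taking:
        return "Relaxed"        # neither perceives nor takes risks
    return "Ignorant"           # takes risks without perceiving them

CAMPAIGNS = {
    "Anxious": "reassuring, step-by-step guidance",
    "Opportunistic": "consequence-focused warnings",
    "Relaxed": "basic awareness material",
    "Ignorant": "concrete examples of common attacks",
}

person = Respondent(risk_perception=3.0, risk_taking=8.0)
t = segment(person)
print(t, "->", CAMPAIGNS[t])  # Ignorant -> concrete examples of common attacks
```

The point is not the specific rule but the workflow: once consumers are typed, each segment can receive the form of communication it is most likely to act on.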
Our research also shows that heavy cybersecurity regulation, as well as excessive information, does not solve the problem, as both measures make people more risk-taking in cyberspace. Specifically, the Chinese population has predominantly Opportunistic types due to the high level of cybersecurity regulation in the country [7]. Equally, large UK companies who create an overflow of cybersecurity information for their customers create a false sense of security and overconfidence, increasing the percentage of Opportunistic types in their customer base [7]. This tells us that providing too much information about cybersecurity risks is just as damaging as providing no information at all. Our ongoing work in this area is currently looking at how these types are associated with other observable behavior (e.g., social media behavior, spending behavior, etc.).
Behavioral segmentation for within business information sharing:
Combining CyberDoSpeRT behavioral type information (e.g., Opportunistic, Ignorant, Relaxed, Anxious) with organizational role information (CEO, board member, manager, employee, etc.) and designing targeted information campaigns and information exchange mechanisms can also significantly decrease human-centered risks within organizations. In other words, behavioral segmentation for information sharing within organizations allows us to (i) measure the cyber risk attitudes of people within different layers of the organization and model their types; (ii) based on the obtained typology, algorithmically model human-centered vulnerabilities and risks, as sketched below; and (iii) propose solutions based on (i) and (ii).
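A minimal sketch of step (ii) might look as follows: each staff member's behavioral type and organizational role are combined into a simple human-centered risk score, which can then be aggregated across the organization. The weights are entirely hypothetical and would in practice be estimated from incident and survey data.

```python
# Illustrative only: combine behavioral type and organizational role into a
# simple human-centered risk score. All weights are hypothetical.

TYPE_WEIGHT = {"Ignorant": 0.9, "Opportunistic": 0.7, "Relaxed": 0.5, "Anxious": 0.3}
ROLE_WEIGHT = {"CEO": 1.0, "board member": 0.9, "manager": 0.6, "employee": 0.4}

def human_risk_score(behavioral_type: str, role: str) -> float:
    """Higher scores indicate a riskier combination of attitude and access/impact."""
    return TYPE_WEIGHT[behavioral_type] * ROLE_WEIGHT[role]

staff = [
    ("Ignorant", "CEO"),
    ("Anxious", "employee"),
    ("Opportunistic", "manager"),
]

# Rank staff by modeled human-centered risk, highest first.
for score, t, r in sorted(((human_risk_score(t, r), t, r) for t, r in staff), reverse=True):
    print(f"{r:12s} {t:13s} risk={score:.2f}")
```

Even this toy model makes the targeting logic explicit: an Ignorant type in a high-impact role surfaces at the top of the list and receives different training and communication than an Anxious employee.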
Behavioral segmentation for business-to-business and business-to-regulator information sharing:
Behavioral segmentation is also valuable for business-to-business and business-to-regulator information sharing. Even though modeling informational exchanges between businesses and, more generally, between organizations is more complex than looking at individuals and their behavior in cyberspace, the behavioral segmentation approach allows us to identify how the actual risks which organizations face in cyberspace map onto how these risks are perceived by the key decision-makers in those organizations. Highlighting the mismatch between the two creates opportunities for alleviating or even significantly diminishing many existing barriers to information sharing. In order to obtain the behavioral profile of a particular organization, the following components are taken into account: (i) the underlying business modeling approach (understanding the value creation, value proposition, and value capture of a particular organization); (ii) the underlying organizational structure (horizontal or hierarchical, etc.); (iii) the mapping of cybersecurity risks onto the underlying business modeling approach (understanding how various cybersecurity risks can damage the value creation, value proposition, and value capture of a particular organization); and (iv) a comparison of where current efforts and investment are concentrated with where they should be concentrated. For a group of businesses, the behavioral segmentation approach allows us to measure and highlight (a) the cyber risks and vulnerabilities which businesses perceive to be important versus (b) the cyber risks and vulnerabilities which businesses are most likely to suffer from based on (i), (ii), and (iii). By highlighting the inconsistencies between the two measures and finding the overlap in these inconsistencies for a range of businesses (i.e., concrete use cases where information sharing benefits the business models of all participating parties), we can show how the benefits of information sharing can outweigh the barriers; a simple sketch of this comparison follows below.
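The comparison in (a) and (b) can be sketched very simply: given a perceived-importance score and a modeled-likelihood score per risk category, the gaps show where a business under- or over-invests, and the overlap of those gaps across businesses points to shared use cases for information sharing. The risk categories and scores below are hypothetical.

```python
# Illustrative only: compare perceived importance of risks against modeled
# likelihood of suffering from them. Categories and scores are hypothetical.

perceived = {"phishing": 0.3, "ransomware": 0.8, "insider misuse": 0.2, "supplier compromise": 0.4}
modeled   = {"phishing": 0.7, "ransomware": 0.5, "insider misuse": 0.6, "supplier compromise": 0.4}

gaps = {risk: modeled[risk] - perceived[risk] for risk in perceived}

# Positive gap: the business underestimates a risk it is likely to face.
for risk, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    flag = "underestimated" if gap > 0 else ("overestimated" if gap < 0 else "aligned")
    print(f"{risk:20s} gap={gap:+.1f}  ({flag})")
```

Where several businesses in an industry show the same underestimated categories, those categories are natural candidates for shared threat-intelligence efforts, because closing the gap benefits every participant's business model.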
In later chapters we will offer an algorithmic example of how a multi-attribute model rooted in behavioral science could work in practice. We will also discuss further how cybersecurity risk assessment can be understood within the frame of the underlying business model of a particular organization.