© The Author(s) 2019
Ganna Pogrebna and Mark Skilton, Navigating New Cyber Risks
https://doi.org/10.1007/978-3-030-13527-0_9

9. Future Solutions

Ganna Pogrebna1,2 and Mark Skilton3
(1)
University of Birmingham, Birmingham, UK
(2)
The Alan Turing Institute, London, UK
(3)
Warwick Business School, University of Warwick, Coventry, UK
 
 
Ganna Pogrebna (Corresponding author)
 
Mark Skilton

In earlier chapters, we discussed three types of solutions: Canvas, Technology-driven, and Human-centered approaches. It is, therefore, logical to expect that, in the future, all three types of approaches will coexist.

Canvas Solutions of the Future

Considering the fact that future threats will become more and more complex, solutions also have to be complex and multi-layered. Let us for a second imagine what a “possible” idealized cybersecurity architecture of the future might look like.

Figure 9.1 represents the three layers of organization structure: Strategic, Tactical, and Operational. While access protocols may differ across organizations, one can generally think of the Strategic layer as accessible to a limited number of people in the organization (e.g., business owners, key decision-makers, board members, top managers, etc.). The Tactical layer is accessible to all members of the organization (e.g., employees, managers, etc.) but not to the outside world. The Operational layer is available to a wide range of users (including customers, partner organizations, as well as the general public).
Fig. 9.1

Architecture of an idealized system

These layers are overlaid with three types of front-ends: Closed, Private, and Public interfaces. Each interface has its own network and is separated from the other interfaces by firewalls. In turn, the layers within each interface are separated by secondary firewalls in order to keep the organization functioning and to contain the vulnerabilities that arise from the usual challenge of securing the operational level, where a large number of parties and agents interact.

The Closed interface is the most important and keeps the organization functioning at its core even if the Public and Private interfaces fall victim to cybercriminals. Relating to our previous discussion, the Closed interface is where all essential digital valuables of the organization (those imperative to its survival) should be kept. This level holds the essential databases for operational, tactical, and strategic decisions respectively. Its databases are local, and some high-value data is stored physically (in vaults) without any network connection (not even Ethernet) on solid-state drives or other secure media (this data can even be paper-based). A Closed interface implies that Wi-Fi networks are forbidden. Interaction with the strategic level must be physical, since physically breaching a building is much harder than breaching a digital system.

The Private interface is the one that keeps the organization running and is the largest of the three, as it communicates with the management and information systems of the company (e.g., Enterprise Resource Planning (ERP) systems). This level can be the keeper of important and meaningful digital valuables and can rely on cloud-based services as well as data storage to allow agility, resilience, and efficiency. Compromising the Private interface is a potential problem, but it is not critical and is unlikely to have catastrophic effects for the entire organization. The focus is on traceability, rapid detection, recovery, and overall resilience. An echelon (layered) structure and firewalls ensure that breaches are contained, slowing the attack's progress and allowing time for countermeasures.

The Public interface does not include any Strategic or Tactical layers; it only has an Operational layer. Therefore, only secondary digital valuables should be kept within the Public interface. Its main purpose is to communicate with the public, allowing user accounts, cloud-based storage, and other web services. The focus in case of security failure is on rapid detection and recovery. The working assumption is not whether this interface is going to be hacked (the assumption is that it will be), but when.

The Strategic layer inside the Closed interface should be physically disconnected from the rest of the organization. The assumption is that if the Strategic Closed network is somehow connected to any other network, then it is vulnerable. Therefore, communication with this network must be restricted and controlled; any terminals able to access this network must be set up accordingly, preventing any data from being offloaded or uploaded without control (e.g., all USB ports should be cemented shut); any mobile phones or other smart devices should be surrendered on entry.

This, of course, is just one example of an “ideal” architecture. It may or may not fit the needs of all organizations, yet it provides a general guideline for a set of organizations operating across three layers and holding different types of digital valuables.

Note that while Fig. 9.1 may appear to summarize a possible architecture for perimeterized cyberdefense systems, it can easily be applied to de-perimeterized systems.1 Specifically, while many cybersecurity systems are built around the definition of certain security perimeters (for example, internal and external systems), a security perimeter does not necessarily represent a physical or digital boundary. It could be defined in terms of the policy enforcement mechanism applied to different segments of the security system (e.g., in terms of layered verification and validation segments). For example, many contemporary cyberdefense systems are designed on the so-called "zero-trust" logic, which has the "never trust, always verify" principle at its core. In other words, instead of assuming that users located within a certain perimeter can be trusted, "zero-trust" systems abolish perimeters and assume that no one can be trusted; such systems apply continuous verification and validation to all users. Our proposed architecture can easily be adapted to "zero-trust" systems by replacing the firewalls in Fig. 9.1 with verification and validation segments. Our suggested architecture adds one more aspect to the way "zero-trust" systems could work: we propose applying multi-layered instead of single-layered verification tools to system users. It is important to mention here that the internal logic of the "zero-trust" concept is not always entirely clear (as suggested, for example, by Boris Taratine in his recent article on the "zero-trust paradox" [1]). Yet, the conceptual architecture described in Fig. 9.1 allows us to avoid this paradox.

Future Technology-Driven Solutions

Now let us turn to the technological solutions of the future. Consider the gap between theoretical and empirical physics: while theoretical physics already offers us "The Theory of Everything" and theorizes about a world full of strings and quantum constructs, empirical physics is so far behind that we will not be able to find out whether theoretical physicists are right or wrong for many decades, if not hundreds of years. The gap between the theoretical literature on cybersecurity and practice is currently almost as wide. In the future, however, this gap will narrow, and we should be able to put many of the existing theoretical concepts to the test. Below, we discuss only a handful of such methods, each of which is equally exciting.

Scenario testing:

Scenario-testing methodology implies that many organizations will reach a level of maturity which allows them to simulate different attacks in order to understand the consequences these attacks may bring. This would allow businesses to collect simulated rather than historical data, which could then be fed into risk assessment and risk management algorithms. To some extent, the trend towards such solutions is already observed in some industries, where ethical (white-hat) hackers are recruited to probe companies' zero-day vulnerabilities.
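As a minimal sketch of this idea, the Monte Carlo loop below simulates hypothetical attack scenarios to generate synthetic loss data of the kind that could feed a risk assessment algorithm. All scenario names, success probabilities, and loss figures are illustrative assumptions, not figures from this book.

```python
# Hedged sketch of scenario testing: simulate hypothetical attacks to
# generate synthetic (rather than historical) loss data.
import random

random.seed(1)

scenarios = {
    # scenario: (assumed probability the simulated attack succeeds in a
    #            given year, assumed loss if it does) -- hypothetical values
    "phishing_campaign":  (0.30, 20_000),
    "ransomware":         (0.10, 250_000),
    "insider_data_theft": (0.05, 100_000),
}

def simulate(runs=10_000):
    """Return a list of simulated annual losses, one per simulated year."""
    losses = []
    for _ in range(runs):
        total = sum(loss for p, loss in scenarios.values()
                    if random.random() < p)
        losses.append(total)
    return losses

losses = simulate()
print(f"Mean simulated annual loss: {sum(losses) / len(losses):,.0f}")
```

The resulting distribution of simulated losses, rather than a sparse historical record, can then be used to stress-test risk management assumptions.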

Terraforming cyberspace:

Terraforming cyberspace techniques [2] essentially represent next-generation zero-trust systems. It is implied that such systems would enable bullet-proof zero-trust networks in which human error or abuse of trust is virtually impossible. It is envisioned that such systems will be fueled by artificial intelligence (AI) or even quantum computing. The way in which this methodology works is effectively to remove all possible human-related risks to the system. While the proponents of the terraforming methodology often comment on its efficiency, it is hard to imagine cyber systems not requiring any human input whatsoever, though it will be exciting to watch this space in the next few years.

Cryptography of the future:

Theoretical cryptography has already progressed far ahead of practice, where the limitations lie in the lack of computing power and the speed of encryption. Yet, with the development of new technologies, the border between theoretical and practical cryptography will eventually become obsolete. In that regard, we may observe serious breakthroughs in tools such as password managers, which will become more efficient and more difficult to "break".

Zero-knowledge proofs:

It is envisioned that zero-knowledge proofs—defined as "a method by which one party (the prover) can prove to another party (the verifier) that something is true, without revealing any information apart from the fact that this specific statement is true"—will become one of the new standards [3]. If successfully implemented, zero-knowledge proofs [4–7] will be particularly useful for ensuring human rights protection as well as privacy assurances in digital spaces of the future.

Mobile targets:

Think of the way in which the US presidential security detail protects the president in case of an attack. The adopted practice is that the president should be put on a plane and his location kept uncertain and dynamic. While moving databases is, of course, costly and counterproductive, making digital valuables mobile could be beneficial and could be achieved in ways other than actually moving the data. For example, in the case of an attack where data is the target, adversaries would require not only the data itself but also the toolboxes which index the data. By making indexing mobile in the case of a suspected system compromise, it is possible to significantly complicate the job of cybercriminals.

Algorithmic active cyberdefense:

The literature on algorithmic active cyberdefense (ACD) already describes ways in which, in addition to smart honeypots, smart mazes could be designed to lure and capture adversaries. In the future, we will see more and more such systems being implemented in practice. The traps will become more sophisticated, and algorithms will become more effective in capturing, diagnosing, and even predicting potential threats.

Future Human-Centered Solutions

Human-centered solutions of the future can be split into Defensive and Offensive; within each category, we can identify quantitative and qualitative methods:
  • Defensive quantitative methods include behavioral segmentation for organizations and individuals; ambiguity and uncertainty estimations; and cost–benefit analysis of human-related vulnerabilities .

  • Defensive qualitative methods consist of cybersecurity hygiene methodologies.

  • Offensive quantitative methods incorporate behaviorally layered active cyberdefense (ACD) mechanisms; creation of fake digital personas (chatbots) to distract cybercriminals, etc.

  • Offensive qualitative methods include creativity through non-technical consultations.

Behavioral segmentation for organizations and individuals:

Current behavioral science techniques allow us to segment individuals in terms of their risk perceptions and risk attitudes in cyberspace (e.g., using the CyberDoSpeRT technique described in Chapter 7). Using these segmentations, risk in the system can be anticipated and modeled using the proportions of different types in the population and making inferences about the way in which these types of individuals learn. This, in turn, could be used to design policies and measures to tackle cybersecurity risks. By treating cybersecurity as a behavioral science, new methods of risk assessment can shed light on unknown or ambiguous events .

To give a specific example, assume that using a behavioral instrument (e.g., CyberDoSpeRT), you were able to classify your customers into four behavioral types $$x_{i}$$, where $$i \in \left( {R, A, I, O} \right)$$: either Relaxed ($$x_{R}$$), Anxious ($$x_{A}$$), Ignorant ($$x_{I}$$), or Opportunistic ($$x_{O}$$). Since the total percentage of the customers should add up to 100%, we can safely assume that $$\sum x_{i} = 1$$. Since each behavioral type has a unique profile with regard to $$N$$ potentially risky cyber activities, it is possible to obtain a relative ranking of all activities $$A_{j}$$, where $$j \in \left[ {1, N} \right]$$, in terms of risk-taking by behavioral type ($$A_{j} | x_{i}$$). Since each activity is linked to compromising a particular set of digital valuables associated with a set of related costs, it is then easy to define probability and cost correspondence for each type $$x_{i}$$, where for each activity one can identify probability $$p_{j}$$ of a particular cyber risk materializing (calculated as normalized relative ranking $$A_{j} | x_{i}$$) relative to potential cost $$c_{j}$$ associated with this risk. Therefore, for each $$x_{i}$$, you can calculate both $$\alpha |x_{i} = \mathop \sum \nolimits_{j = 1}^{N} p_{j} \cdot c_{j}$$ as well as identify $$\beta |x_{i} = \max_{1 \le j \le N} p_{j} \cdot c_{j}$$. The weighted sum of costs by behavioral type ($$\alpha |x_{i}$$) will allow you to understand which type is most and least costly in your customer population, and coefficient $$\beta |x_{i}$$ will help you determine, within each behavioral type, the customer activity which poses a major risk to your business. That way, you will be able to determine how to structure a multilayered social marketing campaign in order to (i) change the relative shares of different behavioral types in your customer population, as well as (ii) target specific costly customer behaviors.
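The calculation above fits in a few lines of code. The activity probabilities and costs below are hypothetical numbers chosen purely for illustration:

```python
# Sketch of the behavioral-segmentation calculation: for each behavioral
# type x_i, compute alpha|x_i = sum_j p_j * c_j and beta|x_i = max_j p_j * c_j.
# Per-type probabilities p_j (normalized relative rankings A_j | x_i) for
# N = 3 risky activities; all numbers are hypothetical.
profiles = {
    "Relaxed":       [0.5, 0.3, 0.2],
    "Anxious":       [0.1, 0.2, 0.7],
    "Ignorant":      [0.6, 0.3, 0.1],
    "Opportunistic": [0.2, 0.5, 0.3],
}
costs = [1_000, 5_000, 200]  # hypothetical potential cost c_j per activity

for behavioral_type, probs in profiles.items():
    weighted = [p * c for p, c in zip(probs, costs)]
    alpha = sum(weighted)            # alpha | x_i: expected cost for this type
    beta = max(weighted)             # beta  | x_i: costliest single activity
    riskiest = weighted.index(beta)  # which activity j drives beta
    print(f"{behavioral_type:>13}: alpha={alpha:7.1f}, "
          f"beta={beta:7.1f} (activity {riskiest + 1})")
```

Comparing `alpha` across types identifies the most and least costly segments, while `beta` points to the single behavior within each segment that a social marketing campaign should target first.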

Ambiguity and uncertainty modeling:

Another option to consider: we normally think of risk measures as single point values (a probability, a chance, etc.). But there is no reason why these measures should be points. For example, decision science offers ambiguity/uncertainty models and models of stochastic choice which allow us to obtain interval measures, distribution measures, or vector measures of probability. These measures might be more informative. Specifically, knowing that the probability of a particular event is between 45 and 60% can still be helpful. The difficulty is that we need good communication tools to translate these theoretical measures into practical methods, to make sure that they are actually informative and helpful to those working in the field.
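To see how even a crude interval measure can be actionable, the sketch below converts an interval probability estimate into interval bounds on expected cost; the 45–60% interval comes from the example above, while the cost figure is a hypothetical assumption.

```python
# Minimal sketch: an interval probability estimate yields interval bounds
# on expected cost instead of a single point estimate.

def expected_cost_bounds(p_low, p_high, cost):
    """Bounds on the expected cost of a single adverse event whose
    probability is only known to lie in [p_low, p_high]."""
    return p_low * cost, p_high * cost

# Hypothetical event: probability between 45% and 60%, cost 100,000 if it occurs.
low, high = expected_cost_bounds(0.45, 0.60, 100_000)
print(f"Expected cost lies between {low:,.0f} and {high:,.0f}")
```

Even without a precise probability, the resulting bounds may be enough to decide whether a countermeasure costing less than the lower bound is worth deploying.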

Even though there are multiple ways in which ambiguity can be mathematized, one of the easiest approaches (a linear model) was proposed by Daniel Ellsberg in 1961. This approach is based on the concept of Knightian uncertainty [8], which implies that in the absence of information about the precise probability distribution over events, individuals form subjective beliefs about those probabilities. Ellsberg argued that these subjective beliefs, rather than the actual probabilities, enter into the decision-making process and can be used to calculate optimal responses [9].

Let us consider the following example. As we saw in the previous chapters, a business should be able to identify a set of digital valuables which might be of interest to adversaries and rank these valuables according to their importance to the business' survival as essential, important, meaningful, and secondary. This ranking, in turn, will help a business formulate $$\rho$$—some degree of confidence over a probability distribution $$y_{0}$$, which corresponds to all judgments of the relative probability distributions over a series of adverse events. If we then let $$\min_{x}$$ correspond to the minimum expected payoff from an action $$x$$ (say, a new cybersecurity investment) when the probability distribution ranges over the set $$Y_{0}$$, and we let $$u\left( x \right)$$ be the expected payoff for the business from the action $$x$$ under the probability distribution $$y_{0}$$, we have a decision rule according to which, for each $$x$$, the business should maximize the index $$\varphi_{x}$$:

$$\varphi_{x} = \rho \cdot u\left( x \right) + \left( {1 - \rho } \right) \cdot {\min\nolimits_{x}}$$
(9.1)

Ellsberg offers an alternative formulation of (9.1), as Eq. (9.2):

$$\varphi_{x} = [\rho \cdot y_{0} + \left( {1 - \rho } \right) \cdot y_{x}^{\min } ]\left( x \right)$$
(9.2)
where $$y_{x}^{\min }$$ is the probability vector in $$Y_{0}$$ corresponding to $$\min_{x}$$ for action $$x$$, associated with a vector of potential payoffs $$X$$ [9]. This simple model illustrates how subjective probabilities can replace precise probabilities for determining optimal decision-making for cybersecurity.
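Ellsberg's linear rule in Eq. (9.1) is straightforward to operationalize. In the sketch below, the set of candidate actions, their payoffs, and the confidence level rho are all hypothetical values invented for illustration:

```python
# Sketch of Ellsberg's linear decision rule (Eq. 9.1):
#   phi_x = rho * u(x) + (1 - rho) * min_x
# All actions, payoffs, and rho are hypothetical.

actions = {
    # action: (u(x) under the best-guess distribution y0,
    #          minimum expected payoff min_x as y ranges over Y0)
    "invest_in_training": (120.0, 80.0),
    "buy_new_firewall":   (150.0, 40.0),
    "do_nothing":         (60.0, 60.0),
}
rho = 0.7  # degree of confidence in the best-guess distribution y0

def ellsberg_index(u_x, min_x, rho):
    """phi_x per Eq. (9.1): confidence-weighted mix of best-guess and
    worst-case expected payoffs."""
    return rho * u_x + (1 - rho) * min_x

scores = {a: ellsberg_index(u, m, rho) for a, (u, m) in actions.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

Note that a lower `rho` (less confidence in $$y_{0}$$) shifts weight toward the worst-case payoff, so a sufficiently ambiguity-averse business would switch away from the high-upside but fragile option.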

Cost–benefit analysis of vulnerabilities:

Behavioral science can also shed light on the costs and benefits associated with different threats by considering how various types of cybercriminal behaviors are associated with the underlying ecosystems and business models. That way, we can better predict where the cybercriminals are most likely to attack our systems or even increase the costs of attacking those systems (Fig. 9.2).
Fig. 9.2

Costs and benefits for an adversary

Cybersecurity hygiene:

Current research in cybersecurity provides clues that different people have a different propensity to detect cybersecurity threats. We even know that some people correctly identify the start of a potential attack 100% of the time. By exploring the behavioral traits (personality traits, risk and ambiguity attitudes, social preferences) of these people, we can identify the psychological features which distinguish them from the rest. This will help us design effective training programs for staff and customers in the future.

Behaviorally layered active cyber defense:

One striking difference between the tricks adversaries play on us and the way we defend ourselves against cybercriminals is that while they design highly personalized "services" for their victims, our defenses are built as "one-size-fits-all". By using the taxonomy of cybercriminals according to their underlying motivation (Fig. 3.2) and overlaying this taxonomy with potential business models, we can algorithmically model the propensity of each type of adversary to hit a set of organizational targets and anticipate an oncoming attack. By doing so, we can also ambush the attackers by waiting for them in the most likely places for the attack and then collecting forensic evidence in real time. Under these circumstances, recent advances in decision theory offer models like Decision Field Theory [10–12], which may be helpful in modeling adversarial behavior. Suppose the following decision problem is faced by a cybercriminal. The cybercriminal is selecting between three risky prospects (X, Y, and Z). Each prospect corresponds to a potential target within a particular business. This target is associated with a set of attributes: complexity of access, expected payoff, probability of getting caught, etc. Given potential preparation and consideration time (captured on the horizontal axis), the adversary evaluates valences which capture the adversarial state of preference at each point in time (see Fig. 9.3). Given consideration time and a random (Markov) walk, according to which different prospects might outperform each other at different points in time, the decision to go for a particular prospect is reached when the state valence reaches a threshold of 1. In our example, prospect Y is the most attractive.
Fig. 9.3

Active cyberdefense model according to Decision Field Theory

Using such a model, businesses can predict a set of prospects (targets) and form expectations about where cybercriminals are most likely to strike, creating opportunities for better defense mechanisms and evidence gathering. Note that the model can be calibrated not only according to the different attributes of the prospects, but also according to assumptions about the adversary's level of sophistication, expertise, etc.
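To make the mechanism concrete, the following sketch (our illustration, not a reference implementation from the cited literature) simulates a Decision-Field-Theory-style race in which the preference states of three prospects accumulate noisy valences over consideration time until one crosses the decision threshold. The mean valences, noise level, and threshold are hypothetical.

```python
# Illustrative Decision-Field-Theory-style simulation: preference states for
# three prospects follow noisy random walks; the first to cross the
# threshold determines the adversary's choice.
import random

random.seed(42)

# Hypothetical mean valence of each prospect per time step, summarizing its
# attribute mix (expected payoff, complexity of access, risk of capture).
mean_valence = {"X": 0.010, "Y": 0.020, "Z": 0.005}
THRESHOLD = 1.0  # decision threshold on the preference state
NOISE = 0.05     # std. dev. of the random-walk component

state = {prospect: 0.0 for prospect in mean_valence}
t = 0
chosen = None
while chosen is None:
    t += 1
    for prospect, mu in mean_valence.items():
        state[prospect] += random.gauss(mu, NOISE)  # accumulate noisy valence
    crossed = [p for p, s in state.items() if s >= THRESHOLD]
    if crossed:
        # if several cross simultaneously, pick the strongest preference
        chosen = max(crossed, key=state.get)

print(f"Prospect {chosen} chosen after {t} steps")
```

Because the walk is stochastic, the prospect with the highest mean valence is only the most likely, not the guaranteed, winner; rerunning with different seeds illustrates how attack predictions remain probabilistic.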

Fake digital personas:

Developments in AI already allow us to create chatbots that talk to customers for various types of organizational tasks. There are also a number of fake digital personas on Twitter with millions of followers. The technology that allows us to design chatbots and digital personas may also offer interesting opportunities for cybersecurity, where bots could be created to liaise with cybercriminals and prevent phishing attacks. Obviously, the same technology is available to cybercriminals. Yet, not taking advantage of this opportunity for defensive purposes would be irrational.

Creativity through non-technical conversations:

Obviously, the idea of having a broad conversation across the different layers of an organization will survive the test of the future, as victory against adversaries is possible only with inclusive citizenship and creativity.