In earlier chapters, we have discussed three types of solutions encompassed in the Canvas, Technology-driven, and Human-centered approaches. It is therefore logical to expect that, in the future, all three types of approaches will coexist.
Canvas Solutions of the Future
Given that future threats will become increasingly complex, solutions will also have to be complex and multi-layered. Let us for a moment imagine what a "possible" idealized cybersecurity architecture of the future might look like.
These layers are overlaid with three types of front-ends: Closed, Private, and Public interfaces. Each interface has its own network and is separated from the other interfaces by firewalls. In turn, the layers within each interface are separated by secondary firewalls in order to keep the organization functioning and to contain the vulnerabilities that arise from the usual challenge of securing the operational level, where a large number of parties and agents interact.
The Closed interface is the most important and keeps the organization functioning at its core even if the Public and Private interfaces fall victim to cybercriminals. Relating to our previous discussion, the Closed interface is where all essential digital valuables of the organization (those imperative to its survival) should be kept. This level holds the essential databases for operational, tactical, and strategic decisions. Its databases are local, and some high-value data is stored physically (in vaults) without any network connection (not even Ethernet) on solid-state drives or other secure media (this data can even be paper-based). A Closed interface implies that Wi-Fi networks are forbidden. Interaction with the strategic level must be physical, since physical breaches of buildings are much harder to accomplish than digital ones.
The Private interface is the one that keeps the organization running and is the largest of the three, as it communicates with the management and information systems of the company, e.g., Enterprise Resource Planning (ERP) systems. This level can be the keeper of important and meaningful digital valuables and can rely on cloud-based services as well as data storage to allow agility, resilience, and efficiency. Compromising the Private interface is a potential problem, but it is not critical and is unlikely to have catastrophic effects for the entire organization. The focus is on traceability, rapid detection, recovery, and overall resilience. An echelon (layered) structure and firewalls ensure that breaches are contained and slow down the attack's progress, allowing time for countermeasures.
The Public interface does not include any Strategic or Tactical layers; it only has an Operational layer. Therefore, only secondary digital valuables should be kept within the Public interface. Its main purpose is to communicate with the public, allowing user accounts, cloud-based storage, and other web services. In the case of a security failure, the focus is on rapid detection and recovery. The assumption is that this interface will be hacked; the only question is when.
The Strategic layer inside the Closed interface should be physically disconnected from the rest of the organization. The assumption is that if the Strategic Closed network is somehow connected to any other network, then it is vulnerable. Therefore, communication with this network must be restricted and controlled; any terminals able to access this network must be configured accordingly, preventing any data from being offloaded or uploaded without control (e.g., all USB ports should be cemented shut); any mobile phones or other smart devices should be removed on entry.
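To make the layering concrete, here is a minimal sketch that models the three interfaces and their layers as a default-deny access-control table. The type names and the is_access_allowed policy function are illustrative assumptions rather than a prescription, and the layer assignment for the Private interface is likewise assumed.

```python
from enum import Enum

class Interface(Enum):
    CLOSED = "closed"    # core digital valuables, air-gapped
    PRIVATE = "private"  # ERP and management systems
    PUBLIC = "public"    # customer-facing web services

class Layer(Enum):
    STRATEGIC = "strategic"
    TACTICAL = "tactical"
    OPERATIONAL = "operational"

# Layers assumed present in each interface (the Public interface only has an
# Operational layer, as described above; the Private assignment is an assumption).
LAYERS = {
    Interface.CLOSED: {Layer.STRATEGIC, Layer.TACTICAL, Layer.OPERATIONAL},
    Interface.PRIVATE: {Layer.STRATEGIC, Layer.TACTICAL, Layer.OPERATIONAL},
    Interface.PUBLIC: {Layer.OPERATIONAL},
}

def is_access_allowed(source: Interface, target: Interface,
                      target_layer: Layer, physical_presence: bool) -> bool:
    """Illustrative policy: traffic never crosses between interfaces, and the
    Closed interface is reachable only through physical presence."""
    if target_layer not in LAYERS[target]:
        return False
    if target is Interface.CLOSED:
        # The Closed interface is air-gapped: no network path from elsewhere.
        return physical_presence and source is Interface.CLOSED
    # Private and Public interfaces sit behind their own firewalls;
    # cross-interface traffic is denied by default.
    return source is target

print(is_access_allowed(Interface.PUBLIC, Interface.CLOSED,
                        Layer.STRATEGIC, physical_presence=False))  # False
```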
This, of course, is just one example of an “ideal” architecture. It may or may not fit the needs of all organizations, yet it provides a general guideline for a set of organizations operating across three layers and holding different types of digital valuables.
Note that while Fig. 9.1 may appear to summarize a possible architecture for perimeterized cyberdefense systems, it can easily be applied to de-perimeterized systems.1 Specifically, while many cybersecurity systems are built relative to the definition of certain security perimeters (for example, internal and external systems), a security perimeter does not necessarily represent a physical or digital boundary. It could be defined in terms of the policy enforcement mechanism which is applied to different segments of the security system (e.g., in terms of layered verification and validation segments). For example, many contemporary cyberdefense systems are designed based on the so-called "zero-trust" logic. This logic has the "never trust, always verify" principle at its core. In other words, instead of assuming that users located within a certain perimeter can be trusted, "zero-trust" systems abolish perimeters and assume that no one can be trusted. Such systems apply continuous verification and validation to all users. Note that our proposed architecture can be easily adapted to "zero-trust" systems by replacing the firewalls in Fig. 9.1 with verification and validation segments. Our suggested architecture adds one more aspect to the way "zero-trust" systems could work: we propose to apply multi-layered instead of single-layered verification tools to system users. It is important to mention here that the internal logic of the "zero-trust" concept is not always entirely clear (as suggested, for example, by Boris Taratine in his recent article on the "zero-trust paradox") [1]. Yet, the conceptual architecture described in Fig. 9.1 allows us to avoid this paradox [1].
Future Technology-Driven Solutions
Now let us turn to the technological solutions of the future. Can you imagine the gap between theoretical and empirical physics? While theoretical physics already offers us "The Theory of Everything" and theorizes about a world full of strings and quantum constructs, empirical physics is so far behind that we won't be able to find out whether theoretical physicists are right or wrong for many decades, if not hundreds of years. The gap between the theoretical literature on cybersecurity and practice is currently almost as wide. However, in the future, this gap will decrease and we should be able to put many of the existing theoretical concepts to the test. Below, we discuss only a handful of such methods, each of which is equally exciting.
Scenario testing:
Scenario-testing methodology implies that many organizations will reach a level of maturity which would allow them to simulate different attacks in order to understand the set of consequences which these attacks may bring. This would allow various businesses to collect simulated rather than historical data, which could then be fed into risk assessment and risk management algorithms. To some extent, the trend towards these solutions is already observed in some industries where ethical (white-hat) hackers are recruited to explore companies' zero-day vulnerabilities.
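As a hedged illustration of how simulated rather than historical data could feed risk assessment, the sketch below runs a toy Monte Carlo simulation over a few hypothetical attack scenarios; the scenario names, success probabilities, and loss ranges are invented for illustration only.

```python
import random

# Hypothetical attack scenarios: (name, probability of success per year,
# (low, high) range of losses if the attack succeeds). All figures invented.
SCENARIOS = [
    ("phishing campaign",     0.30, (10_000, 120_000)),
    ("ransomware on ERP",     0.05, (200_000, 2_000_000)),
    ("public web defacement", 0.20, (5_000, 50_000)),
]

def simulate_annual_loss(rng: random.Random) -> float:
    """One simulated year: each scenario may or may not materialize."""
    loss = 0.0
    for _, p_success, (low, high) in SCENARIOS:
        if rng.random() < p_success:
            loss += rng.uniform(low, high)
    return loss

def expected_loss(runs: int = 100_000, seed: int = 42) -> float:
    """Average simulated loss over many runs feeds the risk assessment."""
    rng = random.Random(seed)
    return sum(simulate_annual_loss(rng) for _ in range(runs)) / runs

print(f"Simulated expected annual loss: {expected_loss():,.0f}")
```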
Terraforming cyberspace:
Terraforming cyberspace techniques [2] essentially represent next-generation zero-trust systems. It is implied that such systems would allow bullet-proof zero-trust networks in which human errors or abuse of trust would be virtually impossible. It is envisioned that such systems will be fueled by artificial intelligence (AI) or even quantum computing. The way in which this methodology works is effectively to remove all possible human-related risks to the system. While the proponents of the terraforming methodology often comment on its efficiency, it is hard to imagine cyber systems not requiring any human input whatsoever, though it will be exciting to watch this space in the next few years.
Cryptography of the future:
Theoretical cryptography has already progressed way ahead of practice, where the limitations lie in the lack of computing power and the speed of encryption. Yet, with the development of new technologies, the border between theoretical and practical cryptography will eventually disappear. In that regard, we might observe serious breakthroughs in such tools as password managers, which will become more efficient and more difficult to "break".
Zero-knowledge proofs:
It is envisioned that zero-knowledge proofs—defined as “a method by which one party (the prover) can prove to another party (the verifier) that something is true, without revealing any information apart from the fact that this specific statement is true”—will become one of the new standards [3]. If successfully implemented, zero-knowledge proofs [4–7] will be particularly useful for ensuring human rights protection as well as privacy assurances in digital spaces of the future.
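To give a flavor of the idea, the sketch below walks through rounds of a Schnorr-style interactive proof with deliberately tiny parameters: the prover convinces the verifier that it knows the secret exponent x behind a public value y without revealing x. The group parameters are toy values chosen for readability, not security.

```python
import random

# Toy group parameters: p = 23, q = 11 (q divides p - 1), and g generates the
# subgroup of order q. Real deployments use groups of ~256-bit order.
p, q, g = 23, 11, 4

# Prover's secret and the corresponding public value y = g^x mod p.
x = 7
y = pow(g, x, p)

def schnorr_round(rng: random.Random) -> bool:
    # Commitment: prover picks a random nonce r and sends t = g^r mod p.
    r = rng.randrange(1, q)
    t = pow(g, r, p)
    # Challenge: verifier sends a random c.
    c = rng.randrange(1, q)
    # Response: prover sends s = r + c*x mod q (reveals nothing about x alone).
    s = (r + c * x) % q
    # Verification: g^s must equal t * y^c mod p.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

rng = random.Random(0)
print(all(schnorr_round(rng) for _ in range(10)))  # True for an honest prover
```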
Mobile targets:
Think of the way in which the US presidential security detail protects the president in case of an attack. The adopted practice is that the president should be put on a plane and their location kept uncertain and dynamic. While moving databases is, of course, costly and counterproductive, making digital valuables mobile could be beneficial and could be achieved in ways other than actually moving the data. For example, in the case of an attack where data is the target, adversaries would require not only the data itself but also the toolboxes which index the data. By making indexing mobile in the case of a suspected system compromise, it is possible to significantly complicate the job of cybercriminals.
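A minimal sketch of the "mobile index" idea, assuming a hypothetical MobileIndex class with a reshuffle routine invoked when a compromise is suspected: the records themselves stay put, but the mapping from record identifiers to storage locations is re-randomized, so a stolen copy of the index quickly goes stale.

```python
import secrets

class MobileIndex:
    """Toy index that can be re-shuffled on demand (hypothetical design)."""

    def __init__(self, record_ids):
        # record_id -> opaque storage token; tokens are what an attacker
        # would need in order to reach the underlying data.
        self._locations = {rid: secrets.token_hex(8) for rid in record_ids}

    def locate(self, record_id: str) -> str:
        return self._locations[record_id]

    def reshuffle(self) -> None:
        """Invalidate every previously issued location token."""
        for rid in self._locations:
            self._locations[rid] = secrets.token_hex(8)

index = MobileIndex(["customer-42", "invoice-7"])
old = index.locate("customer-42")
index.reshuffle()               # e.g., triggered by a suspected compromise
print(old != index.locate("customer-42"))  # True: the stolen pointer is stale
```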
Algorithmic active cyberdefense:
The literature on algorithmic ACD already discusses ways in which, in addition to smart honeypots, smart mazes could be designed to lure and capture adversaries. In the future, we will see more and more such systems being implemented in practice. The traps will become more sophisticated and algorithms will become more effective in capturing, diagnosing, and even predicting potential threats.
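As a hedged illustration, the sketch below models a very simple "maze" of decoy endpoints in which every touch of a decoy raises an alert score for the source; the endpoint names and the threshold are invented.

```python
from collections import defaultdict

# Decoy endpoints mixed in among real ones; the names are illustrative.
DECOYS = {"/admin-backup", "/db-export", "/old-vpn"}
ALERT_THRESHOLD = 2

touch_counts = defaultdict(int)

def handle_request(source_ip: str, path: str) -> str:
    """Serve plausible-looking decoy content and score the source."""
    if path in DECOYS:
        touch_counts[source_ip] += 1
        if touch_counts[source_ip] >= ALERT_THRESHOLD:
            return f"ALERT: {source_ip} is walking the decoy maze"
        return "200 OK (decoy content)"
    return "200 OK"

print(handle_request("203.0.113.9", "/admin-backup"))
print(handle_request("203.0.113.9", "/db-export"))  # triggers the alert
```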
Future Human-Centered Solutions
Defensive quantitative methods include behavioral segmentation for organizations and individuals; ambiguity and uncertainty estimations; and cost–benefit analysis of human-related vulnerabilities.
Defensive qualitative methods consist of cybersecurity hygiene methodologies.
Offensive quantitative methods incorporate behaviorally layered active cyberdefense (ACD) mechanisms; creation of fake digital personas (chatbots) to distract cybercriminals, etc.
Offensive qualitative methods include creativity through non-technical consultations.
Behavioral segmentation for organizations and individuals:
Current behavioral science techniques allow us to segment individuals in terms of their risk perceptions and risk attitudes in cyberspace (e.g., using the CyberDoSpeRT technique described in Chapter 7). Using these segmentations, risk in the system can be anticipated and modeled using the proportions of different types in the population and making inferences about the way in which these types of individuals learn. This, in turn, could be used to design policies and measures to tackle cybersecurity risks. By treating cybersecurity as a behavioral science, new methods of risk assessment can shed light on unknown or ambiguous events .
To give a specific example, assume that using a behavioral instrument (e.g., CyberDoSpeRT), you were able to classify your customers into four behavioral types $t$: Relaxed ($R$), Anxious ($A$), Ignorant ($I$), or Opportunistic ($O$), with population shares $s_R$, $s_A$, $s_I$, and $s_O$. Since the total percentage of the customers should add up to 100%, we can safely assume that $s_R + s_A + s_I + s_O = 1$. Since each behavioral type has a unique profile with regard to potentially risky cyber activities, it is possible to obtain a relative ranking $r_{jt}$ of all activities $a_j$, where $j = 1, \dots, n$, in terms of risk-taking by behavioral type ($t$). Since each activity is linked to compromising a particular set of digital valuables associated with a set of related costs $c_j$, it is then easy to define a probability and cost correspondence for each type $t$, where for each activity $a_j$ one can identify the probability $p_{jt}$ of a particular cyber risk materializing (calculated as the normalized relative ranking $p_{jt} = r_{jt} / \sum_{k=1}^{n} r_{kt}$) relative to the potential cost $c_j$ associated with this risk. Therefore, for each $t$, you can calculate the weighted sum of costs $C_t = \sum_{j=1}^{n} p_{jt} c_j$ as well as identify the activity with the largest coefficient $p_{jt} c_j$. The weighted sum of costs by behavioral type ($C_t$) will allow you to understand which type is most and least costly in your customer population, and the coefficient $p_{jt} c_j$ will help you determine, within each behavioral type, the customer activity which poses the major risk to your business. That way, you will be able to determine how to structure a multilayered social marketing campaign in order to (i) change the relative shares of different behavioral types in your customer population, as well as (ii) target specific costly customer behaviors.
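As a minimal numeric sketch of this calculation, the snippet below uses invented rankings and costs; the notation ($p_{jt}$, $c_j$, $C_t$) follows the paragraph above, and all figures are purely illustrative.

```python
# Behavioral types with illustrative relative risk rankings r_jt per activity.
activities = ["weak passwords", "clicking phishing links", "public Wi-Fi use"]
rankings = {                      # r_jt: higher = more risk-taking
    "Relaxed":       [3, 2, 1],
    "Anxious":       [1, 1, 1],
    "Ignorant":      [2, 3, 3],
    "Opportunistic": [3, 3, 2],
}
costs = [50_000, 120_000, 20_000]  # c_j: illustrative cost per activity

for t, r in rankings.items():
    total = sum(r)
    p = [r_j / total for r_j in r]                  # p_jt: normalized ranking
    weighted = [p_j * c_j for p_j, c_j in zip(p, costs)]
    C_t = sum(weighted)                             # weighted cost of type t
    riskiest = activities[max(range(len(weighted)), key=weighted.__getitem__)]
    print(f"{t:13s}  C_t = {C_t:9,.0f}   riskiest activity: {riskiest}")
```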
Ambiguity and uncertainty modeling:
Another option to consider is that we normally think of risk measures as single point values (a probability, a chance, etc.). But there is no reason why these measures should be single points. For example, in decision science we have ambiguity/uncertainty models and models of stochastic choice which allow us to obtain, for example, interval measures, distribution measures, or vector measures of probability. These measures might be more informative. Specifically, knowing that the probability of a particular event is between 45 and 60% might still be helpful. The difficulty is that we need good communication tools to translate these theoretical measures into practical methods, to make sure that they are actually informative and helpful to those working in the field.
Even though there are multiple ways in which ambiguity could be mathematized, one of the simplest approaches (a linear model) was proposed by Daniel Ellsberg in 1961. This approach is based on the concept of Knightian uncertainty [8], which implies that, in the absence of information about the precise probability distribution over events, individuals form subjective beliefs about those probabilities. Ellsberg argued that these subjective beliefs, rather than the actual probabilities, enter into the decision-making process and could be used to calculate optimal responses [9].
Let us consider the following example. As we saw from the previous chapters, a business should be able to identify a set of digital valuables which might be of interest to adversaries and rank these valuables according to their importance to the business' survival as essential, important, meaningful, and secondary. This ranking, in turn, will help a business formulate $\rho$—some degree of confidence in an estimated probability distribution $y^0$, which corresponds to its best judgment of the relative probabilities of a series of adverse events—as well as a set $Y^0$ of probability distributions it still considers plausible. If we then let $min_x$ correspond to the minimum expected payoff from an action $x$ (say, a new cybersecurity investment) when the probability distribution ranges over the set $Y^0$; and we let $est_x$ be the expected payoff for the business from the action $x$ corresponding to the estimated distribution $y^0$, we have a decision rule according to which, for each $x$, the business should maximize the index:

$$\rho \cdot est_x + (1 - \rho) \cdot min_x \tag{9.1}$$
Ellsberg offers an alternative formulation of (9.1), as Eq. (9.2):

$$min_x + \rho \cdot (est_x - min_x) \tag{9.2}$$
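A minimal numeric sketch of rule (9.1), assuming two hypothetical cybersecurity investments and an invented set of plausible probability distributions over three adverse events; all payoffs, probabilities, and the confidence level $\rho$ are illustrative.

```python
# Payoff of each candidate action under each adverse event (illustrative).
# Keys: hypothetical investments; values: payoffs per adverse event.
payoffs = {
    "invest in staff training": [-20_000, -60_000, -10_000],
    "buy new firewall":         [-50_000, -30_000, -40_000],
}

# The business's estimated distribution y0 over the three adverse events,
# plus other distributions it considers plausible (the set Y0).
y0 = [0.6, 0.3, 0.1]
plausible = [y0, [0.3, 0.6, 0.1], [0.2, 0.2, 0.6]]

rho = 0.7  # degree of confidence in the estimate y0

def expected(payoff, dist):
    return sum(p * v for p, v in zip(dist, payoff))

for action, payoff in payoffs.items():
    est_x = expected(payoff, y0)
    min_x = min(expected(payoff, d) for d in plausible)
    index = rho * est_x + (1 - rho) * min_x          # Eq. (9.1)
    print(f"{action:25s}  index = {index:,.0f}")
```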
Cost–benefit analysis of vulnerabilities:
Cybersecurity hygiene:
Current research in cybersecurity provides clues that different people have different propensities to detect cybersecurity threats. We even know that some people correctly identify the start of a potential attack 100% of the time. By exploring the behavioral traits (personality traits, risk and ambiguity attitudes, social preferences) of these people, we can identify the psychological features which distinguish them from the rest. This will help us design effective training programs for staff and customers in the future.
Behaviorally layered active cyberdefense:
Using such a model, businesses can predict a set of prospects (targets) and form expectations about where cybercriminals are most likely to strike, creating opportunities for better defense mechanisms and evidence gathering. Note that the model can be calibrated not only according to the different attributes of the prospects, but also according to assumptions about the adversaries' level of sophistication, expertise, etc.
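Purely as a hedged illustration of what such a prediction could look like, the sketch below scores hypothetical prospects by combining invented attributes (value, exposure, attack difficulty) with an assumed adversary sophistication level; the attributes, weights, and heuristic are not taken from any specific model.

```python
# Hypothetical prospects (targets) with illustrative attributes in [0, 1]:
# value of the data, exposure of the system, and difficulty of the attack.
prospects = {
    "customer database": {"value": 0.9, "exposure": 0.4, "difficulty": 0.7},
    "public web server": {"value": 0.3, "exposure": 0.9, "difficulty": 0.2},
    "payroll system":    {"value": 0.7, "exposure": 0.3, "difficulty": 0.6},
}

def attack_likelihood(attrs: dict, sophistication: float) -> float:
    """Invented heuristic: attackers prefer valuable, exposed targets, and
    higher sophistication discounts the deterrent effect of difficulty."""
    deterrence = attrs["difficulty"] * (1.0 - sophistication)
    return attrs["value"] * attrs["exposure"] * (1.0 - deterrence)

for level in (0.2, 0.8):  # low- vs. high-sophistication adversary
    ranked = sorted(prospects,
                    key=lambda t: attack_likelihood(prospects[t], level),
                    reverse=True)
    print(f"sophistication {level}: most likely target -> {ranked[0]}")
```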
Fake digital personas:
Developments in AI already allow us to create chatbots that interact with customers for various types of organizational tasks. There are also a number of fake digital personas on Twitter with millions of followers. The technology which allows us to design chatbots and digital personas may also offer interesting opportunities for cybersecurity, where bots could be created to liaise with cybercriminals and prevent phishing attacks. Obviously, the same technology is available to cybercriminals. Yet, not taking advantage of this opportunity for defense purposes would be irrational.
Creativity through non-technical conversations:
Obviously, the idea of having a broad conversation at different layers of the organization will stand the test of time, as only with inclusive citizenship and creativity is victory against adversaries possible.