© The Author(s) 2019
Ganna Pogrebna and Mark Skilton, Navigating New Cyber Risks, https://doi.org/10.1007/978-3-030-13527-0_12

12. Navigating a Safe Space

Ganna Pogrebna1, 2   and Mark Skilton3  
(1)
University of Birmingham, Birmingham, UK
(2)
The Alan Turing Institute, London, UK
(3)
Warwick Business School, University of Warwick, Coventry, UK
 
 
Ganna Pogrebna (Corresponding author)
 
Mark Skilton

How to PLAN a Safe Space

From the 1970s through the 1980s, with the advent of what we can describe as modern enterprise computing, organizations typically bought their networks and computers at the department level. Departments all bought different networks and systems, and these did not talk to each other across the enterprise.

As a result, we are now faced with multiple efficiency pressures as we try to centralize everything into a single system. Centralization increases purchasing power, eases maintenance, and standardizes the recruitment and training of staff. A lack of diversity is brilliant for efficiency, but it does not help resilience.

Look at the characteristics of resilient habitats in the natural world; they tend to have:
  • Diversity

  • Reserves

  • A certain sense of openness

Increasing the complexity of the system through multiple data records can help in building traps for adversaries, where some of this data may be a decoy or a “honeypot” to attract attackers. This can help in collecting evidence for a legal case against an adversary who may unwittingly collect or use the trapped data. Having multiple data sources also increases the complexity of the system environment, making it more difficult for adversaries to move through the network undetected, as they have to deal with more network and data storage features, some of which may be traps. Their activity can create what is called a “signature”: a pattern in the system environment suggesting an anomalous transaction or system behavior, which may indicate the presence of an adversary who has entered the system. Creating more complexity in the system thus creates more obstacles for potential adversaries, although it also makes the environment more challenging for defenders to monitor. This is arguably better than having one central system with all your data and assets in one place, which presents just a single target to the attacker. This is often referred to as the “Pearl Harbor” problem: most of the American battle fleet in World War Two was in one Hawaiian harbor, presenting a single attack target to the Japanese aircraft.
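As a minimal sketch of the decoy-data idea (in Python; the record values and the alerting behavior are invented for illustration), one can seed “canary” records among real data and treat any query that returns one as an anomaly signature:

```python
# Hedged sketch of the decoy-data idea: seed "canary" records among real data
# and treat any query that returns one as an anomaly signature. The record
# values and the alerting behavior are invented for illustration.
CANARY_RECORDS = {
    "j.doe.decoy@example.com",
    "card-4929-0000-0000-0001",
}

def audit_query_results(rows):
    """Call on the result set of every outbound data query."""
    touched = CANARY_RECORDS.intersection(rows)
    if touched:
        # In practice: alert the SOC with the querying identity and path.
        print(f"ALERT: decoy records accessed: {touched}")
    return touched

audit_query_results(["real.user@example.com", "j.doe.decoy@example.com"])
```

No legitimate query should ever touch a canary value, so a single match is a strong signal that someone is browsing data they have no reason to read.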

With the immersive and pervasive nature of digital technology connecting personal data, transactions, appliances, buildings, vehicles and many other things in society, the physical world and the virtual digital world are becoming more intertwined as an attack surface.

In assessing our risk, we typically start with uncertainty and then map from uncertainty to risk, defining the likelihood of each type of risk and the percentage confidence interval of a cyberattack in the next, say, twelve months: for example, an attack through a website to steal our credit card information. We tend to do this very quickly, because our starting point is a very dangerous situation and uncertainty is not a nice place to be, so we make rapid calculations to try to move to a safer position. This raises a critical question about the effectiveness of resilience planning, building, and response, because you may not be targeting efforts at the real problem.
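A hedged sketch of this uncertainty-to-risk mapping, with all probabilities and impact figures invented for illustration, is simply to rank scenarios by expected loss over the planning horizon:

```python
# Hedged sketch: assign each attack type an estimated probability over the
# next twelve months and an estimated impact, then rank by expected loss.
# All figures are invented for illustration.
scenarios = [
    ("website credit-card skimming", 0.25, 400_000),
    ("ransomware on file servers", 0.10, 2_000_000),
    ("insider data exfiltration", 0.05, 5_000_000),
]
for name, p, impact in sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True):
    print(f"{name:>30}: expected 12-month loss ~ {p * impact:,.0f}")
```

The danger the paragraph above describes is doing this calculation too quickly: the estimates that go in are exactly the numbers produced under stress.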

Make the Space Hostile to Adversaries

One of the advantages of creating a rich environment with lots of diverse features is that you get a “web of evidence” as a natural consequence. To take a specific example, if somebody hacks into a Linux or Unix system and changes the configuration files in its directories, they can change or replace the files and even forge the timestamps. But there are also things called “inode” records, which are created at the time of a file’s inception. If a file has been tampered with, you can check this source for any suspicious changes. You can create an environment of overlapping and interleaving strands of information to cross-check and look for changes.
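To illustrate the inode cross-check, here is a minimal sketch (Python; the helper name and tolerance value are assumptions). On Unix systems the inode change time (ctime) is maintained by the kernel and cannot be set directly, so a modification time that has been back-dated below it is suspicious:

```python
import os
import time

def check_tampering(path, tolerance=60):
    """Hedged sketch: flag files whose modification time (mtime, forgeable
    with `touch`) predates the inode change time (ctime), which the kernel
    updates and which cannot be set directly. A rewrite followed by a
    back-dated mtime leaves exactly this mismatch. The tolerance (seconds)
    is an arbitrary assumption."""
    st = os.stat(path)
    if st.st_mtime < st.st_ctime - tolerance:
        print(f"SUSPICIOUS: {path}: mtime {time.ctime(st.st_mtime)} "
              f"predates inode change {time.ctime(st.st_ctime)}")

check_tampering("/etc/passwd")
```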

Attackers will go for the weak points in a system. An attacker can compose an attack from a collection of actions that individually are not devastating but together are effective. This strategy works where cyberdefenses have been built for discrete parts of the total company system without considering what happens if many or all parts are attacked together. You can fix individual weak points in a system, but the attacker may be composing an attack over the horizon that you cannot see or anticipate from the combination of actions they may take against your system.

It is about the cost to the attacker of a successful attack versus the cost to the defender of protecting the system. Take the example of ransomware used against a modern premium electric car: the attacker disables the car remotely and demands a payment to release the vehicle controls. The attacker may take several months to develop this, including having to buy several cars to test it on until they know it will work. As a result, the car manufacturer may have to recall all the vehicles for repair, at great cost. What happens if the attacker uses the same system the automotive supplier uses to install over-the-air remote updates to the vehicles? The attacker could infect many vehicles remotely with the ransomware. Alternatively, the vehicle manufacturer might be able to push over-the-air protection to onboard vehicles at the chip level to block new attacks. At some point, it may not be cost-effective for the attacker to plan and mount the attack if the cost to the vehicle manufacturer is lower and the problem easier to fix. It is a continual battle between the cost of attack and the cost of defense. There are, however, practical ways in which the landscape can be made economically hostile to attackers.
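The economics can be sketched in a few lines; all figures below are invented, and the point is only that defense succeeds when it pushes the attacker's expected payoff below their cost:

```python
# Hedged sketch of the attack-economics argument: an attack is rational for
# the adversary only while expected payoff exceeds the cost of developing and
# mounting it. All numbers are invented for illustration.
def attack_is_viable(dev_cost, p_success, expected_payoff):
    return p_success * expected_payoff > dev_cost

# Months of development and test vehicles make the attack expensive...
print(attack_is_viable(500_000, 0.6, 2_000_000))   # True: still worth mounting
# ...but a cheap chip-level over-the-air patch slashes the success probability.
print(attack_is_viable(500_000, 0.05, 2_000_000))  # False: no longer economic
```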

Recruiting Cybersecurity People with the Right Motivations

Hackers are not concerned about organizational department boundaries or which operating systems a company prefers. They will do what they want and work with whomever they want.

On the company’s side, even if you have the right level of skills, you need to ensure your employees are correctly motivated and that they care. This is one of the key issues in cybersecurity recruitment. Traditional recruiting may identify motivations, but these may not translate into job performance later. It can be beneficial to get to know potential new recruits before they become part of the organization; recruitment fairs, internships, and pre-screening are often useful in this regard.

Even after recruitment, the characteristics of personal resilience and perseverance may not become evident until weeks later when, in the middle of an attack, a new recruit has to think clearly under stress and pursue actions to respond to the threats. These are the real tests. Such people are key to cybersecurity, but they must also be able to work as team players to coordinate actions as a group. Creativity as well as technical skill, a healthy hunger to learn, and motivation are also key. Having a range of experiences and qualifications in the cybersecurity group also helps enrich the team dynamics for creativity and thought-pattern approaches.

Quantify the Consequences of Potential Cyberattacks

If an adversarial agent from the outside wants to effect maximum disruption, we need to ask how we can defend the most vulnerable parts of the company (the attack surface) and the sources of attack entry (the attack vectors): for example, the personal activity of an employee, subcontractor or customer; key company products and services and their channels to market; key enterprise assets and buildings; or the key transport roads, utilities and hospital services of a city, depending on your scope of planning.

Many scenarios of this kind can be mapped out in planning cyberdefense. From examining vulnerabilities, consequences, threats, and defense options, you can prioritize where you want to put your resources. You defend the points of weakness and exploits that have the highest index of vulnerability combined with the consequences. This requires a quantification of the impact of the consequences if a given point of vulnerability is attacked; a high vulnerability to attack may not be a priority if we know the consequences are minor. You need to model all these factors when planning cyber responses. You can work out the optimal allocation of the defensive resources you have available, based on conceptual and intellectual framework models, to calculate the costs of security and its predicted level of adequacy. This is a formal mathematical problem; it does not necessarily capture every issue, but at least it provides a conceptual framework to think with.
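A minimal sketch of this prioritization (Python; the asset names, likelihoods, and consequence scores are invented assumptions) shows why the most vulnerable point is not always the top priority:

```python
# Hedged sketch of prioritizing defenses by vulnerability x consequence.
# Asset names, likelihoods, and consequence scores are invented assumptions.
assets = [
    {"name": "public website", "likelihood": 0.8, "consequence": 3},
    {"name": "payroll database", "likelihood": 0.3, "consequence": 9},
    {"name": "test server", "likelihood": 0.9, "consequence": 1},
]
for a in assets:
    a["risk"] = a["likelihood"] * a["consequence"]
for a in sorted(assets, key=lambda a: a["risk"], reverse=True):
    print(f'{a["name"]:>16}: risk index {a["risk"]:.1f}')
```

Note that the test server, despite being the most vulnerable, ranks last once consequences are factored in.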

Some events may not initially be a cyber risk issue. For example, regions that are vulnerable to earthquakes, such as the West Coast of the USA and Silicon Valley, contain potentially many vulnerable people. A major cybersecurity attack on infrastructure is in many ways similar to an earthquake: you may have to evacuate a city and restore infrastructure, recover homes, businesses, and people’s lives. Can you get people out fast enough? The answer is that you may not be able to, as transport routes are not designed for mass evacuation, and the transport controls may be down. Similar issues are faced on a different scale in the evacuation of a football stadium or large buildings; here the problem is simply applied at the city level. How do you do this safely and securely? Following the triple disaster of the Great East Japan earthquake in 2011, and the resulting devastating tsunami and nuclear crisis at the Fukushima nuclear power plant [1, 2], swift decisions had to be taken at government level without waiting for reports from areas where communications had broken down.

AI Changes Your Cybersecurity Planning Approach

The field of cybersecurity increasingly works with other fields, including, for example, ML automation, the legal profession, ethicists, and social science and social media specialists. Yet these fields are themselves being impacted and disrupted by automation.

Cybersecurity may be one of the most protected professions in the world right now, in that it is in increasingly high demand as threat vectors multiply and the volume of cyberattacks spreads into all corporate, social, and government spheres.

Automation is still useful in assisting and replacing some cybersecurity tasks, but creativity and experience remain the key factors when it comes to tackling things that cannot necessarily be programmed and automated.

Some cybersecurity attack vectors can be anticipated. Other areas of increasingly complex behaviors and patterns across many system end-points in, for example, thousands of ATM-attached networks, may be better monitored by automation that can work continuously, 24 hours a day all year without rest.

New forms of AI automation are now being discovered and used by hackers to monitor and mimic human activity in a new phase of cyberattacks.

But in other situations, such as fake news, which is a different kind of attack vector, it may be difficult to employ automation fully due to the semantic nuances involved. Automation progress will be able to work through these complex challenges, but it remains an issue of equilibrium between constantly evolving threats and responses. This is the struggle.

The Dangers of a Machine Making Decisions in Cybersecurity

From a defender’s perspective, suppose you invest in AI for defense but, after a time, it blocks access to customers as part of an automated action, causing significant financial losses for customers and millions to resolve, even though it may have saved hundreds of millions up to that point. How do the company executive board members respond? Often, at that moment, they will look at each individual incident rather than the overall picture, as the potential reputational risk takes precedence over other considerations. The response may involve operational issues such as fixing the algorithmic rules, but in other cases the incident may reflect an underlying weakness in the use of AI. This is a recurring issue when statistical methods are used to “learn” automated rules in AI. You need to train the machine learning algorithms, but these can develop rules that evolve over time to do things that are not in value alignment with the original policies being automated. A well-reported example is the training of self-driving automobiles, where the algorithms may not be fit for purpose in all scenarios, resulting in damage or loss of life. This may be a combination of poorly defined data dimensions leaving the learning algorithm unable to respond safely to achieve its objective function (such as “do not hit an object”), or of learning from training data that fails to recognize a state change in its environment: a failure in the ability to automate sensory feedback control appropriately for the given objective function. These examples, and many others, are new phenomena emerging in the field of machine learning and artificial intelligence, creating new risks. These include cyberattacks and cyberdefense where the machine learning data or algorithms may be compromised, or where the technology may itself be used to carry out a cyberattack. This is a subject outside the scope of this book.

Internet of Things Changes Your Cybersecurity Planning Approach

Today, most organizations treat cybersecurity as a cost, a daunting proposition that they will only implement as a last resort, because where is the benefit? But business models are changing because of the impact of cybersecurity. To cite one example, Philips Lighting, now called Signify [3], is moving from selling “light bulbs” to smart cities to selling “lighting-as-a-service”. It is this “as-a-service” that is the key issue, if your revenues are moving from manufacturing goods to selling updates and support in the longer term; employment and employees will change to support this. But cybersecurity also becomes the fundamental platform enabling “everything-as-a-service”, because you must be able to talk to these devices through their total lifecycle, which includes management, updates, and billing for device use.

What the IoT offers compared to the earlier machine-to-machine (M2M) connections is these new business models that sit on top. This is evolving across all industry sectors: it becomes health-as-a-service, lighting-as-a-service, cars-as-a-service, and so on. Uber, Zipcar, and others developed further as-a-service models in what has been called the uberization effect, after the online taxi company Uber, which pioneered this business model. This opened up assets and services with direct contact between buyers and the owners of these objects, facilities, or work services. The physicality of the object has a different dimension in the cyberspace digital world. The ability to interact with different people, different public and private networks, and multiple vendors to deliver different services is the next wave. With the IoT era, this is the reality.
  • How do you validate who you are working with?

  • How do you isolate data so that it is harder to identify the person?

  • How do you build entire systems from different vendors who may not talk with each other, may be competitors, and will not share critical information with other people in the system?

  • How do you build a secure system that may connect to third-party technology or external networks and systems whose configuration you may not know or have access to, but which have access to your enterprise system through connection services, such as supply chain business-to-business (B2B) or business-to-customer (B2C) services connected to many suppliers and other external companies, or bring-your-own-device (BYOD) mobile services?

  • How do you build a framework where these different vendors and systems inter-work, inter-operate securely?

  • How do I formally prove that this device or system is trustworthy to receive my data? (See the sketch following this list.)

  • How do I know that this device or system is able to manage my data without having to know everything about me or the information context behind my data?
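Returning to the question of formally proving a device is trustworthy to receive data, the following is a minimal, hedged sketch using only the Python standard library: a challenge-response check that the device holds a provisioned secret. Real IoT attestation would use per-device certificates and a PKI rather than a single shared key; the key handling here is purely illustrative:

```python
import hashlib
import hmac
import secrets

# Hedged sketch: a challenge-response proof that a device holds a provisioned
# secret before we send it data. The shared key below is an assumption; real
# IoT attestation would use per-device certificates and a PKI.
DEVICE_KEY = secrets.token_bytes(32)  # provisioned into the device at manufacture

def device_respond(challenge: bytes) -> bytes:
    """Runs on the device: prove possession of DEVICE_KEY."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def verify_device() -> bool:
    """Runs on our side: issue a fresh challenge, check the keyed response."""
    challenge = secrets.token_bytes(16)  # fresh nonce defeats replayed answers
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(device_respond(challenge), expected)

print("device trusted:", verify_device())
```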

Build Appropriate Tracking into Spaces, Things, and People

Rather than just designing systems to prevent hackers from getting in, we may want to let them in and observe and learn what they are doing, without, of course, allowing them to steal information or impact service levels. “Honeypots” are a computer security technique [4]: a dedicated server on a network set up to attract, deflect and detect, acting as a decoy for potential hackers as part of an intrusion detection system (IDS) [4, 5].
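A low-interaction honeypot can be surprisingly small. The sketch below (Python standard library; the port choice, banner, and log format are assumptions) listens on a port that no legitimate service uses, so any connection is by construction suspicious and worth logging for the IDS:

```python
import datetime
import socket

# Hedged sketch of a low-interaction honeypot. The port, banner, and log
# format are assumptions; production deceptions emulate full services.
DECOY_PORT = 2323  # telnet-like port on which nothing legitimate runs

def run_honeypot(host="0.0.0.0", port=DECOY_PORT):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:  # blocks forever; run as its own process
            conn, addr = srv.accept()
            with conn:
                # Any contact is suspicious by construction: log it for the IDS.
                print(f"{datetime.datetime.now().isoformat()} "
                      f"decoy touched by {addr[0]}:{addr[1]}")
                conn.sendall(b"login: ")  # bait banner to hold the attacker

if __name__ == "__main__":
    run_honeypot()
```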

How to trace events when things are gone—the example of flight MH370.

We need to add markers to objects so that we can track them if they are moved or stolen. Take the example of Malaysian Airlines flight MH370, which disappeared on 8 March 2014 in the southern Indian Ocean [6]. Despite an extensive air and sea search, the location of the aircraft and its occupants remains unknown. However, some debris has been recovered, consistent with having drifted for nearly two years from the area in which the impact is thought to have occurred [7].

There was no easy, distinct way of tracking the aircraft parts.

How to BUILD Safe Spaces

In cybersecurity, following an incident, we need to find out what happened, but this is often not easy to establish. A lack of organizational disclosure regarding cyber incidents does not help in understanding events from the perspective of knowledge sharing and analysis. This problem is compounded by people not reporting, or not understanding, when they have been attacked and are victims of cybercrime. Appropriate levels of investment and resources need to be available to build effective investigation and defenses against cyberattacks. Using fines alone as a mechanism to change the behavior of companies that fail to respond adequately to cyberattacks is an incomplete answer when enforcing new government regulation such as the European GDPR; it needs to be balanced with encouraging companies to invest in cyberdefense and training to prevent attacks, not just more fines. Many companies, particularly small to medium-sized enterprises (SMEs), do not have the resources to handle cyberattacks.

People experience fear and desire in relation to cyberattacks: fear that they will not be able to secure themselves, so they avoid the subject. The image of cybersecurity as too daunting and complicated drives this behavior. In threatening situations, people have a tendency to de-sensitize themselves to stressful emotions by imagining they will not be attacked. A positive fear appeal can promote a “danger control process”, which can lead to a successful outcome as the message recipient undertakes a cognitive process to avert a threat. But fear appeals in isolation do not, by definition, provide effective or adequate assurance, and organizations should not rely upon this mechanism. Neuroscience suggests these fear and desire behaviors are part of the nervous system’s function to reduce surprise and optimize actions [8–11].

Use Deception Technologies

New developments in cyber defense are emerging as a consequence of realizing that attacks will happen and that not all vulnerabilities will be discovered in time. One key development has been the use of deception through technologies that create decoys and other misdirection techniques as part of a cyber-deception solution: in effect, creating a “maze of deception” to slow down or deflect stealthy attackers, or to collect evidence to identify or entrap them.

General thinking today is geared towards building cyberdefenses to stop hackers getting into systems; this is called a defense-in-depth strategy. But is this the right approach? Maybe you want to let them in, collect data on them, and make things hard enough that the effort and cost involved make them give up the attack. Given the complexity of systems and the ever-present vulnerabilities and exploits you may not yet have discovered, from zero-day to polymorphic code attacks, this is a better strategy than trying to second-guess everything.

There is a lot of cybersecurity deception technology, from honeypots and automated traps to decoys that imitate target systems such as cash machines (ATMs), medical devices, and internal network switches and routers. Firewalls and the end-point security of devices, from mobile cellphones to IoT devices, cannot defend a perimeter with 100% certainty. Nor can encryption protect communications networks and database servers from access via credentials that may have been compromised. Hackers seek to gain backdoor entry into a corporate network, typically aiming to exploit and navigate networks to identify and exfiltrate data.

A backdoor is a method, often secret, of bypassing normal authentication or encryption in a computer system, a product, or an embedded device such as a home router; it may also be embodied as part of a cryptosystem, an algorithm, a chipset, or a “homunculus computer”: a tiny computer-within-a-computer, such as that found in Intel’s AMT technology [12]. Backdoors are often used for securing remote access to a computer or obtaining access to plaintext in cryptographic systems. The concept of a homunculus computer is a system within a system that mirrors what that system does and monitors its function. It is drawn from human neurobiology, representing the brain and the way it functions in processing collections of sensory inputs and output feedback, which are interpreted by the “mind” with its ability to perceive, think, and reason about the external world.

Deception technologies are part of the Security Information and Event Management (SIEM) toolset; they differ from IDS in that they allow automated static and dynamic analysis of injected malware and deliver these reports automatically to security operations personnel. Deception technology may also identify, through indicators of compromise (IoC), suspect end-points that are part of the compromise cycle. Automation also allows for an automated memory analysis of a suspect end-point, which can then be automatically isolated.
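As a rough illustration of IoC matching (Python; the hash set is a placeholder that would in practice be loaded from a threat-intelligence report), a sweep can hash files on an end-point against known-bad digests and flag matches for isolation:

```python
import hashlib
import pathlib

# Hedged sketch of an IoC sweep. The digest below is a placeholder (the
# SHA-256 of an empty file); a real sweep would load hashes published in a
# threat-intelligence report.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sweep(root):
    """Yield files whose contents match a known-bad hash."""
    for path in pathlib.Path(root).rglob("*"):
        try:
            if path.is_file() and hashlib.sha256(
                    path.read_bytes()).hexdigest() in KNOWN_BAD_SHA256:
                yield path
        except OSError:
            continue  # unreadable file: skip rather than crash the sweep

for hit in sweep("/tmp"):
    print("IoC match, isolate end-point and trigger memory analysis:", hit)
```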

TrapX is one vendor of deception technology that proved effective at deceiving the TeslaCrypt, Locky, and 7ev3n ransomware families, known as advanced persistent threats (APTs), luring hackers away from valuable data assets [13]. These deception technologies are able, for example, to engage a ransomware attack with decoy resources while isolating the infection points and alerting cyberdefense blue teams.

Fidelis Cybersecurity is another deception technology vendor, whose deployment was made public by First Midwest Bank, a financial institution that used the technology to set up decoy solutions to identify patterns of anomalies in its networks and end-points. This is particularly relevant when operating in a highly regulated industry: the bank is subject to the Federal Financial Institution Examination Council’s uniform principles and standards for financial institutions, and its processes are periodically tested for compliance with a litany of laws and regulations [14]. Examples of deception technology vendors include [15]:
  • Rapid7

  • Hexi Cybersecurity

  • Smokescreen Technologies

  • TrapX

  • Fidelis Cybersecurity

  • Attivo Networks

  • Illusive Networks

Deception is one technology that can significantly reduce dwell time. On top of this, it is easy to install, does not require a lot of resources to manage, and increases the effectiveness and efficiency of security teams [16].

Kill Chain Concept

Deception technologies are typically based on the kill chain concept, a set of strategies developed in 2011 by Lockheed Martin to categorize the different phases of a cyberattack, which they describe as Adversary Campaigns and Intrusion Kill Chains. It includes the following steps [17]:
  1. Reconnaissance,

  2. Weaponization,

  3. Delivery,

  4. Exploitation,

  5. Installation,

  6. Command and control, and

  7. Action on objectives.

This work by Lockheed Martin determined that conventional network defense tools, such as IDS and antivirus, focus on the vulnerability component of risk, and traditional incident response methodology presupposes a successful intrusion. An evolution in the goals and sophistication of computer network intrusions has rendered these approaches insufficient for certain actors. A new class of threats, appropriately dubbed the “advanced persistent threat” (APT), represents well-resourced and trained adversaries that conduct multiyear intrusion campaigns targeting highly sensitive economic, proprietary, or national security information. The evolution of APTs necessitates an intelligence-based model because in this model the defenders mitigate not just vulnerability, but also the threat component of risk.
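As a sketch of how defenders operationalize the model (the alert names and their phase mapping below are invented for illustration), alerts can be tagged with their kill chain phase; the earlier in the chain an intrusion is detected, the cheaper the mitigation tends to be:

```python
from enum import IntEnum

class KillChainPhase(IntEnum):
    """The seven phases of the Lockheed Martin intrusion kill chain [17]."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

# Illustrative mapping of alert types to phases; the alert names are invented.
ALERT_PHASE = {
    "port_scan_detected": KillChainPhase.RECONNAISSANCE,
    "phishing_email_opened": KillChainPhase.DELIVERY,
    "new_service_installed": KillChainPhase.INSTALLATION,
    "beacon_to_unknown_host": KillChainPhase.COMMAND_AND_CONTROL,
}

def earliest_phase(alerts):
    """The further left in the chain you detect, the cheaper the response."""
    phases = [ALERT_PHASE[a] for a in alerts if a in ALERT_PHASE]
    return min(phases) if phases else None

print(earliest_phase(["beacon_to_unknown_host", "phishing_email_opened"]))
# -> KillChainPhase.DELIVERY: this campaign was caught at the delivery stage
```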

A key issue is how far to advertise the existence of deception technologies. It is one thing to track and trace what is going on to establish who is attacking (the attribution) and then to understand the motives. But there is a fine line between that and entrapment, which may amount to tricking someone into committing a crime in order to secure their prosecution; from a legal standpoint, this is questionable. It could also create a burden of work for a company, gathering information on attackers that might not add a great deal to its security. From a learning point of view, deception technologies are excellent planning approaches to cybersecurity, but the issue is how far you pursue the strategy. In financial services, enticing the adversary to commit a financial theft or transaction, or to reveal their location and identity, is useful but very rarely done, as it would lead to a prosecution in the law courts, assuming the attackers were caught.

Individual and Class-Level Attacks

The chip is the basic level of digital technology. It is a fast-evolving area, but its very nature means that chips are made in their billions. If an attack works on one, it will potentially work on billions. It is a class-level attack, in that its target is a whole group of technologies. A small design flaw on a chip can have a major impact if it is also a cybersecurity vulnerability and an exploit point for an attack vector.

One case study from 2017 was the Spanish identity smart card, which incorporated a chip developed by the German company Infineon. It was found that Infineon’s key pair-generation algorithm had the “ROCA” flaw, which made it possible for someone to discover a target’s private key just by knowing their public key. Dan Cvrcek, CEO at the security firm Enigma Bridge, which was co-founded by researchers who identified the ROCA flaw, told ZDNet that exploitation of the flaw could allow attackers to revert or invalidate contracts that people had signed, in part because the Spanish do not use timestamps for very important signatures. The card, known as a DNIe, had a chip containing two certificates, one for identification and one for electronically signing documents. The cryptography used for identity cards relies on high-level keys, and to save money in the development costs of the system, the security encryption keys for the crypto algorithm were physically stored on the card. With the flaw known, the theoretical breakage time drops from the lifetime of the universe to 20,000 computer hours, which is trivial considering today’s computing power, making a potential attack economically viable.

A fix would require all affected cards to be updated. On Infineon’s disclosure of the vulnerability, the Spanish authorities revoked all certificates and stopped letting people sign documents with the card at the self-service terminals found at many police stations. That decision affected every card, not only those that had the flaw. However, people could still digitally sign documents online, using a small card reader that connects to their PCs [18].

Securing the Billions of Connected Things

Sensors, phones, and PCs are subject to injection attacks that can pull back critical information. Individually, it may not matter much if someone can see the emails on your phone; but if they can see the same for you and a million or more people doing their online banking, that is the challenge. The solution to this type of class attack is to make every chip truly unique. This can be done through cryptography, frameworks, and changes at the physical silicon level, creating islands of isolation.

On a PC, the attacker can come in through the USB port and get to the main computer processor, but you can have a separate security domain that is not connected to this: an island with integrity. The rest of the PC might be attacked, but this area can be maintained in isolation. You can then identify, remediate, and recover.

You can also plan to improve the security against class-level attacks:
  • Make the chip set unique

  • Give things identity

  • Give things a level of robustness

  • Manage ownership

Blockchain technology (BCT) can play an important part in this.

When a person buys a PC laptop, the typical assumption is that they will be the only user of it, that they will never sell it or get rid of it, and that the data remains on that laptop. This “one-person-only” ownership paradigm simply does not work in the world of the internet of things (IoT), which involves many assets and devices connected together, sharing data and services; many of these you do not own, yet you use them and provide your personal data and transactions to them. Consider the web-enabled heating system in a house that you may later sell, or the aftermarket in cars resold to new owners.
  • How do you manage identity when ownership is changing hands?

  • How do you manage identity in cars?

In the case of Jaguar Land Rover cars, if you privately sell your vehicle, the person who buys it cannot take control of the car until it has been zeroed at the dealership. If you buy through the car dealership, they will blank it. It is about clear asset ownership and the exchange of goods and services.
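A greatly simplified, hedged sketch of why a blockchain-style ledger suits changing ownership (Python; the device ID and owners are invented): each transfer commits to the hash of the previous one, so the ownership history of a car or a heater is tamper-evident:

```python
import hashlib
import json
import time

# Greatly simplified, hedged sketch of a blockchain-style ownership ledger:
# each transfer commits to the previous block's hash, making the history
# tamper-evident. The device ID and owners are invented for illustration.
def make_block(prev_hash, device_id, new_owner):
    block = {"prev": prev_hash, "device": device_id,
             "owner": new_owner, "time": time.time()}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("0" * 64, "vehicle-VIN-123", "dealership")
resold = make_block(genesis["hash"], "vehicle-VIN-123", "second owner")
# Rewriting an earlier transfer changes its hash and visibly breaks the chain.
assert resold["prev"] == genesis["hash"]
```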

In IoT ownership and data privacy, who owns the data?
  • Is it the OEM who sold the heater?

  • Is it my utility service provider?

  • Is it the third-party service company with whom I may have a contract?

  • Is it the insurance company with whom I may have a contract?

The question of who owns the data can be very obscure, involving multiple levels of a system that may have many actors and corporations involved directly or indirectly.

You put your personal data and behavior out on Facebook, which mines your data to sell you services or sells it on to third parties, who may in turn sell you services or use it for other purposes that may not be disclosed and can be hard to trace. This involves an inherent transaction in which you have use of the platform for free but agree to terms and conditions that include permission to mine your data and serve you advertising that you may or may not want. You may be fine with this until you realize that enough data has been gathered that they can start to nudge your perspectives. This can work in unintended ways, from the reported generation of fake news to influence political elections [19, 20], to the misuse of data that third-party companies obtain from Facebook without direct user consent, as seen in the Cambridge Analytica case in 2017 [21]. The same applies to other major search engines and social media platforms. Our digital footprints are huge, and this online data can be out there for another decade or more, with consequences we will never know.

A University of Auckland study in New Zealand reported that an average New Zealand citizen may appear in about 40 different databases, while a US citizen appears in about 200; these include information such as your age, date of birth, marital status, and where you live [6]. Then there are the business models that come out of this. Take the example of a heating system: the utility service provider can look after it and gather data so that they can bill customers for the service they use. They might be able to offer additional services, such as automated energy savings that turn things on and off in a more ecological and economical way, or added-value services such as insurance cover. There is a lot that can be deduced from an analysis of the electrical load signal of a house, right down to which TV channel you are watching.
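To make the load-signal point concrete, here is a hedged sketch of non-intrusive load monitoring (Python; the appliance signatures, wattages, tolerance, and sample readings are invented): appliance events are inferred from step changes in the aggregate power reading:

```python
# Hedged sketch of non-intrusive load monitoring: infer appliance on/off
# events from step changes in a house's aggregate power signal. Wattage
# signatures, tolerance, and sample data are invented for illustration.
SIGNATURES = {1800: "kettle", 150: "fridge compressor", 90: "television"}

def detect_events(readings, tolerance=20):
    """readings: aggregate power in watts, one sample per second."""
    events = []
    for t in range(1, len(readings)):
        delta = readings[t] - readings[t - 1]
        for watts, appliance in SIGNATURES.items():
            if abs(abs(delta) - watts) <= tolerance:
                events.append((t, appliance, "on" if delta > 0 else "off"))
    return events

print(detect_events([300, 300, 2100, 2100, 300, 390]))
# -> kettle on at t=2, kettle off at t=4, television on at t=5
```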

We Are Building a Very Powerful Cage Around Ourselves

The interesting thing, though, is the data: if I own that data, then a whole range of services may become available. This is the potential of the IoT business model. If I am selling lighting-as-a-service, do I want the lights to come on when I walk into the house? AI based on my data in my house is great, if it is under my control. But what I don’t want is for all this data to go back to a third party in the cloud. I may be quite happy to share parts of that data with my utility provider. I may want to send data back to my insurance company about my habits, so long as it is obfuscated. And I may have services, such as my alarm system, that use my behavior data: if an intruder has entered the house and is behaving in a different way (i.e., they have not switched on any lights), this might be used to deduce a break-in alert. For police and security services, the IoT is very powerful, but for warranted surveillance we need to be able to switch everything off so they can gain entry to bug the house.

IoT is the ultimate two-edged sword. It offers huge new revenues and a whole new way for individuals to interact with the world around them, but only if we can trust it, and trust is a very delicate thing.
  • Monetization of data

  • Monetization of software

  • Selling solutions.

We will likely move to a service-oriented economy in the years ahead. But the challenges are:
  • Who owns the data?

  • How do I protect the data?

  • How do I ensure the operation of a system is correct with respect to the conditions of use agreed or intended by the providers of the data input into that system, and to the actions and consequences resulting from that system’s behavior?

  • What predictive solutions could you implement?

  • What completely new things can you imagine doing in this hyperconnected, embedded world that you cannot do today?

Say, for example, you have meetings in London, followed by one in Bristol, so your calendar lines up an Uber taxi for you and books your train ticket; it also knows the sort of food you like and is monitoring your blood pressure, along with much more. All of these things are predictive around you, so you don’t even have to think about many of them. This could de-risk these scenarios; it could make you safer or help you avoid situations that are less optimal by some criteria. It could improve your quality of life and promote healthier living, potentially increasing your life expectancy. There are, however, ethical issues here relating to bias and the way decisions are made in these scenarios. How does it arbitrate risky decisions that may affect you and others? How does it handle choices that may be bad for your health, or that affect your experience of who you meet and the things you consume or do? These are questions that ethicists and legislators will have to consider. Work on these issues must establish frameworks and policies that develop best practices and legislation to ensure the future of effective cybersecurity and safe spaces.

How to MANAGE Safe Spaces

The “business logic”, which represents the highest level of a system’s activity (the business activities, trading, and commercial behavior), is perhaps the hardest layer to understand when trying to map which paths through the business lead to vulnerabilities that may exist in that logic. Even after all the internal investment, testing, and monitoring, external security researchers, members of the public, or routine investigations into the organization can find things that all the other layers have missed.

Towards Self-Healing Systems

The future of cybersecurity management needs to augment these layers to include and integrate all the threat and vulnerability information from:
  • intelligence sources

  • vendor communities

  • academic communities

  • user communities

  • hacker communities.

In the first instance, there is automation of detection for protection, such as firewall rule updates. Then there are more complex issues, such as a code library vulnerability that needs fixing through vendor and/or internal and external actions; these libraries are subsequently updated to ensure that developers are using improved code. This represents a move towards the concept of self-healing systems, which may include using ML techniques to fix applications based on inputs from threat sources, information gathered from human interactions, and other models that treat business logic as rule-based logic. The aim is to develop insights and prevent common errors in the design of coding libraries.
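In its simplest form, that first instance of self-healing, automated firewall rule updates, can be sketched as follows (Python standard library; the feed URL is a hypothetical placeholder, and a real deployment would authenticate the feed and stage rules before applying them):

```python
import urllib.request

# Hedged sketch of the simplest self-healing loop: turn a threat-intelligence
# feed into firewall block rules. The feed URL is a hypothetical placeholder;
# a real pipeline would authenticate the feed, dedupe, and stage rules.
FEED_URL = "https://threat-feed.example.com/malicious-ips.txt"

def fetch_blocklist(url=FEED_URL):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return [line.strip() for line in resp.read().decode().splitlines()
                if line.strip() and not line.startswith("#")]

def emit_rules(ips):
    # Emit iptables-style rules for an operator (or gated pipeline) to apply.
    return [f"iptables -A INPUT -s {ip} -j DROP" for ip in ips]

for rule in emit_rules(fetch_blocklist()):
    print(rule)
```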

But You Never Can Be Secure for Certain

You cannot know for certain whether a system is secure, though there are mathematical methods that can prove a set of rules is secure; a cryptographic proof, for example, can establish that an encryption scheme is secure in theory. From a cybersecurity practitioner’s perspective, however, the system as deployed may still be vulnerable, and the practitioner must decide whether the level of security is sufficient. If a system is broken, then clearly it is insecure; but if it is functioning, then it is secure to some degree that may or may not be acceptable.

Utilizing all the SIEM tools, for example, can demonstrate that nothing anomalous has been detected, but you can never know with absolute certainty. Factors may include:
  • Insufficient time to check all system areas

  • Lack of investment in tools to protect the system

  • Lack of cybersecurity technical skills

  • Lack of risk management skills

  • Lack of leadership skills to validate and respond to risks

  • Human error in design of the system

  • Human error in configuration, support, monitoring, and response to attack

  • Zero-day events that are new vulnerabilities/exploits

  • Fake information manipulation, clandestine or cyberwarfare agenda attacks

  • Proximity to, or use of, other networks and vendors that are attacked.

Leveraging Domain Knowledge

In complex systems where you cannot anticipate every interaction, the use of subject experts in their field, as well as non-experts with experience, can enable a broader evaluation of this complexity, similar to the concept of crowdsourcing ideas and solutions. A key part of this is domain knowledge, which a wider community can contribute rather than just experts.

Any application that is compromised could lead to catastrophic financial losses. The immediate risk approach would be to ask how secure the network is, whether to use a VPN, where the APIs are, and how we can secure them with HTTPS and secure API protocols, and so on. But just fixing the API may be insufficient from the viewpoint of complete security. The question can be reframed from “How do we secure the APIs?” to “How can we run insecure APIs?” This assumes that, despite a secure system, there may be vulnerabilities and attacks; the question is how the system can be managed to respond to these threats. This typically generates many more ideas and potential solutions to what-if scenarios than a narrow focus on a fix, and it can introduce other levels of security to improve the resilience of the system overall. Making a system architecture robust against certain failures is one thing; a better approach is to make the system flexible enough to adjust itself, resolve issues, and recover from attacks. Instead of patching everything, we should have more responsive support systems that can investigate and fix attacks in a flexible way that is adaptive and learning all the time. This might also enable better investment decisions regarding cybersecurity tools and processes, which are not necessarily just technical but also involve organizational awareness, culture, and leadership.
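A hedged sketch of what “running insecure APIs” can mean in practice (Python; the limits, names, and audit format are invented): wrap every call in containment so that an exploited endpoint does bounded damage while the audit trail feeds adaptive response:

```python
import time
from collections import defaultdict

# Hedged sketch of "running insecure APIs": wrap every call in containment
# (rate limits, input bounds, an audit trail) so an exploited endpoint does
# bounded damage. The limits and names are invented for illustration.
recent_calls = defaultdict(list)  # client_id -> timestamps of recent calls
MAX_CALLS_PER_MIN = 30
MAX_PAYLOAD_BYTES = 10_000

def contained_call(client_id, payload, handler):
    now = time.time()
    recent_calls[client_id] = [t for t in recent_calls[client_id] if now - t < 60]
    if len(recent_calls[client_id]) >= MAX_CALLS_PER_MIN:
        raise PermissionError("rate limit exceeded: possible abuse")
    if len(str(payload)) > MAX_PAYLOAD_BYTES:
        raise ValueError("oversized payload rejected")
    recent_calls[client_id].append(now)
    result = handler(payload)  # the possibly insecure API does its work
    print(f"audit: {client_id} called at {now:.0f}")  # trail for anomaly analysis
    return result

# Example: even if `handler` is exploitable, abuse is throttled and logged.
print(contained_call("client-42", {"q": "status"}, lambda p: "ok"))
```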

The right question is: How can an insecure system in a hostile environment stay secure?