© The Author(s) 2020
A. Bancroft, The Darknet and Smarter Crime, Palgrave Studies in Cybercrime and Cybersecurity, https://doi.org/10.1007/978-3-030-26512-0_1

1. Crime Is as Smart and as Dumb as the Internet

Angus Bancroft1  
(1)
School of Social and Political Science, University of Edinburgh, Edinburgh, UK

… and the internet means the many versions of digital ‘you’ in this instance.

What is the relationship between digital society and crime? To understand this question, I have to ask firstly why crimes are committed at all and then how technology both facilitates and drives new forms of crime. When does it make sense to commit crime? Or rather, when was the last time you contemplated a crime? That torrent, online bet, or handy personal use of your employer’s resources might be tempting and apparently risk-free. Such explanations focus on whether the incentives are right, what is called instrumental rationality. Another consideration is whether the crime is justifiable, a value rationality perspective. These two perspectives are not really in opposition. Instrumental rationality is situational. One needs to cross the moral boundary first in order to begin contemplating the act (Kroneberg et al. 2010). Even the most calculating act takes place in a moral economy (Karstedt and Farrall 2006). People are morally situated towards the free market, society and the law, and citizens take their cue from the structured immorality of major institutions being seen to ‘get away with it’. Beyond the cynicism that this embeds in people’s own calculations, what happens when there are moral incentives to commit crime itself? This is a motivational problem. In situations where crime is a value or reward in itself, breaking the law is its own incentive. Crime can then be situationally rational, motivated and meaningful.

Cybercrime is a diminishing, though important, part of digital crime overall. Many people experience digital crime as a hybrid of in-person and internet-mediated crime, rather than solely within the online sphere. These developments are significant for crime control. If crime is rationally motivated, it can be controlled in the same way by managing disincentives and reducing opportunities. Situational crime prevention addresses chosen calculated actions rather than dispositions (Clarke 1980). However, actions must be considered in terms of people’s own sense of their past and future trajectories. If the motivation is longer-term, for example, if people are prepared to take immediate losses for the promise of future gains, or to establish themselves in specific networks, or because they think what they are doing is important enough to take those risks, then situational crime prevention may be limited. Hence we see the surprising resilience of many online criminal markets. Participants want them to succeed (Ladegaard 2017). The issue is then one of cultural, social and infrastructural resilience, its achievement and maintenance, and threats to it.

In one sense, a crime victim may not even be aware that some harm has been done to them. This becomes salient in the context of technology and the ever-growing complexity of law and regulation, which rather undermines the idea that we have effective personal knowledge and responsibility. If aware, the crime victim’s emotional response may be strong but is not likely to make them involve the police. Many victims of remote exploitation are too ashamed to report they have been defrauded, or are not even aware that what was done to them was illegal (Goucher 2010). The perpetrator may not know when they are committing a crime. In 1976, a Virginia couple were surprised to find themselves prosecuted after taking Polaroids during sex. Their child had taken the pictures to school for show and tell (Edgley and Kiser 1982). The Polaroid instant photography system created a fusion of technology and sex which the Polaroid corporation cottoned onto and subtly promoted. Technology creates pathways for behaviour, and people hack it to make new things happen. Some technologies become the focus for public problems. New technologies generate new kinds of crime—in the sense of new routes for criminal activity—and also produce new ways of monitoring and criminalising speech, thought and action. This may be done because the activity's anti-social nature justifies it, or because it is now possible to track and intercept it and what gets done is what is possible (Iginio et al. 2015). Connected societies then face problems of system-generated risk that are not wholly reducible to technological weakness or lack of user awareness. Like the Polaroid couple, the technology allows for behaviours that might be harmful or merely transgressive depending on the purpose and context. Digital technology scales that up. It fuses with opportunity to create behaviour. It is adapted to human use and also shapes human capacities recursively.

In 2019, we see serious examples in which harm and crime are facilitated by technology: revenge porn spread by dedicated websites; organised markets in disinformation; political speech criminalised by states and public space disrupted; mass-automated attempts to illegally access services. Abuse is workshopped and calculated (Squirrell 2017). Crime is platform driven (Dittus et al. 2017). Criminal markets generate new platforms and use the affordances of existing ones, their visuality, geo-localisation and reputation boosting (Moyle et al. 2019).

There is a range of crime that matters to this book, some of which is ideologically motivated, some opportunity-based. Criminal action depends on an architecture of opportunity that renders personal qualities of honesty suddenly mutable (Mayhew et al. 1976). Vulnerabilities are designed in, and opportunities produced by, the digital infrastructure. These threats are often experienced where there is an interface between different systems, like between cash and electronic money, or between electronic identification and interpersonal trust.

Digital Crime Is Global and Local

A combination of politics and power shapes the digital crime and illicit drug economy in developed and developing countries. Criminal networks can take advantage of contested political authority and state failure and are themselves exploited by political and state actors. On the other hand, digital criminality is often more widespread where there is more developed infrastructure to take advantage of. Major actors legitimate state violence against drug traffickers but ignore state violence against their own populations, activities that partly legitimise drug traffickers. They tend to assume that ‘violence’ only stems from the underclass. Crime follows other lines of exploitation. There is an active remote webcam industry in the developing world that has a large Western and local clientele. In the Philippines, cam models—sex workers who operate using remote video link—can command higher earnings than prostitutes, with whom they emphatically do not identify (Mathews 2017). The same technologies and systems are used for sexual exploitation. Technology then forms part of integrated harm producing systems.

Harm production systems do not necessarily have to centre the technology while working through it. There is often in public discussion and policy strategies a focus on scams that target the West and launch from the developing world. Most cybercrime and hybrid digital crime operates close to home. The targets of developing world criminals are most likely to be others in the same countries or regions (Mba et al. 2017). In South Africa, scamming is crucial in affecting how people relate to digital technology. Catfishing and phishing scams are common. One common trick is to make cloned mobile SIM cards using the victim’s identity. The cards can be sold and the victim is landed with the bill. Crimes like this exploit the need for mobile airtime in a situation where the land-based infrastructure is weak, unreliable, untrusted or inaccessible. They exploit the poorest and most precarious, who cannot rely on other systems.

Digital crime is localised and interpersonal, though it draws on globally shared practices and systems. It is not, however, just the old ways made new. Digital crime has its own productive dynamics. The architecture of crime used to exist in the grey spaces left by the licit world. Now, criminal enterprises create their own practices, architecture and technologies and are well-adapted to people’s digital needs. Increasingly, crime is responding to market-driven cost and price signals and it is also generating new logics of opportunity. Its existence is not confined to the underground; it is public, and sometimes barely even illegal. Companies use big data to evade regulation, so crime is not simply a matter of the seamy side (Zwick 2018). Some national governments maintain arms-length stables of cybercriminals, trolls and chaos merchants (Badawy et al. 2018). Digital criminals are better able to exploit this due to their sharing of knowledge, software and hardware. Crime is becoming standardised, copied and extended, without appearing to need to be very organised.

The Limits of Digital Crime

There are inherent limits to what can be known about crime, not least the difficulty of reporting some crimes, and of being aware of ourselves as plausible complainants. Crime itself cannot be defined entirely by statute and precedent. Law cannot enumerate all. A ‘crime’ means that agreement has been reached between key persons and institutions that a particular activity should be criminalised. It must be reported, recorded and processed as such. Most crimes never get to the reporting stage. Many digital crimes go underreported, which limits researchers’ ability to estimate the full cost (Anderson et al. 2013). Some crimes are unnoticed, take place in large numbers but have low individual unit cost, and one category of self-interest crimes is legitimated, widespread and normalised, such as copyright violation (MacNeill 2017). These do not invite any social sanctioning and require only the most cursory of self-justifications. Crime, then, is not a single, naturally occurring category, but it is a coherent body of harm, influenced by developments in digital society.

Crime is getting ‘smarter’ because it is becoming better integrated into—and in some cases driving—the underlying digital and financial infrastructure, and it adopts key features and norms of that infrastructure (Powell et al. 2018). Criminals are much better and more agile at making use of the smart infrastructure and they are growing their capacity to learn from each other. This reduces the cost of entry to criminal activity. It also draws more of the licit infrastructure into the criminal infrastructure to the point where there is not much of a distinction between them. To some extent, those involved in digital crime are governed by similar metrics, incentives and principles as any YouTube influencer or derivatives trader. For example, much criminal entrepreneurship is put into spotting what are effectively arbitrage opportunities, exploiting price differences between different markets (Keegan et al. 2011). Others are involved in pumping their online reputation.

As well as being embedded, digital crime is also becoming more interdependent. If a criminal wants to sell a multi-tool piece of malware, they will also have to provide after-sales support, tying them into a longer relationship with their client (Alazab et al. 2011). Ransomware targets the less computer savvy and so the criminal needs to provide instructions and sometimes telephone ‘after crime support’ for their victim (Kamat and Gautam 2018). There are new criminal behaviours, an extended reach of criminal activity, and also new social dynamics. The changed social structure may be the most significant part of recent developments in cybercrime. The personnel, organisation and meaning of activities have all changed radically. The digital crime labour market has broadened, become more structured and differentiated between levels of skill and asset control (Holt 2013).

Producing Cybercrime

The term cybercrime is awkward as it implies something unreal, taking place in virtual space. In fact it is all happening somewhere. The internet goes through pipes, in someone’s pond or yard. At one end, the length of a fibre optic cable and the nanoseconds it gives can spell the success or failure of high frequency trading (MacKenzie et al. 2012). At the other, digital technologies and platforms are ubiquitous, so it would be remarkable were a crime to take place that was not in some way informed or facilitated by the digital. Someone who steals a credit card in a pub and sells it in another pub is still manipulating a digital system.

In the early days, it was hard to get anyone to take cybercrime seriously as a phenomenon. The existence of cybercrime means a reconfiguration of crime in the context of technological development. This means it is not crime that takes place somewhere else. Digital crimes should be understood as melds of technical systems and social arrangements which change historically.

There are some ways in which this configuration of crime creates original, not just analogous, crimes. One is systematic theft of time. Using infected computer hosts to mine cryptocurrencies steals processor cycles. It also means that there is a close and near real-time relationship between the changing incentive structure and developments in digital crime (Cárdenas et al. 2009). Growth in the value of Bitcoin, the decentralised cryptocurrency payment system, could be considered an incentive for malware deployment. The harms also go beyond the annoyance to the users: theft of processing cycles adds to electricity load and causes machine degradation. Another harm is theft of agency: doxxing, harassment, swatting and inhibiting someone’s digital life all remove agency from them. I say this is at once both more and less novel than traditional crime. It is more novel because before the digital became ubiquitous, no woman needed worry about being verbally abused by someone on the other side of the planet because she expressed an opinion about a banknote design. It is less original because it plays into the same structured vulnerabilities with which we should be familiar. The people who were victims of harassment through the telephone system or in the public square were largely the same demographic who are threatened with rape and murder through Twitter, that is, women and sexual minorities (Powell and Henry 2017).

There is a natural focus on criminal innovation and the new ways in which people can be harmed, the new methods through which illegal trades can be conducted and attacks launched. However, increasingly crime is stabilising around culturally integrated, motivated and highly reactive predatory networks (Dodge 2016), hybrid exploitation and attack platforms, and market communities.

New Configurations of Digital Crime

Definitional problems are central to what digital crime is and is not. They depend heavily on how seriously we take the context in which the activity happens. Online role playing games which rely on mass interaction often have economy-like features. Players must work to earn points to unlock a better spaceship, a bigger sword, usually through repetitive drudge work. A real-world economy has sprung up to service this need by engaging in ‘gold farming’. Banks of players work at nothing but producing the points needed to sell to wealthier players, who enhance online characters they are too busy to grind for themselves. Is gold farming in online games criminal (Keegan et al. 2011)? It is deviant. It distorts the carefully balanced game economy with its balance of incentive and challenge. Is it a crime, or just smart? Taking advantage of rent opportunities is thought of as showing financial acumen when it happens elsewhere. The fuzziness about what defines digital crime—or even if it is a thing at all—relates to a similar and perhaps more profound disagreement about what defines security. Who is being protected from whom and, increasingly, from what? We have the straightforward problems of weakly designed devices that are ubiquitous and which generate risk and vulnerability. Internet connected devices leak data and can be turned against other targets. As ever these are part technical, part political challenges. For some there should be hard-coded limits on the reach of the state and the law.

Digital crime is being rapidly reconfigured under various influences: financial deregulation, globalisation, fragmentation of the Westphalian state order, the re-emergence of Russia and China as global powers, more adept policing, and digital platform development. The rapid reconfiguration of cybercriminal activity has been facilitated by the emergence of crime as a service (National Centre for Cyber Security 2017). There are two senses in which this matters: there is an empirical sense about how crime is being restructured socially and technologically to prioritise providing, selling and hijacking services rather than goods. Then, there is the categorical question about what crime is and where it happens.

Crime as a service has certain features: expertise can be rented rather than learnt. There are decreasing skill demands as much of the skilled work is farmed out to other services. It makes use of non-human agents such as compromised botnets and Internet of Things devices which were never intended to be human-controlled (Rossow et al. 2013). It can be automated. The majority of web traffic is non-human. Web pages are created by bots and ‘viewed’ by other bots. Crime also happens automatically and not all of it reflects the agency of its creators. Vulnerabilities and exploitation systems are shared so that users can roll their own botnet. Botnets are in competition with each other (Krebs 2016a). They make use of shared system characteristics such as cloud storage, using licit systems as force multipliers. The ‘democratisation of censorship’ means one individual or reasonably coordinated group can create havoc and target opponents, raising the cost of protective security work and analysis (Krebs 2016b).

Many of these developments are attributed to the effect of digital systems alone; however, they are due to developments in the capitalist economy and choices made about platform design, the distribution and impermanence of gig economy labour, and the de-globalisation of politics and the economy. Many of these developments well predate the digital society. Criminal economies are rationalised, reducing middle market costs and shortening supply chains for illegal products but—as we shall see—this is not reducible to economically rational behaviour. Crucial to these new forms of digital crime is the way they employ hybrids of the digital, the real and the human. Some money laundering value chains include a crucial money mule network for transferring cash and opening controlled bank accounts. Others that distribute malware incorporate a troll farm where users are paid to engage in reputation hacking. Digital crime now systematically engages human labour in a more structured and directed way than before.

These dimensions change the social configuration of crime. For example, illegal file-sharing started out as a university student and academic staff activity, as they had access to the fast internet connections required (Andersson 2011). It was largely confined to that social fraction. As broadband became widespread, file sharing has become more common. Criminal configurations such as file-sharing and internet drug dealing may start out among a particularly tech-savvy demographic and then spread out to a larger population as the effort and cost of participating is lowered. While expertise may be easier to fake, bake or take, there is one resource stubbornly resistant to automation, that of acquiring and maintaining a reputation (Décary-Hétu and Dupont 2013).

Players in illegal markets face various problems. They have to succeed in making their activities work together despite being disparate, remote, and only fleetingly interactive. It is a problem of ordering interaction in ways that will lead to the expected outcome. Market actors want to exchange but they want to do it at a price that suits them. This can be tricky to agree and so formal pricing mechanisms are useful. Every participant risks something, and particularly so in illicit markets. Thus they use proxies to reduce that risk, for example brand loyalty, store loyalty, markers of quality and reliability, all of which (of course) may not be that reliable. It is an oddly precarious thing when described in economic theory. Markets can only solve these problems if they are culturally, socially and institutionally involved and meaningful (Beckert 2009).

These themes come together in response to the online illicit drug trade and specifically the cryptomarkets, open markets in illicit drugs, services and other goods (Aldridge and Décary-Hétu 2014; Barratt and Aldridge 2016; Martin 2014a). Cryptomarkets use the darknet. A darknet is a set of system relays and encryption protocols that disguises the origin, content and destination of internet traffic. The most prominent of these is The Onion Router (Tor) network. It was developed so that citizens—particularly those of repressive regimes—could communicate and browse the internet anonymously (Çalışkan et al. 2015). It is still used for that purpose but can also host ‘onion’ services, a function for which the network has become better known under their previous name of hidden services, to the chagrin of some Tor fans. Paired with the peer-to-peer payment system/economist annoyer Bitcoin, onion services allow people to exchange goods and services without their transactions being cleared through any financial institution or exposed to any external surveillance. This function first came to prominence with the launch of the Silk Road site in 2011. Illicit drugs amounted to around 70% of listings on the site (Martin 2014b). Following the closure of Silk Road in 2013, numerous other markets have sprung up. The cryptomarket economy is currently in a state of flux following major law enforcement operations which have shattered the illusion of them being untouchable (Afilipoaie and Shortis 2018). Cryptomarkets, like other digital crime spaces, can be thought of as spaces apart from the rest of the illicit economy, and in their original conception they were. Increasingly, they are tied into it and reflect the motives and interests of players across the illicit world.

Do not Fear the Darknet

A darknet is any communication system that is separated from the open internet. Usually it means one that enables anonymous, encrypted communication and browsing. It overlays the internet infrastructure but is separate from the open world wide web. Facebook has some characteristics of a darknet as it is not part of the open web. It is a darknet that is very easy to join. And harder to leave. When talking about ‘the’ darknet, we usually mean the set of software systems and protocols primarily designed for anonymity and that are supported by loose movements concerned with resisting monitoring. Tor provides for encrypted, unlocated communication. Tor developers do not like the term ‘darknet’. They characterise it as an encrypted, open communication system. But I like the term so I will keep using it.

The darknet is presented as disrupting decent social norms when in fact it mainly disrupts the internet’s always-on monitoring and recording. It has a reputation as the origin of much malicious internet traffic, and the site of criminal behaviour and shenanigans, as the internet of hipsters. It is also a tool used by security agencies, political activists and drug users in search of community. Despite the risks involved in it and its role as the site of criminal activity, the darknet can be ethically and morally attractive and protective for its users. The darknet is a social space. Some of the misconceptions about the darknet and what it is for stem from myths of the internet. If the internet is a sphere of open, democratic discussion and free action, then the only reason a darknet should exist is for people who wish to operate in the shadows. If, however, the internet is not really what its boosters say about it and is increasingly closed, suspicious, and restricting, then a darknet is a logical response. The internet in many forms looks back at its users. Providers of internet services, website administrators, and security and intelligence services can and do monitor who is using the internet and what they are using it for. Governments try to scapegoat technologies like encryption, one of the key technologies used in Tor. The problem with attacking encryption in the name of security is that either security is indivisible, a characteristic of the whole system, or it is not, in which case weakening encryption anywhere leaves the whole system insecure.

Browsing and communications over the internet have an inherent weakness. Even if it is impossible to discover who is saying what, it is possible to find out who is talking to whom, through what is called ‘traffic analysis’. The connection alone can be enough to infer a great deal about the person making it and can cause suspicion to fall on them or make them the target of retaliation. An example could be a resident of a totalitarian regime visiting the website of a dissident group. To protect privacy, they need to disguise both the content of communication and the fact of it happening. This is a challenge that faced security researchers for many years. The solution cooked up by a team at the US Naval Research Laboratory’s Centre for High Assurance Computer Systems was called ‘onion routing’ (Goldschlag et al. 1999).
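The point about traffic analysis can be made concrete with a toy sketch. The hostnames and log records below are invented for illustration; the point is only that connection metadata, with no message content at all, is enough to single someone out.

```python
# Toy traffic analysis: each record is (source, destination, time).
# No message content is needed to flag who contacts a sensitive host.
logs = [
    ("alice", "news.example", "09:01"),
    ("bob", "dissident-site.example", "09:02"),
    ("alice", "mail.example", "09:05"),
    ("bob", "dissident-site.example", "21:40"),
]

watched = "dissident-site.example"

# Anyone who connects to the watched host becomes a suspect,
# however innocuous (or encrypted) the content of the visit was.
suspects = {src for src, dst, _time in logs if dst == watched}
print(suspects)  # → {'bob'}
```

This is why hiding the content of communication is not enough: the fact of the connection itself has to be disguised, which is the problem onion routing was designed to solve.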

Onion routing is a kind of network architecture that bounces traffic between relay nodes, encrypting it in layers. Only the entry node ‘knows’ where the traffic starts from, and only the exit node where it is going to. The in-between nodes are only aware of the adjacent nodes in the circuit, so no single node holds an overall picture of the network. Because the sender wraps an encryption layer or ‘onion’ around the message for each relay, each node along the way can only peel off its own layer and pass the remainder on; the origin and destination are hidden from it inside the other layers. Traffic gets to where it is going without being exposed. Tor is an implementation of onion routing that, among other innovations, adds directory servers that control which nodes join the network (Dingledine et al. 2004).
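The wrap-and-peel structure can be sketched in a few lines of Python. This is a toy illustration only: the XOR ‘cipher’ and the relay names stand in for real cryptography, and none of this resembles the actual Tor protocol beyond the layering idea.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric 'cipher' (XOR with a repeating key); illustrative, not secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Three relays; the sender shares a separate key with each one.
route = [("entry", os.urandom(16)), ("middle", os.urandom(16)), ("exit", os.urandom(16))]

def wrap(message: bytes, route) -> bytes:
    # The sender adds one layer per relay, innermost first,
    # so the entry node peels the outermost layer.
    for _name, key in reversed(route):
        message = xor_cipher(message, key)
    return message

def peel(onion: bytes, key: bytes) -> bytes:
    # Each relay removes exactly one layer and forwards the remainder;
    # it never sees the plaintext or the whole route.
    return xor_cipher(onion, key)

onion = wrap(b"request for destination", route)
for _name, key in route:
    onion = peel(onion, key)

print(onion)  # → b'request for destination'
```

Each relay holds one key and learns nothing from its layer except where to forward the remainder, which is the property the paragraph above describes.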

Much discussion of whether Tor is badass or just bad focuses on its ability to provide onion hosting. It allows anyone to set up a host whose origin is untraceable, unless they mess up when setting it up, and many do. Hidden services are implicit in the design of onion routing. It is relatively simple to set up one-way anonymisation using a trustworthy starting service. The cleverness of onion routing lies in its ability to allow anonymous two-way communication by setting up a path through the network. An early paper by Goldschlag and colleagues highlighted hidden services as a logical extension of the onion routing approach (Goldschlag et al. 1996). Hidden services mean a person or group can set up messaging, hosting, or other services which benefit from hiding the originating IP. Tor hidden services are configured with an onion address instead of an IP. Onion addresses do not use the web’s Domain Name System (DNS) to translate a web name into an IP address. DNS servers can be compromised. The onion is the actual address of the site. Hidden services were revamped as ‘onion services’ in 2017 and rebranded to make them sound less sleazy and to allow for friendlier names. In my view ‘hidden’ sounds good though, and it is nothing to be ashamed of.
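The sense in which the onion is ‘the actual address of the site’ can be shown directly. The sketch below follows the version 3 onion service address format (public key, two-byte checksum, version byte, all base32-encoded); the random 32 bytes here are a stand-in for a real ed25519 public key, so the resulting address is structurally valid but points at nothing.

```python
import base64
import hashlib
import os

def onion_address(pubkey: bytes) -> str:
    # v3 onion address: base32(PUBKEY || CHECKSUM || VERSION) + ".onion"
    version = b"\x03"
    # The checksum binds the address to the key and the version byte.
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

# A random 32-byte stand-in for an ed25519 public key.
addr = onion_address(os.urandom(32))
print(addr)  # 56 base32 characters followed by ".onion"
```

Because the address is derived from the service’s own key rather than looked up in a registry, there is no DNS server to compromise: anyone who knows the address can verify they are talking to the holder of the matching key.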

Discussing Technology and Crime Means Discussing Values

Focusing on technical threat means we miss what values are involved. Many of the technical and security challenges we face are ways of playing out political problems and disagreements. They are about power, how it operates, who has it and whether we can understand its operation at all. Much has been made of the role of dark money in politics. The issues now go beyond some of the wealthy clubbing together to make their interests appear as if they are everyone’s interests. It suits some powerful operators to hide the possibility of these questions being asked at all. If it is not clear to you who owns your personal data, or who you could ask if you were inclined to, then you never know. When we are told that an aspect of digital life is impossible to regulate or to grasp then the question is abandoned before it is asked. There is an obfuscation of technical and political questions happening. Because some types of regulation are deemed to be technically infeasible they are ruled out of bounds as being politically impossible. But those are choices. We decide to give up certain common law rights to privacy when we activate a smart speaker or a smart lock on our front doors.

Technofear is often anxiety about value change. What we actually see are shifting priorities and centres of power. The political aspect means examining what kind of crimes we prioritise, investigate and punish, how they are investigated and whether some kinds of harmful behaviour are defined as crimes at all. For example, developing, hoarding and deploying zero day vulnerabilities are activities that both security agencies and cybercriminals invest a lot of time and labour in. Crime is used to signal, as with the 2017 WannaCry ransomware attack, a vastly disruptive attack that netted very little money for its creators. This looked to be an extension of North Korea’s internet disruption programme. The aim is to signal the existence of their malware capacity rather than gathering up all the bitcoins.

Some technologies are promoted in order to track and trace people, to divvy their behaviour up into digestible chunks of data which can be put to good commercial use. The claims made for the big data technology that result are often overblown. In the main the most effective companies are just those who have a lot of data to work on. Therefore, what matters is who has the data, not how fancy your algorithm is. They are overblown for a reason. It allows the new economy gurus to get away with being casual about ethical and even legal business behaviour, or to reframe the debate about data, privacy and labour rights in ways that suit their particular business model. They often start with the claim that what they do cannot, as well as should not, be regulated.

A way of doing this is to use data analysis capacity to evade authorities. The ‘we’re not a taxi company we are an app’ taxi company Uber used software it called Greyball, which the company claims was primarily to avoid use that violated its Terms of Service (Calo and Rosenblat 2017). It allowed them to hide their operations from local authorities in locations where they were not licensed to operate. Uber could then operate in a murky grey zone. It is not the case that this is behaviour confined to companies in the new economy. It is the case that they tend to get an automatic free pass from sections of the media and politicians when it comes to such bagatelles as labour laws. Anything that appears ‘heavy’ or old economy is immediately defined as unworkable in relation to the flow of information bits, as if those bits do not involve people, physical infrastructure, and institutions. Let us not overdo this. Dodging taxes, playing fast and loose with the law and undermining it at every turn have long been the habits of some businesses. Nobody expects them to be nice. It is striking now because of the difference between the ‘don’t be evil’ image that these companies project and the ‘see no evil’ reality which sometimes appears when they actually have to do boring stuff like paying people.

That is why the resolution of these problems is political and not only technological. Discussion of cybercrime demands that we say what values we want to protect and whether we are prepared to sacrifice some in the name of security. It puts our values back in, and it involves thorny questions about what exactly those values are or should be and how much effort we are prepared to go to in order to preserve them. I have found when researching the darknet that these are precisely the questions users are tussling with. Cyber security is increasingly seen as essential, and yet it is also a point of contention between citizens, states, non-governmental organisations and private corporations as they grapple with existing and developing technologies. The changing salience of privacy online has recently sparked concerns about, on the one hand, the loss of privacy and autonomy in the face of state and corporate surveillance and, on the other, the creation of ungovernable spaces and the facilitation of terrorism and gendered violence.

There are a growing number of conflicts around cyber security that combine technical challenges with competing political and public agency priorities. For example, increased online surveillance might mean decreased technical security, as in cases where it is proposed that encryption be weakened to facilitate surveillance. Competing demands are not necessarily reducible to a single concept of what cyber security should be, nor can they be dealt with effectively on a purely technical or system level. These differences and disputes highlight the dual nature of the internet, which allows counter-publics to emerge but also creates opportunities for state and private domination through control of the data infrastructure. I argue that far from being a dangerous morass, the darknet and the technologies used in it have benefits and significance for everyone online. Even if you have no interest in using them yourself, their principles of operation are useful guides for thinking about the way risks from crime are structured and distributed.

Governments hope to separate 'good' and 'bad' technical systems, but that is not possible. One place this is attempted is in arguments over whether encryption should be a publicly available security good or should be artificially weakened to allow security services and law enforcement access to communications. Technical systems integrate with and develop alongside ideas of economic value, security, personal privacy and desirable secrecy. Some of the scariest systems are those that appear to embed values we do not like, such as anonymity. In technical change, the values that we hold change more slowly than how we and our actions are valued financially. The attention economy has significantly changed this. It has created a system of expectations, norms and technology that encourages sharing certain kinds of experience. Some platform camera filters smooth out skin and sharpen eyes and facial features, creating a porcelain-mask aesthetic. That is one example of technology embedding values specific to a dominant, culturally accepted aesthetic.

Myths of the Internet Make Digital Crime Look Strange When It Is Normal

Cybercrime only looks outlandish because we hope the internet is democratic, distributive and user controlled. Well, it is not that. When we reproduce erroneous claims about what cybercrime is like, we are often attaching them to some of these myths. These are often based on the self-aggrandisement of Silicon Valley evangelists, the bewildered dotcom boomers, techno-slaves and their acolytes. They are frequently repeated by half-listening politicians and an echo-chamber media.

Misapprehensions exist because they are handy to think with, lazily reassuring, or useful enough to persist in the teeth of reality. Some have faded rapidly in recent years, of which a major one is…
  1. The internet is free, lawless and resists state interference. It cannot be repressed, controlled, named, ranked or numbered. In fact, it can be and is; the internet is perfect for states to occupy. The internet inside the Chinese firewall is doing very well, thank you. Chinese companies do not appear to suffer from the close watch their government maintains on what they can and cannot say. Innovation goes on at a roaring pace. Authoritarianism carries on just fine.
  2. It is unencumbered by geography, place, nation, borders or any solid construct of the wheezing behemoths that occupy meatspace, the anchored offline world. There was a time in its early years when the internet could be thought of as occupying an ethereal cloud inhabited by cross-national Western academics. Even at that point, it was firmly grounded in the military-smartalec complex of the US Defence Department and elite Western universities. Now more than ever, it is defined by global regions and nations. Search results, web access and services provided over the internet are geo-located and restricted.
  3. It is a flat network, in which each node is as important as any other. That flat quality supposedly makes it impenetrable by hostile actors and able to survive any attempt at censorship. In reality it is lumpy in physical and software terms, with many choke points and a few off switches. Lumpiness is an inherent characteristic of networks: they quickly cluster, and network effects reinforce that clustering.
  4. It is a blank canvas with no inherent characteristics other than those imposed by the user. In truth it is designed, prioritised, governed and coded to reflect the priorities and needs of its makers.
  5. It is ubiquitous. In reality there are many people whose connection is costly and unreliable and who move through the digital only with difficulty. Others, often at the other end of the economic scale, deliberately avoid and resist the digitisation of their social and economic lives. A related assumption is that all would be better if everyone were connected and all economies were digitally mediated.
  6. It is characterised by reciprocal sharing. Humans build relationships through sharing; we need to share, and the internet can make sharing possible. However, the so-called sharing economy is less about reciprocity and more about centralising transactions on a limited set of platforms. Sharing implies a relationship of equals, but here reciprocity is minimal.
  7. It undermines power and promotes equality. Power does not get away that easily. Power is redistributed and reconfigured through the internet, using its platforms and the states, societies and economies in which they are grounded.
  8. It is anonymous. As with the other myths, the opposite is more often the case. It is de-anonymising, and with a little concerted effort the user's real-world identity is not far away.
  9. It will make the economy cashless. This is being pushed by financial institutions, but few societies are ready for it or ever will be. It just pushes costs onto less able citizens who cannot use, or cannot afford, digital services, and who find themselves forced onto expensive, hard-to-access, second-best cash services.
  10. Technological verification can substitute for trust: the idea that problems of contract organisation and human communication can be solved by technologies which embed trust in them (blockchains, sexual consent apps and the like). These sound great because they cut out a huge layer of administration. But they do not really work by themselves.
  11. Tech rules it. This comes in two sub-myths used to justify the direction of technological and social development: because you can, you should, as everything is different online and there are no consequences; and because you can, somebody will, so we might as well let it happen. The latter is often expressed as humans being at the mercy of technological development. As in: because it is technically feasible to monitor everyone, all the time, somebody will, and we have no choice but to let it happen.
  12. Technology diffuses from the West to the rest. In fact much innovation happens in low- and middle-income countries, and technology diffuses faster where there is greater need.

Myths are fine to live by, but they are a problem when they hold up our thinking. Some of these myths are cruft: early over-the-top claims made about the internet which still stick to it. Some are reproduced by those who have an interest in claiming that one or the other is true. For example, that internet-based companies cannot be expected to pay tax or adhere to basic labour law because their jurisdiction is nowhere, despite the value they produce being extracted from very real people and activities. Another is that their corporate structure is fundamentally different from that of 'bricks and mortar' companies, and that this justifies a wholly different kind of regulation or, better yet, no regulation at all. Ask those same companies how they feel about the enforcement of intellectual property rights in Russia and China and you may find that some old boots-on-the-ground gunboat regulation is just the ticket for them, negotiated by a very real government, paid for by your taxes. The digital society can be characterised as follows: we are not equal and it shows. It is not screened but felt. It is governed without rules. It appears automatic but is curated. It always looks back at you.
The function of the darknet in my analysis is to reflect back at us many of the challenges posed by the modern internet, which are themselves really the challenges of modern society and economic arrangements. The darknet is of special interest to me because it exposes some of these myths, crystallises some of these problems and provides some solutions to them. The darknet provides a window into some kinds of cybercrime. It also provides lessons for how to respond to challenges facing all internet users: surveillance, risk, interception, loss of autonomy and changes to the deep structure of our lives in the digital world. As we will find, the darknet produces its own myths and becomes the location for some prevailing myths about digital crime that I investigate in this book.

Conclusion

The darknet is presented by detractors and fans alike in opposition to the open web. Fears tend to be attributed to dark systems and dark actors. However, the darknet is far from the shadiest corner of the internet; we have more to fear from actors operating in and exploiting the opaque, grey spaces of the digital world, or hiding alone among the mob. Trolls engaged in the organised harassment of women take advantage of the power of digital coordination to turn disparate individuals into a force far more harmful than any set of uncoordinated armchair misogynists could achieve. These are the points where a few can wreak extensive havoc and harm. Alongside that, they can be the location of supportive communities who establish norms for reasonable behaviour, assessment of harm and value, and supportive interaction.