I didn’t take it too seriously … the last thing he told me was he had to go, he was loading magazines.
—Army Specialist Burl Mays, recounting an October 26, 1995, conversation with Sgt. William Kreutzer, who killed one soldier and wounded 18 at Fort Bragg, North Carolina, on the morning of October 27, 1995
The phrase “insider threat” has taken on many different meanings since being popularized in the mid-2000s following the appalling attacks of Hasan Akbar and Nidal Hasan and the extraordinarily damaging leaks by Bradley (now Chelsea) Manning and Edward Snowden. These events resulted in an astonishing cost—measured in human lives, intelligence, and resources (e.g., financial, material, work hours)—and required a strong, coordinated response in an attempt to avoid future incidents. In the following years, policies were written, procedures were established, and resources were realigned to prevent a similar event from occurring in the future. Policy makers understandably disagreed on how to define an “insider,” what constitutes a “threat” (including what types of threats are to be guarded against), and how a “threat” can be legally and ethically mitigated. Inevitably, psychology and other social sciences were in the spotlight, with leaders searching for answers about how to deter, detect, and mitigate potential insider threats. Presently, guidance at the federal level recognizes the importance of behavioral science contributions (National Insider Threat Task Force [NITTF], 2017). This chapter explores the varied viewpoints about how insider threat is conceptualized; the focus of insider threat efforts; and how psychological science informs efforts to deter, detect, and mitigate potential insider threats. Also offered are thoughts about how operational psychology activities adhere to the American Psychological Association’s (APA) Ethical Principles of Psychologists and Code of Conduct (2017), as written (hereafter referred to as the APA’s Ethics Code), and common ethics discussions among operational psychologists who support insider threat activities.
As the previous paragraph suggests, one cannot possibly hope to define “insider threat” without first agreeing on a definition of the term “insider.” Colloquially, the term “insider” appears easy to understand, especially when that understanding is derived from high-profile cases. When required to create policies, procedures, research methodology, or other processes that determine how limited resources are utilized for the greatest impact, the task of defining an “insider” becomes much more difficult. Perhaps a more reasonable approach is to first understand the desired end state: Who and what is an organization trying to protect from an insider threat? Many organizations would list personnel, facilities/infrastructure, and data as the who and what to be protected. With that established, the obvious follow-up question is: Who potentially poses a threat to the organization’s personnel, facilities/infrastructure, and data? Likely, the answer to that question has to do with access. Any person who is provided access to the organization’s personnel, facilities/infrastructure, or data may pose a threat to the same. Obviously, this logic creates an incredibly wide range of possible insiders for any organization to guard against, which is why different types of organizations must choose definitions that fit the organizational culture and do not violate applicable laws and regulations. For example, an organization may create policies to limit the “access to personnel” definition of insiders to “while on duty,” since the organization cannot hope to account for all of the personal connections that an employee may make outside work. Then again, if that employee is in an austere environment, “while on duty” could be changed to “while on site,” since the employee is always in the proximity of coworkers and others who have access to the facility.
For the sake of consistency throughout the current chapter, and with the understanding that the greatest concentration of operational psychologists exists in the federal government, the following definition is submitted from Section 951 of the National Defense Authorization Act (NDAA [2017]):
The definition offered by the 2017 NDAA broadens the scope of Executive Order 13587 (2011), which initiated formal insider threat protection at the federal level but focused specifically on the protection of classified information. This focus on the protection of classified information (including classified networks on which most classified information resides) led to an emphasis on highly technical solutions, but the need for more holistic viewpoints and solutions for insider threat programs was recognized over time, including formal recognition of the role of behavioral science (e.g., National Insider Threat Task Force, 2017).
The focus of insider threat programs—which some now refer to as “counter-insider threat” programs—varies and largely may be dependent on organizational history, resources, and leadership focus. Organizations that prioritize protection against data exfiltration may emphasize the technical aspects of insider threat detection, while organizations that have experienced violence in the workplace may focus more on traditional threat assessment principles. Likewise, organizations that have been victims of espionage may align their insider threat resources to highlight activities believed to be associated with spying.
Regardless of an organization’s priorities and how those priorities shape the organizational expression of its insider threat program, one undeniable truth remains: The insider threat program mission, at its most basic level, is to detect, understand, predict, and influence human behavior. Therefore, the insider threat problem that every organization is charged with solving is a behavioral science problem that occurs in the context of a perpetual human capital shortfall—there will never be enough insider threat personnel to personally analyze all data generated by insiders. Because the acquisition of data will forever outpace an insider threat program’s ability to supply expertly trained insider threat professionals, technological solutions must be developed and deployed to automate some data analysis procedures to ensure that the most concerning information is readily available to those experts. Operational psychologists provide expert consultation on how to interpret and handle the most concerning information but also help organizations develop and refine detection methodology, including automation strategies. Though many of the tools used in insider threat deterrence and detection are highly technical in nature and continually advancing, it is important to maintain awareness that all of these highly technical tools are being employed to solve a behavioral science problem, not a technological problem.
Because insider threat programs are focused on preventing very low base-rate events, the literature in this area is much more descriptive than empirical in nature. Much like efforts to find a definitive psychological profile of those who engage in terrorism, attempts to answer the question “What type of person becomes a spy/leaker/active shooter?” also have had limited utility. Following a similar course to the psychology of terrorism and radicalization literature, psychologists and other researchers have found some success in identifying and describing common characteristics in the routes that people take along the way to a damaging act. Two widely accepted and applied conceptual frameworks that are foundational to the psychological science of insider threat are the Critical Path (Shaw & Sellers, 2015) and the Pathway to Violence (Calhoun & Weston, 2003).
After an extensive review and analysis of historical espionage cases, behavioral scientists, psychologists, and other researchers identified commonalities and trends among individuals who spied against the United States (Band et al., 2006; Fischer, 2000; Moore, Cappelli, & Trzeciak, 2008; Shaw, Ruby, & Post, 1998; Shaw & Sellers, 2015; Wilder, 2017). The identification of these common factors or patterns led to the development of a theoretical framework widely known as the Critical Path. The Critical Path comprises four elements (personal predispositions, stressors, concerning behaviors, and problematic organizational responses), which describe vulnerabilities or risk factors that can push a person down the path toward performing a hostile act, as well as critical points of mitigation aimed at thwarting such acts (Shaw & Sellers, 2015).
Personal predispositions are enduring characteristics or experiences individuals bring to an organization that can create psychological vulnerabilities, such as poor decision making or problem solving, and subsequently increase the risk of becoming an insider threat (Band et al., 2006; Fischer, 2000; Moore et al., 2008; Shaw et al., 1998; Shaw & Sellers, 2015; Wilder, 2017). Examples of personal predispositions include (1) maladaptive personality traits; (2) medical or mental health condition(s) that impair judgment, affect, self-esteem, perception, or self-control; (3) history of trauma, dysfunctional upbringing, developmental and learning delays; (4) past criminal behavior or rule violations; (5) social skills deficits, history of interpersonal conflicts, or social network risks such as being a member of an adversarial group or having significant relationships with individuals engaged in criminal conduct; and (6) drug or alcohol abuse (Band et al., 2006; Fischer, 2000; Liang, Biros, & Luse, 2016; Shaw & Sellers, 2015). It is important to emphasize that having a personal predisposition alone does not cause an individual to commit espionage, sabotage, workplace violence, and so on, but rather increases their risk, particularly when coupled with significant stressors.
Maladaptive personality traits are of particular concern when assessing insider threat. Many counterespionage and insider threat professionals support the view that many historical espionage cases were committed by individuals who exhibited pathological personality characteristics such as being highly manipulative; self-serving; thrill-seeking; lacking remorse or empathy; and exhibiting emotional dysregulation, egocentrism, social malevolence, or aversion to norms (Band et al., 2006; Fischer, 2000; Liang et al., 2016; Nurse et al., 2014; Shaw et al., 1998; Shaw & Sellers, 2015; Wilder, 2017). Insider threat literature has cited these traits, categorically, as psychopathy, narcissism, immaturity, and Machiavellianism (Fischer, 2000; Maasberg, Warren, & Beebe, 2015; Wilder, 2017). The aforementioned categories, with the exception of immaturity, are commonly referred to as the “Dark Triad” (Maasberg et al., 2015). Maasberg et al. (2015) propose that insider threat behaviors involving malicious intent or motivation to cause harm are directly connected to the Dark Triad personality traits.
Individuals with personal predispositions coupled with life stressors (e.g., personal, professional, recent loss) and poor coping skills may begin exhibiting problematic behaviors as a means of solving or eliminating the problem (Band et al., 2006; Moore et al., 2008; Shaw et al., 1998; Shaw & Sellers, 2015). Such behaviors, typically disruptive and noticeable in the work environment, include security violations, aggressive behavior, substance abuse, a decline in work productivity or cognitive agility, tardiness, or extreme fluctuations in mood. They may also reflect financial issues such as bankruptcy or unexplained affluence, or unreported foreign travel or contact (Band et al., 2006; Fischer, 2000; Liang et al., 2016; Shaw & Sellers, 2015; Wood, Crawford, & Lang, 2005). When the organization ignores such behaviors, or when its response is perceived negatively by the individual, the risk of an insider threat can escalate. More often than not, a hostile act does not occur without planning or preparation. Preparatory behaviors, such as a series of minor infractions to test an agency’s security vulnerabilities, are warning signals to an agency that an insider act may occur (Meloy, Hoffmann, Guldimann, & James, 2012). Other planning behaviors may include surveillance; acquiring resources or skills (e.g., weapons training); rehearsals; attempts to access unauthorized areas or information; downloading or printing large amounts of documents; or leaking intent to others (e.g., to a close friend or coworker, or in social media posts).
Conceptually, the Critical Path can serve as a framework for psychologists assessing potential insider threats. In addition, the elements of the framework can be used by data modelers and analysts to assign values that quantify risk and to establish baselines that set the stage for identifying anomalous behaviors (Shaw & Sellers, 2015).
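To make the quantification idea concrete, the sketch below combines counts of observed indicators across the four Critical Path elements into a single weighted score. The element names come from the framework, but the weights, counts, and scoring rule are purely illustrative assumptions, not values from Shaw and Sellers (2015) or any operational system.

```python
# Hypothetical sketch: rolling up Critical Path observations into one
# weighted risk score. Weights and counts are illustrative assumptions.

CRITICAL_PATH_WEIGHTS = {
    "personal_predispositions": 1.0,
    "stressors": 1.5,
    "concerning_behaviors": 2.0,
    "problematic_organizational_responses": 1.0,
}

def risk_score(observations: dict) -> float:
    """Combine counts of observed indicators per element into one score."""
    return sum(CRITICAL_PATH_WEIGHTS[element] * count
               for element, count in observations.items())

# Example case: one predisposition, two stressors, one concerning behavior.
case = {"personal_predispositions": 1,
        "stressors": 2,
        "concerning_behaviors": 1,
        "problematic_organizational_responses": 0}
print(risk_score(case))  # 1.0*1 + 1.5*2 + 2.0*1 + 1.0*0 = 6.0
```

A score like this is only useful relative to a baseline; in practice, the modeling, weighting, and validation of such values is exactly where behavioral science consultation is needed.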
With the increased availability of information, combined with high-profile coverage of recent mass shootings across multiple news outlets, it is understandable that the notion “it could happen here” seems increasingly plausible. Although active shootings, particularly in the workplace, are statistically rare compared to other forms of violence, their devastating impact can have residual adverse consequences, especially on employees’ sense of safety. Understanding the detectable changes in the behavior and thinking of a person who is on a path to violence is essential for insider threat programs. A substantial body of literature exists in the field of violence and threat assessment, including related risk and preventative factors. Borum, Fein, Vossekuil, and Berglund (1999) suggest the following about targeted violence (i.e., mass murders, assassination attacks, and near-attacks):
Conceptually, there has been a shift from the violence prediction model, where dangerousness was viewed as dispositional (residing within the individual), static (not subject to change) and dichotomous (either present or not present) to the current risk assessment model where dangerousness or “risk” as a construct is now predominantly viewed as contextual (highly dependent on situations and circumstances), dynamic (subject to change) and continuous (varying along a continuum of probability). (p. 324)
Similar to the Critical Path framework, the Pathway to Violence is a conceptually based risk assessment model for understanding how the combination of underlying factors and current stressors, as well as organizational responses, can contribute to the risk of targeted violence (Calhoun & Weston, 2003; Fein & Vossekuil, 1998; Geck, Grimbos, Siu, Klassen, & Seto, 2017; Vossekuil, Fein, Reddy, Borum, & Modzeleski, 2004). In addition, the model highlights a sequential series of steps a person takes once he or she has decided to cause harm (to a person, group, or organization) and associated warning behaviors or indicators (Borum et al., 1999; Calhoun & Weston, 2003; Vossekuil, Borum, Fein, & Reddy, 2001; Vossekuil et al., 2004; White & Meloy, 2016). The steps along the pathway progress from “a felt grievance, to violence ideation, research and planning, specific preparations for violence, critical breaches of security or other boundaries, to a concluding attack” (White & Meloy, 2016, p. 34).
Targeted violence is not spontaneous or impulsive; in fact, a study conducted by the FBI on active shootings between 2000 and 2013 found that 77 percent of the shooters spent a week or longer planning the attack (Bulling, Scalora, Borum, Panuzio, & Donica, 2008; Fein & Vossekuil, 1998; Silver, Simons, & Craun, 2018; Vossekuil et al., 2004). In addition, on average, active shooters exhibited four to five concerning behaviors prior to the act and, of the cases where a specific grievance was known, over 50 percent were related to a perceived adverse action against the shooter (Silver et al., 2018). For many attackers, violence is believed to be an acceptable solution to their unbearable problem (e.g., feeling hopeless, trapped, humiliated, ashamed) or perceived injustice (Borum et al., 1999; Fein & Vossekuil, 1998; Vossekuil et al., 2004).
Though mission statements and definitions of “insider” and “threat” vary across organizations, the common features in most insider threat programs are the tasks of deterring, detecting, and mitigating potentially damaging acts by trusted insiders.
The concept of deterrence in the insider threat realm is to intercede in some way before an insider begins to contemplate a malicious act; redirect thoughts that could lead to malicious acts to a more productive, socially acceptable pathway; or restrict certain activities so that attempts at malicious acts are unlikely to succeed. Insider threat deterrence efforts can be described in three broad categories—prevention mechanisms, education efforts, and early intervention strategies. Prevention mechanisms can be procedural (e.g., random vehicle inspections may prevent an individual from bringing explosives onto the property) or technical (e.g., virus scanners that prevent a specific malicious code from being introduced into a system that could damage systems or exfiltrate data). Education efforts can also be preventative in nature, in that they can provide troubled insiders with skills and resources that may reduce stress and improve coping or may help coworkers identify potentially troubled colleagues and guide them to the resources they need. Early intervention strategies (e.g., individual financial counseling for those in financial distress and security awareness training) could help individuals rectify specific vulnerabilities (e.g., extreme debt) before desperation sets in and identify others’ attempts to subtly recruit them to perform damaging acts. It is equally important for insider threat or related security programs to periodically re-educate and inform employees of organizational policies (e.g., reportable information) and resources available, particularly when changes have occurred. How this information is communicated to the workforce may vary depending on the organizational structure and climate, but generally it should be transmitted through multiple mediums (e.g., newsletters, annual computer-based training, flyers, security awareness day) and easily accessible (e.g., hotlines, internal search engines, shared files and intranet sites).
Although deterrence may be the most difficult task to quantify—an insider threat program likely will never know if a specific individual moved from a pathway leading to a malicious act toward more productive behavior due to a deterrence activity—insider threat detection likely is the most daunting of all three tasks. Especially in government organizations, where the public often overestimates the quality of the data available and the ability to effectively process those data, public scrutiny creates a zero-tolerance environment for detection failures. Public outcry that followed recent examples of detection failures from the federal government (e.g., Navy Yard shooter Aaron Alexis and NSA leaker Reality Winner) reveals that many taxpayers have the expectation that the involved agencies “should have known” that an insider attack was about to occur due to the data available to those agencies or law enforcement. Approaching the problem from a research methodology point of view, the resources committed to processing Type I errors (i.e., false-positives)—anomalies detected that ultimately do not indicate a potential threat—must be balanced against the catastrophic consequences of a Type II error (i.e., a false-negative), in which anomalies that could have indicated a potential threat are not detected, and a malicious (possibly deadly) act is allowed to occur. If the sensitivity of the threat detection system is too high, insider threat professionals are overwhelmed by so many potential false-positives that they may not be able to analyze the data quickly enough to identify truly concerning behaviors. Conversely, if the threat detection system is overly specific, a false-negative—missing true indicators of a pending malicious act—may occur, devastating the organization and calling into question the resources expended on an insider threat program that failed to protect the organization.
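The Type I/Type II tension described above can be illustrated with a few lines of code. The sketch below sweeps an alert threshold over synthetic anomaly scores; every number is invented for illustration and has no relation to any real detection system.

```python
# Illustrative sketch of the sensitivity/specificity tradeoff: lowering
# the alert threshold inflates false-positives (Type I errors), while
# raising it risks false-negatives (Type II errors). All scores are
# synthetic, for illustration only.

benign_scores = [0.1, 0.2, 0.3, 0.4, 0.55, 0.6]  # non-threat users
threat_scores = [0.5, 0.7, 0.9]                  # true-threat users

def error_counts(threshold: float):
    """Count Type I and Type II errors at a given alert threshold."""
    false_positives = sum(s >= threshold for s in benign_scores)
    false_negatives = sum(s < threshold for s in threat_scores)
    return false_positives, false_negatives

for t in (0.3, 0.5, 0.8):
    fp, fn = error_counts(t)
    print(f"threshold={t}: {fp} false-positives, {fn} false-negatives")
```

No single threshold eliminates both error types here; choosing one is a policy decision about which error the organization can better afford, which is precisely the balancing act the paragraph describes.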
Nevertheless, some damaging acts (e.g., a suicide in the workforce) occur without any anomalous behaviors being observed in the data available to the threat detection system. At an individual level, impulsivity and discretion play a role in these events, which rarely follow a smooth, linear trajectory of escalation from contemplation to action. Given the infancy of some insider threat programs and the limitations of technological capabilities, “young” or under-resourced insider threat programs must rely heavily on the organization and its workforce to uncover concerning behaviors. More advanced and well-resourced insider threat programs can add sophisticated technical solutions to aid in detection (NITTF, 2017).
Methodologically, the solution seems simple: set up a “triage” system in which all potentially concerning behaviors are quickly evaluated by an analyst to determine whether a more in-depth assessment of those anomalies is warranted, and then pass the most concerning cases to specialists who can pore over the data to make a reasonable determination. In reality, many insider threat programs have so much data available that having enough human capital even to triage all anomalies is impossible; the data grow at a pace that hiring surges cannot match. Therefore, insider threat programs have looked to technology to provide automated solutions to the ever-rising tidal wave of data.
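The automated triage idea can be sketched simply: machine-score every alert, route only the highest-scoring cases to the limited pool of human specialists, and defer the rest for automated or retrospective review. The alert contents, scores, and capacity figure below are hypothetical.

```python
# Minimal triage sketch: given more alerts than analysts can review,
# send only the top-N by machine score to human review. All alerts and
# scores are hypothetical.
import heapq

def triage(alerts, analyst_capacity):
    """Split alerts into those for human review (top-N by score) and
    those deferred to automated/retrospective analysis."""
    for_review = heapq.nlargest(analyst_capacity, alerts,
                                key=lambda a: a["score"])
    deferred = [a for a in alerts if a not in for_review]
    return for_review, deferred

alerts = [{"id": 1, "score": 0.91}, {"id": 2, "score": 0.12},
          {"id": 3, "score": 0.77}, {"id": 4, "score": 0.05}]
review, deferred = triage(alerts, analyst_capacity=2)
print([a["id"] for a in review])  # alerts 1 and 3 go to analysts
```

The hard part, of course, is not the routing but the scoring: deciding what the score should measure is a behavioral science question, not an engineering one.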
The most prominent technical tool associated with insider threat programs is user activity monitoring (UAM) software. UAM is a technical solution that allows an organization to track and record the activities of end users on organizationally owned information technology equipment. The volume and scope of the activities that are tracked and recorded, however, are largely dependent on the organization’s objectives for using UAM and the resources dedicated to those objectives. Obviously, the larger the monitoring/recording aperture, the more resources must be dedicated to data storage, data processing, and governance and system maintenance. In Hollywood-style espionage thrillers, every keystroke is captured, recorded, and scrutinized in real time, then stored for eternity. In reality, every organization would need a data center more than twice the size of its other buildings and tens of thousands of employees working 24/7 to accomplish that feat. To manage resource demands and limit the scope of monitoring to the interests of the organization, UAM tools employ indicators (a.k.a. “triggers” or “policies”) designed to detect potentially malicious or damaging behavior on the network, or, when most effective, the antecedents to potentially malicious or damaging behavior. Predictably, with most UAM tools being designed by computer engineers or other allied professionals, a great emphasis is placed on developing indicators to detect attempts at damaging the computer systems or networks (e.g., introducing malicious code), as well as unauthorized data exfiltration.
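A UAM indicator of the kind described above can be thought of as a simple rule over recorded events. The sketch below flags large copies to removable media outside business hours; the event schema, field names, and thresholds are invented for illustration and do not come from any real UAM product.

```python
# Hypothetical UAM indicator ("trigger"): flag unusually large transfers
# to removable media outside business hours. Event schema, field names,
# and thresholds are illustrative assumptions only.

LARGE_TRANSFER_BYTES = 500 * 1024 * 1024  # 500 MB
BUSINESS_HOURS = range(7, 19)             # 07:00-18:59

def removable_media_indicator(event: dict) -> bool:
    """Return True if the event should generate an alert for analysts."""
    return (event["destination"] == "removable_media"
            and event["bytes"] > LARGE_TRANSFER_BYTES
            and event["hour"] not in BUSINESS_HOURS)

event = {"destination": "removable_media",
         "bytes": 2 * 1024**3,  # 2 GB copied
         "hour": 23}            # 11 p.m.
print(removable_media_indicator(event))  # True -> alert generated
```

Real UAM policies are far richer, but the structure is the same: each indicator narrows the monitoring aperture to behaviors (or antecedents) the organization has decided are worth an analyst’s attention.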
Due to consumer demand and the recognition that many people demonstrate behaviors on computer networks that may indicate future dangerous acts, the term “behavioral analytics” has become a buzzword that loosely describes the analysis of network activity for the purpose of predicting behaviors. In some contexts, behavioral analytics are focused solely on the analysis of behavior on the network and how that may predict future “digital behavior,” but other contexts include more diverse data and a broader application of the analysis of those data (e.g., trying to determine whether an individual’s network activity indicates that he or she is at higher risk of committing workplace violence). A specific component of the broader field of behavioral analytics is sentiment analysis, which is an attempt to derive emotion or other implicit meaning from network activity. Sentiment analysis is a burgeoning field in insider threat. Though meaningful sentiment appears to have been derived in some studies and applications (e.g., Brown, Greitzer, & Watkins, 2013; Greitzer & Ferryman, 2013), data cleaning and preparation problems still exist in large-scale applications. For example, the use of sarcasm is a common difficulty encountered by those who design sentiment analysis models, since the language of sarcasm often is exactly the opposite of the intended meaning, making the automated analysis of that text a challenging proposition (Maynard & Greenwood, 2014).
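The sarcasm problem can be demonstrated with a toy lexicon-based scorer, the simplest form of sentiment analysis. The tiny lexicon below is invented for illustration; the point is that because the surface words are positive, a naive model scores the sarcastic sentence as strongly positive even though the writer’s intent is negative.

```python
# Toy lexicon-based sentiment scorer illustrating why sarcasm defeats
# naive models: positive surface words, negative intent. The lexicon is
# invented for illustration.

LEXICON = {"great": 1, "love": 1, "wonderful": 1,
           "terrible": -1, "hate": -1, "awful": -1}

def sentiment(text: str) -> int:
    """Sum word-level sentiment values; words not in the lexicon score 0."""
    return sum(LEXICON.get(word.strip(".,!?").lower(), 0)
               for word in text.split())

sarcastic = "Oh great, another wonderful mandatory audit. I just love it!"
print(sentiment(sarcastic))  # 3: scored positive despite sarcastic intent
```

Production sentiment models are far more sophisticated than word counting, but as Maynard and Greenwood (2014) note, sarcasm remains difficult even for them, because the problem lies in intent rather than vocabulary.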
If a potential insider threat cannot be deterred or prevented, steps must be taken to mitigate the problematic behavior. Mitigation activities can occur at a number of different levels of the organization and can be conducted both by personnel formally assigned to an insider threat program and by those in supporting roles. The nature of the threat and organizational resources play a large role in determining specific mitigation strategies and methods. General categories of responses can be identified, but none should be seen as the only option to address an identified threat.
Routine activity theory (RAT) is a criminological theory suggesting that criminal activity is a function of a motivated offender, an accessible target of value, and the failure to provide adequate guardianship for the target of value (Cohen & Felson, 1979). Some organizations have substantive prevention mechanisms (e.g., disabled CD/DVD drives and real-time USB device alerts) that prevent potentially more common threats from occurring by providing adequate guardianship and reducing the accessibility of the target. In addition, organizations attempt to reduce the number of motivated offenders through applicant and employee vetting programs. For those “hardened” organizations—which, theoretically, have greatly reduced the probability of many other potential threats—a relatively common threat to the workforce detected by insider threat programs is the indication of possible suicidal ideation. For organizations that are well resourced and can provide services beyond management and leadership intervention, emergency responders and threat assessment and management teams may be deployed to interdict and interview the subject, and employee assistance program (EAP) personnel may offer to provide services to mitigate the threat. Options for less-resourced organizations include interviews by supervisors/management or human resources personnel and the activation of local emergency responders to assess the subject’s well-being and intervene just as they would if called out to respond to a potentially suicidal individual in the community. Similar processes may be employed in response to a threat to harm others or damage the facility. To ensure the most effective and safe mitigation practices, operational psychologists tasked with threat management responsibilities should ensure competency through formal, evidence-based training and consultation with experienced peers.
If available, additional resources (e.g., financial counseling, addictions support groups, resiliency training) may be engaged or encouraged to support the individual in distress.
Often, the initial, “play-it-safe” response to an identified threat is to immediately remove the individual from authorized access to the organization’s personnel, facilities, data, or systems. Theoretically, and often in reality, immediate suspension of authorized access for a subject greatly reduces the immediate threat to the organization. However, restricting access may come at a cost. If an individual is restricted only from authorized access to a particular information system or network, then the organization may be obligated to find duties that correspond with the subject’s newly restricted access, at least until the investigation or administrative action is complete. Further, restricting authorized access does not necessarily prevent the individual from seeking unauthorized access, and the embarrassment, shame, or other negative feelings that could arise from being “walked out” of a department or facility may increase a subject’s motivation to commit a malicious act. In addition, if one component of the threat was a potential threat to self, removal from a facility greatly reduces the organization’s knowledge of the subject’s whereabouts and activities, potentially decreasing the subject’s social support and dramatically increasing the opportunity to make a suicide attempt without anyone attempting to intervene. If the threat is related to espionage, leaking, or other data exfiltration, suspending authorized access greatly limits the ability of security and counterintelligence investigators to collect evidence, observe behavioral patterns, and potentially identify co-conspirators. Similar considerations should occur when the decision is employment termination; past insider threat cases clearly demonstrate that revoking access does not prevent such an act from occurring and could even escalate the risk. These possibilities highlight the importance of a multidisciplinary approach to insider threat.
Depending on the type of identified threat and the authorities of the affected agency, internal or external law enforcement may be brought in to help mitigate the threat. Of course, law enforcement officers must follow their professional standards, which sometimes are not consistent with how insider threat and security personnel believe a threat should be mitigated. Some of the same risks described earlier in the Access Restrictions section apply, only with potentially more significant consequences, since the organization now has linked itself to law enforcement activities over which it may not have any control (e.g., local law enforcement). A subject who believes “they [the organization] had me arrested!” may develop a much deeper and unshakable grievance than a subject who just had authorized access suspended. On the other hand, an organization may be convinced that espionage is taking place, but law enforcement may not believe that the evidence supports the organization’s position, leaving that organization without the mitigation method it believes is most effective for the situation.
Operational psychologists who have experience consulting to counterespionage, counterterrorism, and insider threat investigations and activities can offer a tremendous amount of expertise to deterrence efforts (e.g., psychosocial education), the design and modification of detection systems (both technological and methodological means), and how to respond to threats once identified (i.e., mitigation). Common applications of operational psychology in insider threat are detailed next, with the recognition that many, if not most, applications are not discrete within the deter, detect, and mitigate tasks, but rather serve cross-domain purposes. The applications can be categorized into four primary functional areas—research and development, consultation, training and education, and liaison activities.
Operational psychologists within insider threat programs or consulting to insider threat programs often are the only personnel—or among the few—with formal education and professional experience in research design, statistics, and deliberate use of the scientific method. For example, operational psychologists may design and conduct studies that ultimately influence how insider threat data are modeled, triaged, and analyzed (NITTF, 2017). Operational psychologists may also conduct case studies of known espionage, unauthorized disclosure (i.e., “leaking”), or targeted violence cases and present the findings to the organization and greater insider threat community to advance lessons learned and improve processes. Operational psychologists may be tasked to study a specific problem or agency resource and determine the potential impact from an insider threat perspective. The research and development possibilities are nearly as endless as the growing amount of insider threat data. In sum, if provided sufficient resources and authority to conduct research and development activities, operational psychologists can have a tremendous impact not only on the future of insider threat activities in the organization, but also on the field as a whole.
Within the functional applications of operational psychology to insider threat programs, consultation is the cornerstone: it is where operational psychologists have the greatest direct impact and demonstrate the most obvious value to the organization. Consultation in insider threat serves all three primary tasks—deter, detect, and mitigate—in a multitude of ways. As consultants, operational psychologists advise decision makers on the application of psychological science to both common and unique problems that insider threat programs attempt to solve, evaluate second- and thirdhand information relating to specific potential threats, and even provide direct assessment to ensure a trusted workforce. Some common specific consultation activities are discussed here.
Operational psychology consultation to insider threat detection and analysis is a rapidly emerging application within insider threat programs. In this area, operational psychologists rely on their unique expertise (e.g., research, statistics, scientific method, human factors, organizational psychology, clinical psychology, and other related fields) to collaborate with analysts, data scientists and modelers, investigators, and other insider threat personnel to design and refine detection and analysis strategies and methods. Operational psychologists may drive efforts to utilize sentiment analysis and text analytics to detect specific types of threats, or may advise on appropriate methodology to ensure that regular review of threat detection data models is scientifically rigorous enough to provide meaningful results. Operational psychologists may also work with insider threat analysts to triage alerts for false positives and review potentially concerning information in the context of the Critical Path to insider threat (Shaw & Sellers, 2015), the Pathway to Violence (Calhoun & Weston, 2003), or other theoretical frameworks that may explain potentially threatening behaviors.
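To make the triage idea concrete, the sketch below shows a deliberately simplistic, lexicon-based text screen of the kind a sentiment-analysis pipeline might begin with. The word lists, function names, and threshold here are hypothetical illustrations, not any agency’s actual method; operational detection models are far more sophisticated and would be validated under the scientific rigor described above.

```python
# Illustrative sketch only: a toy lexicon-based triage that flags messages
# containing grievance-related terms for human analyst review. The term
# lists and threshold below are invented for illustration.

GRIEVANCE_TERMS = {"unfair", "betrayed", "revenge", "hate", "punish"}
MITIGATING_TERMS = {"thanks", "appreciate", "resolved"}

def triage_score(message: str) -> int:
    """Crude score: +1 per grievance term, -1 per mitigating term."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & GRIEVANCE_TERMS) - len(words & MITIGATING_TERMS)

def flag_for_review(messages: list[str], threshold: int = 1) -> list[str]:
    """Return only the messages whose score meets the threshold."""
    return [m for m in messages if triage_score(m) >= threshold]

if __name__ == "__main__":
    sample = [
        "Management was unfair and I feel betrayed.",
        "Thanks, the ticket is resolved.",
        "Lunch at noon?",
    ]
    print(flag_for_review(sample))  # only the first message is flagged
```

Even this toy version makes the false-positive problem visible—an innocuous complaint about a sports team being “punished” would be flagged—which is precisely why psychologists review alerts in the context of theoretical frameworks rather than treating automated output as conclusive.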
The field of threat assessment in psychology can trace many of its roots back to operational psychologists (though the field had not been defined as such at that time) who provided consultation and conducted research in support of federal law enforcement functions, specifically on behalf of the U.S. Secret Service and Federal Bureau of Investigation (FBI). Fein and Vossekuil (1997) conducted what is often referred to as the “Exceptional Case Study,” in which they studied 83 cases of targeted violence (violent attack or attempted violent attack on a specific public figure or institution) that occurred between 1949 and 1996. This study provided a foundation for later works advocating evidence-based threat assessment practices (e.g., Borum et al., 1999; Fein & Vossekuil, 1998). This research, as well as work done by others in psychology, criminology, and sociology (e.g., Calhoun & Weston, 2009; Meloy & Hoffmann, 2013), drove the formation of the modern threat assessment field.
Much like psychologists consulting to law enforcement, operational psychologists who consult to insider threat inquiries and investigations of possible threats review legally collected information (often secondhand and thirdhand) about a scenario and subject, then advise decision makers about gaps in the information, how available data may suggest motivations, where the subject may stand along the Pathway to Violence (Calhoun & Weston, 2003), and possible precautions and intervention strategies. Operational psychologists may assess information to assist threat management and law enforcement personnel in determining the imminence of a potential threat and the resources necessary to appropriately evaluate and mitigate the threat. For example, an operational psychologist may recognize a pattern of reported or observed behavior suggesting that an individual may become verbally or physically aggressive when he or she feels cornered, and may suggest intervention strategies that minimize the individual’s perception of being threatened. Likewise, operational psychologists may describe interview techniques that increase the likelihood of gathering valid and reliable information and outline how the application of those techniques may be tailored to a specific individual. The scenarios involving operational psychology consultation to threat assessment are seemingly endless, but each consultation opportunity must be evaluated on the grounds of safety, legality, ethics, and effectiveness.
Certain organizations have well-established and enduring requirements of psychological fitness for employees due to the sensitivity of their jobs, with certain jobs having further heightened requirements due to special tasks that present a risk to the organization (e.g., those who are authorized to carry a service weapon) or risk to national security (e.g., those who have a security clearance or special access to other sensitive information). Operational psychologists often are asked to conduct psychological evaluations of applicants following a conditional offer of employment to ensure that they are psychologically fit enough to carry out the sensitive tasks required by the organization. In addition, operational psychologists in some organizations are tasked with directly evaluating current employees’ psychological fitness following a significant event (e.g., fitness for duty evaluation) or on a regular basis (e.g., annual examinations) for those who meet certain job criteria. All of these functions serve to prevent individuals who may pose a threat from initially entering the organization (deterrence), identify employees who currently may pose a threat (detection), and provide services and other strategies to best handle the threat (mitigation).
Psychologists in the Department of Defense (DoD) and Intelligence Community (IC) have a long history of consulting to security clearance adjudicators regarding clearance eligibility; those consultations often include a direct assessment of the subject of an adjudication, as noted in the previous section. The requirement for such consultation currently exists in Security Executive Agent Directive 4: National Security Adjudicative Guidelines (SEAD 4), which sets the standards by which all security clearance determinations are made (Office of the Director of National Intelligence [ODNI], 2017). Specific to psychologist involvement are Guidelines G (Alcohol Consumption), H (Drug Involvement and Substance Misuse), and I (Psychological Conditions), which detail how adjudicators are to evaluate information about a subject’s conditions and behaviors as they relate to these topic areas. All three guidelines detail the “concern” and factors that “could raise a concern and be disqualifying” as well as factors that “could mitigate security concerns” (pp. 16–20). Guideline I (Psychological Conditions) specifically references psychologists when describing the concern:
The Concern: Certain emotional, mental, and personality conditions can impair judgment, reliability, or trustworthiness. A formal diagnosis of a disorder is not required for there to be a concern under this guideline. A duly qualified mental health professional (e.g., clinical psychologist or psychiatrist) employed by, or acceptable to and approved by the U.S. Government, should be consulted when evaluating potentially disqualifying and mitigating information under this guideline and an opinion, including prognosis, should be sought. No negative inference concerning the standards in this guideline may be raised solely on the basis of mental health counseling. (p. 19)
In this context, psychologists may provide consultation on information already gathered and, in many cases, directly evaluate the subject of the adjudication on behalf of the organization and provide those findings to the organization. The organization then can use that consultation and evaluation to inform the clearance determination that is made according to the overall guidance provided in SEAD 4.
Given that grievances about real or perceived injustices at work or an overall unsupportive work environment may increase the probability of an individual becoming a threat to the organization, insider threat programs and the operational psychologists supporting those programs have a vested interest in working with others in the organization to ensure a safe, comfortable, and productive work environment. Operational psychologists often consult with management about how to handle difficult behaviors and team dynamics in the workplace, as well as how to encourage and maintain team cohesion and productivity. The resulting leadership engagement not only helps create a more positive work environment, but also likely results in leadership knowing more about individual employees so that timely intervention can take place before an employee advances too far down a destructive path.
The terms training and education often are used interchangeably, but some organizations, especially in the federal government, draw specific distinctions between them for resource management and mission alignment purposes. For clarity in the current chapter, training refers to learning events that produce a specific skill, whereas education refers to learning events that contribute to much broader professional development (e.g., a degree or certificate program). Operational psychologists play a key role in training and education for insider threat, security, counterintelligence, and law enforcement personnel, as well as for the general workforce. For specific personnel, operational psychologists may conduct training that is narrow and well defined (e.g., how to recognize and protect against adversarial persuasion techniques) or may provide expertise on broader psychological concepts as they contribute to coursework in a formal education program. In addition, training may be provided to the general workforce that assists an insider threat program in improving the “human sensor network” (i.e., the workforce watching for and reporting potential threats) by helping the workforce recognize concerning behaviors and reduce their own inhibitions about reporting those behaviors.
As scientists, it is the obligation of operational psychologists to share knowledge with one another and the larger fields of psychology, security, insider threat, and counterintelligence, though the information sharing may be limited due to the sensitivity of the information being utilized. This obligation, the responsibility for those in the DoD and IC to share lessons learned, and the general desire for professional collaboration dictate that liaison activities are an essential function for operational psychologists working in or consulting to insider threat programs. Liaison activities include traditional conferences and symposia where research, current practices, and lessons learned are presented, but also include specific information exchanges between agencies and interagency working groups formulated to solve common problems.
Understanding that the focus of insider threat truly is a behavioral science problem that technical, investigative, and other means are attempting to solve, an increasing number of investigators, counterintelligence agents, analysts, and modelers have begun attending these gatherings that previously were attended only by operational psychologists. In addition, some leaders within insider threat programs have recognized the multidisciplinary value of presentations and briefings at these events and have begun offering supporting psychologists as representatives of their programs at broader insider threat liaison events.
As with the activities of anyone who works in applied psychology, it is relatively easy to recognize some potential ethical pitfalls that could be associated with the applications described earlier. Regardless of the potential pitfalls, two important factors remain:
Though no list ever will be comprehensive, some common examples of ethical principles often discussed in this area and associated guidelines are described in the following paragraphs.
Given that the organization is the identified client in most scenarios in which operational psychologists are supporting insider threat programs, Standard 3.11 (Psychological Services Delivered to or through Organizations) of the APA’s Ethics Code (APA, 2017) nearly always applies to how services are provided and is a foundational pillar for designing ethical operational psychology services in support of insider threat programs. The standard is as follows:
3.11 Psychological Services Delivered to or through Organizations
For example, when conducting research and development activities on behalf of an insider threat program, it is essential to lay out the “nature and objectives” of the research and the “probable uses” of the results, and to provide those results to the organization “as soon as feasible.” If any human subjects are to be used in the research, all of the aforementioned provisions apply so that subjects have appropriate expectations and can make informed decisions about participation. Of course, many standards within Section 8 (Research and Publication; APA, 2017) would apply, with consideration to the sensitivity of the information being gathered and associated sharing and publication restrictions. Another salient example of the ethical complexity operational psychologists routinely encounter arises when they are asked to conduct a psychological evaluation for security clearance adjudication. The evaluation is a component of a larger consultation to the adjudicator, who, along with the agency, is the identified client. Yet the evaluation is conducted on an individual (clearly “affected by the services” provided to the client, as referenced in Standard 3.11) who submits to the evaluation voluntarily but faces potentially significant professional consequences if the evaluation is declined, since the lack of information for the adjudicator could contribute to security clearance revocation and associated employment limitations. Further, the purpose of conducting the evaluation is to share that information with the organization, and the organization may find it necessary to share it with multiple levels of decision makers.
Clearly, psychologists must provide a comprehensive consent process and confirm understanding of that consent (ensuring the consent truly is informed) so that the individual being assessed clearly understands the primary client for the evaluation, the risks involved with participation, and how that information may be shared. The affected individual (i.e., the subject of the assessment) must make a decision by weighing the importance of privacy against the desire to retain a security clearance and current employment, with no guarantee that participation will have a positive impact on those factors (though nonparticipation increases the likelihood of a negative impact). Also important is ensuring that nonpsychologists do not inadvertently use psychologist names or credentials (e.g., using a psychologist’s release form) on behalf of security to facilitate collecting healthcare information from external providers. Of course, Standards 3.10 (Informed Consent), 4.04 (Minimizing Intrusions on Privacy), 4.05 (Disclosures), and the standards in Section 9 (Assessment) also apply to this application of operational psychology.
How psychologists handle sensitive information has been a focus since the first APA Ethics Code was introduced in 1953 (APA, 1953). People entrust psychologists with sensitive and sometimes damaging information about themselves and others, and psychologists—especially in the clinical and counseling professions—quickly would lose the public trust if information were disclosed without care or consideration of those who might be impacted. That general principle of protecting people’s private information inevitably creates tension for those who are tasked with consulting to an organization about individuals’ psychological fitness, decision making, and overall behavior. Often, the psychologists working in this capacity do not belong to components within the organization that are considered a “covered entity” under the Health Insurance Portability and Accountability Act of 1996 (HIPAA, 1996), nor are they providing services covered under HIPAA, so primary legal guidance for handling sensitive information is derived from the Privacy Act of 1974, all associated exemptions, and case law. It is therefore logical for Standard 3.11, Psychological Services Delivered to or through Organizations (APA, 2017), to focus so heavily on how information will be shared and on the extreme importance of informed consent, limits of confidentiality, and privacy.
For psychologists, informed consent and limits of confidentiality are relatively easy to discuss in a direct interaction that is well defined, even in the case of operational psychologists who are supporting insider threat activities. A psychologist who conducts a direct psychological evaluation for security clearance adjudication may discuss very clear facts about the role of the psychologist and relationships within the organization, the purpose of the evaluation, how information is gathered and shared, who may have access to the information, how declining the evaluation will be reported to the adjudicator (without an evaluative statement), and that confidentiality is significantly limited. Fitness-for-duty evaluations, special assignment evaluations, and other direct assessments can be approached in a similar manner. Further, psychologists in these scenarios are compelled to minimize intrusions on privacy in accordance with Standard 4.04 (Minimizing Intrusions on Privacy) and can provide the greatest protection by limiting the details in an assessment report to only those data that may impact the findings, conclusions, and recommendations in the report. Although Standard 1.01 (Misuse of Psychologists’ Work) still applies after a report is provided to the client (i.e., the organization), operational psychologists must acknowledge and take care to ensure that the individual being assessed understands that the psychologist has little to no control over how a report is shared following its release to the requestor. Psychologists who operate by the “law of no surprises” (i.e., covering as many possible outcomes of the interaction as possible) in providing truly informed consent and limits of confidentiality provide greater protection for the individual being assessed, the organization, and themselves.
Though obtaining it is not the responsibility of operational psychologists, informed consent about how information is gathered also is important to the operational psychologists who consult to insider threat analysts, threat management personnel, and other entities who request consultation on information derived from indirect sources. For example, many organizations take great care to inform the end users of information systems (inevitably, everyone who works in the organization) that network activity is monitored, including some information that is considered privileged (e.g., in some organizations, communications with a psychotherapist over organizationally owned information systems are privileged). Many organizations provide this informed consent via required training (often annual) and a security banner, on which the end user must click “I agree” or some other affirmative statement before being allowed to use that information system. As expected, organizations typically work with legal counsel to ensure that this form of consent meets legal requirements, as do any methods to gather information about employee activities. Operational psychologists presented with this information must recognize its sensitivity and share only what is professionally necessary during consultative activities. Operational psychologists may, and sometimes must, develop hypotheses from incomplete information for insider threat professionals to test, but must refrain from offering informal speculation that has no bearing on the threat assessment or other consultative activity.
When alerted to concerning information, insider threat professionals often engage operational psychologists for consultation regarding any potential threat those data may indicate, gaps in the data, and methods to mitigate the potential threat. Though sometimes the questions asked during consultation appear clinical in nature (e.g., “Do the data suggest a threat to self?”), traditional clinical practice guidelines often do not apply. Using a clinical example, if credible information were available (regardless of source) suggesting that a patient/client was having suicidal thoughts, no responsible clinician would end a session with that patient/client without exploring further to confirm that there is no specific plan or intent to commit suicide. Likewise, if insider threat data presented to an operational psychologist suggest potential suicidality but no specific plan or intent, those data cannot be dismissed as “vague suicidal thoughts,” because the operational psychologist is not in a position to directly inquire about plan or intent. Using clinical principles to inform this vital operational psychology practice in the absence of accepted standards of practice requires that operational psychologists pay special attention to Standards 2.01(e) (Boundaries of Competence), 9.01(c) (Bases for Assessments), and 9.02(b) (Use of Assessments) to ensure that they are practicing responsibly, seeking out appropriate consultation and supervision, and informing the recipients of their consultation services of the limitations inherent in opinions provided during this type of consultative activity (APA, 2017). In fact, Standard 2.01(e) provides salient guidance for most of operational psychology:
2.01 (e) In those emerging areas in which generally recognized standards for preparatory training do not yet exist, psychologists nevertheless take reasonable steps to ensure the competence of their work and to protect clients/patients, students, supervisees, research participants, organizational clients, and others from harm. (p. 5)
It is essential that operational psychologists in all areas strive to “ensure the competence of their work” and take reasonable steps to protect organizational clients, research participants, and others from harm.
In all areas of applied psychology, just as with those who provide healthcare, operational psychologists who work in insider threat must be keenly aware of multiple relationships and potential conflicts of interest and take appropriate action to mitigate those concerns when identified. Operational psychologists, like their clinical and counseling colleagues, are guided by Standard 3.05(a); they “[refrain] from entering into a multiple relationship if the multiple relationship could reasonably be expected to impair the psychologist’s objectivity, competence or effectiveness in performing his or her functions … or otherwise risks exploitation or harm to the person with whom the professional relationship exists” (APA, 2017, p. 6). Delivering services to and through organizations further complicates the issues of both multiple relationships and conflicts of interest, since the identified client (the organization) may differ from an individual who is either being evaluated or otherwise impacted by the services rendered. The size, resources, and location of an organization also influence decision making, in that some operational psychologists must operate like a “rural psychologist,” using peer consultation and other methods to minimize the impact of the inevitable multiple relationships that arise.
Similarly, operational psychologists must routinely evaluate service requests and activities for conflicts of interest, and must do so on a case-by-case basis—though an operational psychologist may have a conflict when providing a particular service involving a certain individual, the same conflict may not exist when the individual is different. Being in a particular role or being restricted to providing a particular service does not protect an operational psychologist from all potential conflicts of interest while serving in that capacity. Likewise, the possibility of future professional interactions in one role with an individual does not automatically create a conflict for providing services in a current role. Standard 3.06 (APA, 2017) directs psychologists to “refrain … when personal, scientific, professional, legal, financial or other interests or relationships could reasonably [emphasis added] be expected to (1) impair their objectivity, competence or effectiveness … or (2) expose the person or organization with whom the professional relationship exists to harm or exploitation” (p. 6). Qualifiers such as reasonably are used thoughtfully throughout the Ethics Code in acknowledgment that its authors cannot anticipate every situation psychologists will encounter, and that individual psychologists must carefully evaluate each situation, consulting with peers when necessary to determine reasonable thresholds. Given enough time and forethought, many services provided to or through an organization could be argued against on a basis of conflict of interest, but whether the complex relationships and roles could reasonably result in a true conflict of interest based on “personal, scientific, professional, legal, financial or other interests or relationships” is at the heart of the issue.
If a conflict that meets the “reasonably” criterion is identified, operational psychologists must take appropriate action to mitigate the conflict.
Given that the overarching goals of all insider threat programs are to detect, understand, predict, and influence human behavior, operational psychologists have a tremendous amount to offer in support of those objectives. Behaviors and philosophies are changing (e.g., beliefs about the right to information as seen in recent unauthorized disclosure cases), and the ever-changing motivations and activities of future insider threats require critical research to advance deterrence, detection, and mitigation strategies. Combining knowledge of human development, motivation, relationships, behavior patterns, research methodology, statistics, and other related areas with skills honed through years of professional practice, operational psychologists are uniquely positioned to drive the future of insider threat detection, deterrence, and mitigation. Operational psychologists utilize effective research and development programs to inform insider threat practice and policy, and liaise with other professionals to ensure that best practices are being distributed. Operational psychologists educate insider threat, security, and counterintelligence professionals on the psychological science of insider threat behavior and provide consultation to identify potential threats and develop effective mitigation strategies. By teaming with analysts, threat management, and other insider threat, security, and counterintelligence professionals, operational psychologists provide a vital service to organizations and communities that ultimately saves lives.
American Psychological Association. (1953). Ethical standards of psychologists. Washington, DC: Author.
American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. Retrieved from https://www.apa.org/ethics/code/ethics-code-2017.pdf
Band, S. R., Cappelli, D. M., Fischer, L. F., Moore, A. P., Shaw, E. D., & Trzeciak, R. F. (2006). Comparing insider IT sabotage and espionage: A model-based analysis (No. CMU/SEI-2006-TR-026). Retrieved from https://resources.sei.cmu.edu/asset_files/TechnicalReport/2006_005_001_14798.pdf
Borum, R., Fein, R., Vossekuil, B., & Berglund, J. (1999). Threat assessment: Defining an approach for evaluating risk of targeted violence. Behavioral Sciences and the Law, 17, 323–337.
Brown, C. R., Greitzer, F. L., & Watkins, A. (2013). Toward the development of a psycholinguistic-based measure of insider threat risk focusing on core word categories used in social media. Proceedings of the Nineteenth Americas Conference on Information Systems, Chicago, Illinois, August 15–17, 2013. Retrieved from https://pdfs.semanticscholar.org/3977/6d135c25e16217ef25e19a2ecdda496bb556.pdf
Bulling, D., Scalora, M., Borum, R., Panuzio, J., & Donica, A. (2008). Behavioral science guidelines for assessing insider threats. Publications of the University of Nebraska Public Policy Center, 37.
Calhoun, F. S., & Weston, S. W. (2003). Contemporary threat management: A practical guide for identifying, assessing, and managing individuals of violent intent. San Diego, CA: Specialized Training Services.
Calhoun, F. S., & Weston, S. W. (2009). Threat assessment and management strategies: Identifying the howlers and hunters. Boca Raton, FL: CRC Press.
Cohen, L. E., & Felson, M. (1979). Social change and crime rate trends: A routine activity approach. American Sociological Review, 44(4), 588–608. doi: 10.2307/2094589
Exec. Order No. 13587, 3 C.F.R. 276–280 (2011), reprinted in 50 U.S.C. 3161 app. at 606–07 (2012).
Fein, R. A., & Vossekuil, B. (1997). Preventing assassination: Secret Service Exceptional Case Study Project [Monograph]. National Institute of Justice, U.S. Department of Justice, Washington, DC. Retrieved from https://www.ncjrs.gov/pdffiles1/Photocopy/167224NCJRS.pdf
Fein, R. A., & Vossekuil, B. (1998). Protective intelligence and threat assessment investigations: A guide for state and local law enforcement officials. National Institute of Justice, U.S. Department of Justice, Washington, D.C.
Fischer, L. F. (2000). Espionage: Why does it happen? Richmond, VA: Department of Defense Security Institute.
Geck, C. M., Grimbos, T., Siu, M., Klassen, P. E., & Seto, M. C. (2017). Violence at work: An examination of aggressive, violent, and repeatedly violent employees. Journal of Threat Assessment and Management, 4(4), 210–229.
Greitzer, F. L., & Ferryman, T. A. (2013, May). Methods and metrics for evaluating analytic insider threat tools. Paper presented at the 2013 IEEE Security and Privacy Workshops, San Francisco.
Health Insurance Portability and Accountability Act 1996. Pub. L. No. 104–191, 110 Stat. 1936, August 21 (1996). Washington, DC: U.S. Congress.
Liang, N., Biros, D. P., & Luse, A. (2016). An empirical validation of malicious insider characteristics. Journal of Management Information Systems, 33(2), 361–392.
Maasberg, M., Warren, J., & Beebe, N. L. (2015, January). The dark side of the insider: Detecting the insider threat through examination of dark triad personality traits. Paper presented at the 48th Hawaii International Conference on System Sciences, Kauai, HI, 3518–3526, doi:10.1109/HICSS.2015.423.
Maynard, D., & Greenwood, M. A. (2014). Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis. In Proceedings. Language Resources and Evaluation Conference (LREC), May 26–31, 2014, Reykjavik, Iceland. Retrieved from http://eprints.whiterose.ac.uk/130763/1/sarcasm.pdf
Meloy, J. R., & Hoffmann, J. (2013). International handbook of threat assessment. New York: Oxford University Press.
Meloy, J. R., Hoffmann, J., Guldimann, A., & James, D. (2012). The role of warning behaviors in threat assessment: An exploration and suggested typology. Behavioral Sciences and the Law, 30, 256–279.
Moore, A. P., Cappelli, D. M., & Trzeciak, R. F. (2008). The “big picture” of insider IT sabotage across U.S. critical infrastructures. (No. CMU/SEI-2008-TR-009). Retrieved from https://resources.sei.cmu.edu/asset_files/TechnicalReport/2008_005_001_14981.pdf
National Defense Authorization Act for Fiscal Year 2017, Pub. L. No. 114–328, § 951, 130 Stat. (2016).
National Insider Threat Task Force. (2017). 2017 insider threat guide. Retrieved from https://www.dni.gov/files/NCSC/documents/nittf/NITTF-Insider-Threat-Guide-2017.pdf
Nurse, J. R., Buckley, O., Legg, P. A., Goldsmith, M., Creese, S., Wright, G. R., & Whitty, M. (2014, May). Understanding insider threat: A framework for characterizing attacks. Paper presented at the IEEE Computer Society 2014 Security and Privacy Workshops, San Jose, CA, 214–228, doi:10.1109/SPW.2014.38.
Office of the Director of National Intelligence. (2017). Security Executive Agent Directive 4: National security adjudicative guidelines. Retrieved from https://www.dni.gov/files/NCSC/documents/Regulations/SEAD-4-Adjudicative-Guidelines-U.pdf
Shaw, E., Ruby, K. G., & Post, J. M. (1998). The insider threat to information systems: The psychology of the dangerous insider. Security Awareness Bulletin, 2(98), 1–10.
Shaw, E., & Sellers, L. (2015). Application of the critical-path method to evaluate insider risks. Studies in Intelligence, 59(2), 1–8.
Silver, J., Simons, A., & Craun, S. (2018). A study of the pre-attack behaviors of active shooters in the United States between 2000 and 2013. Washington, DC: Federal Bureau of Investigation, U.S. Department of Justice. Retrieved from https://www.fbi.gov/file-repository/pre-attack-behaviors-of-active-shooters-in-us-2000-2013.pdf
Thompson, E. (1996, June 11). Soldier warned of attack, Fort Bragg comrade says. The Washington Post. Retrieved from https://www.washingtonpost.com/archive/politics/1996/06/11/soldier-warned-of-attack-fort-bragg-comrade-says/b41f2050-7f02-4de5-8e84-74ffa3a1d65b/
Vossekuil, B., Borum, R., Fein, R., & Reddy, M. (2001). Preventing target violence against judicial officials and courts. The ANNALS of the American Academy of Political and Social Science, 576, 78–90.
Vossekuil, B., Fein, R. A., Reddy, M., Borum, R., & Modzeleski, W. (2004). The final report and findings of the Safe School Initiative: Implications for the prevention of school attacks in the United States. Washington, DC: United States Secret Service and United States Department of Education. Retrieved from https://www.secretservice.gov/data/protection/ntac/ssi_final_report.pdf
White, S. G., & Meloy, J. R. (2016). Workplace assessment of violence risk: A structured professional judgment guide (3rd ed.). San Diego, CA: Specialized Training Services.
Wilder, U. M. (2017). Why spy? The psychology of espionage. Studies in Intelligence, 61(2), 19–36.
Wood, S., Crawford, K. S., & Lang, E. L. (2005). Reporting of counterintelligence and security indicators by supervisors and coworkers (No. PERS-TR-05-6). Monterey, CA: Defense Personnel Security Research Center.