Besides traditional security issues, such as viruses, human actions periodically result in loss of varying types and degrees. Improperly maintained equipment will fail. Data entry errors cause a domino effect of troubles for organizational operations. Software programming problems along with poor design and incomplete training caused the devastating crashes of two Boeing 737 Max airplanes in 2019 (as is discussed in more detail in Chapter 3, “What Is User-Initiated Loss?”). These are not traditional security problems, but they result in major damage to business operations.
History Is Not on the Users’ Side
No user is immune from failure, regardless of whether they are individual citizens, corporations, or government agencies. Many anecdotes of user failings exist, and some are quite notable.
The Target hack attracted worldwide attention when 110,000,000 consumers had their personal information compromised and abused. In this case, the attack began when a Target vendor fell for a phishing attack; the attacker then used the stolen credentials to gain access to Target's network through the vendor's connection. From there, the attacker was able to move through the network and eventually carry out the theft.
The infamous Sony hack resulted in disaster for the company: beyond causing immense embarrassment to executives and employees, it caused more than $150,000,000 in damages. In this case, North Korea obtained its initial foothold on Sony's network with a phishing message sent to Sony's system administrators.
From a political perspective, the Democratic National Committee and related organizations central to Hillary Clinton's presidential campaign were hacked in 2016 when an operative of the Russian GRU intelligence agency sent a phishing message to John Podesta, then chair of Hillary Clinton's campaign. The stolen emails were then strategically released through WikiLeaks, to embarrassing effect.
In the Office of Personnel Management (OPM) hack, 20,000,000 U.S. government personnel had their sensitive information stolen. It is assumed that Chinese hackers broke into the systems where the OPM stored the results of background checks and downloaded all of the data. The data contained not just the standard name, address, Social Security number, and so on, but also information about employees' health, finances, and mental illnesses, among other highly personal details, as well as information about their relatives. This information was obtained through a sequence of events that began with a phishing message sent to a government contractor.
From a physical perspective, the Hubble Space Telescope was essentially built out of focus, because a testing device was incorrectly assembled with a single lens misaligned by 1.3 mm. The reality is that many contributing errors led not only to the construction of a flawed device but also to the failure to detect the flaw before the telescope was launched.
In an even more extreme example, the Chernobyl nuclear reactor suffered a catastrophic failure. It caused the direct deaths of 54 people, approximately 20,000 others contracted cancer from the radiation leaks, and almost 250,000 people were displaced. All of this resulted from supposed human error, when technicians violated protocols to allow the reactor to run at low power.
These are just a handful of well-known examples where users have been the point of entry for attacks. The Verizon Data Breach Investigations Report (DBIR) also highlights W-2 fraud as a major type of crime involving data breaches. Thousands of businesses fall prey to this crime, in which criminals impersonate the CEO or a similar executive and send emails to the human resources (HR) department, requesting that an HR worker send copies of all employee W-2 statements to a supposedly new accounting firm. The criminals then use those forms to file fraudulent tax returns claiming refunds and/or to commit other forms of identity theft. Again, these attacks succeed because some person makes a mistake.
NOTE: If you are unfamiliar with U.S. tax matters, W-2 statements are the year-end tax reports that companies send to employees.
Other human failures can include carelessness, ignorance, lost equipment, leaving doors unlocked, leaving sensitive information insecure, and so on. There are countless ways that users have failed. Consequently, sometimes technology and security professionals speciously condemn users as being irreparably “stupid.” Of course, if technology and security professionals know all of the examples described in this section and don't adequately try to prevent their recurrence, are they any smarter? The following sections will examine the current approach to this problem and then how we can begin to improve on it.
There are a variety of ways to deal with expected human failings. The three most prevalent ways are awareness, technology, and governance.
Operational and Security Awareness
As the costs of those failings have risen into the billions of dollars and more failings are expected, the security profession has taken notice. The general response has been to implement security awareness programs. This makes sense: if users are going to make mistakes, they should be trained not to make them.
Just about all security standards require that users receive some form of awareness training. These standards are supposed to provide some assurance to third parties that certified organizations, such as credit card processors and public companies, provide reasonable security protections. Auditors then go in and verify that the organizations have provided the required levels of security awareness.
Unfortunately, audit standards are generally vague. There is usually a requirement that all employees and contractors have to take some form of annual training. This traditionally means that users watch some type of computer-based training (CBT) that is composed of either monthly 3- to 5-minute sessions or a single annual 30- to 45-minute session. CBT learning management systems (LMSs) usually provide the ability to test for comprehension. Reports are then generated to show the auditors to prove the required training has been completed.
As phishing attacks have grown in prominence, auditors have started to require that phishing simulations be performed. Many organizations have also adopted phishing simulations on their own initiative to better train their users. These simulations vary greatly in quality and effectiveness, but they do appear to decrease phishing susceptibility over time. As previously stated, this optimistically results in a 4 percent failure rate.
In general operational settings, training is provided, but there are few standards or requirements for such training. There may or may not be a safety briefing. There are sometimes compliance requirements for how people are to do their jobs, such as in the case of handling personally identifiable information (PII) in environments covered by regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS). The PCI DSS even requires that programmers receive training in secure programming techniques. NIST Special Publication 800-50, "Building an Information Technology Security Awareness and Training Program," goes further, attempting a more rigorous structure in the context of the Federal Information Security Management Act (FISMA).
Unfortunately, awareness training, security-related or otherwise, is poorly defined and broadly fails at creating the required behaviors.
Independent of awareness efforts, IT or security technology professionals implement their own plans to try to reduce the likelihood of humans falling for attacks or otherwise causing damage. For the most part, these are preventative in nature. For example, a user cannot click on a phishing message if the message never gets to the user. For that reason, organizations acquire software that filters incoming email for potential attacks.
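To make the filtering idea concrete, here is a minimal, purely illustrative sketch of the kind of heuristics such software applies. This is not any vendor's actual product; the phrase list, weights, and threshold are all invented for the example, and real filters rely on far more sophisticated techniques such as sender reputation, machine learning, and attachment sandboxing.

```python
import re

# Hypothetical credential-harvesting phrases; real products use much larger,
# continuously updated corpora and statistical models.
SUSPICIOUS_PHRASES = ("verify your account", "urgent", "password expired")

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Return a heuristic score; higher means more phishing-like."""
    score = 0
    # Display name invokes a well-known brand, but the actual sending
    # domain does not match (a common spoofing pattern).
    match = re.match(r'"?([^"<]*)"?\s*<[^@]+@([^>]+)>', sender)
    if match:
        name, domain = match.group(1).lower(), match.group(2).lower()
        if "paypal" in name and "paypal.com" not in domain:
            score += 3
    # Urgency and credential-harvesting language in the subject or body.
    text = (subject + " " + body).lower()
    score += sum(2 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    # Links to raw IP addresses are rarely legitimate.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        score += 3
    return score

def quarantine(sender: str, subject: str, body: str, threshold: int = 4) -> bool:
    """Decide whether to hold the message back from the user's inbox."""
    return phishing_score(sender, subject, body) >= threshold
```

The point of the sketch is the prevention strategy itself: by quarantining a message before delivery, the filter removes the opportunity for the user to make the mistake at all, rather than relying on the user to recognize the attack.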