Say what you will about cybercriminals, says Angela Sasse, “their victims rave about the customer service”.
Sasse is talking about ransomware: an extortion scheme in which hackers encrypt the data on a user's computer, then demand money for the digital key to unlock them. Victims get detailed, easy-to-follow instructions for the payment process (all major credit cards accepted), and how to use the key. If they run into technical difficulties, there are 24/7 call centres.
“It's better support than they get from their own Internet service providers,” says Sasse, a psychologist and computer scientist at University College London who heads the Research Institute in Science of Cyber Security. That, she adds, is today's cybersecurity challenge in a nutshell: “The attackers are so far ahead of the defenders, it worries me quite a lot.”
Long gone are the days when computer hacking was the domain of thrill-seeking teenagers and college students: since the mid-2000s, cyberattacks have become dramatically more sophisticated. Today, shadowy, state-sponsored groups launch exploits such as the 2014 hack of Sony Pictures Entertainment and the 2015 theft of millions of records from the US Office of Personnel Management, allegedly sponsored by North Korea and China, respectively. “Hacktivist” groups such as Anonymous carry out ideologically driven attacks on high-profile terrorists and celebrities. And a vast criminal underground traffics in everything from counterfeit Viagra to corporate espionage. By one estimate, cybercrime costs the global economy between US$375 billion and $575 billion each year.
Increasingly, researchers and security experts are realizing that they cannot meet this challenge just by building higher and stronger digital walls around everything. They have to look inside the walls, where human errors, such as choosing a weak password or clicking on a dodgy e-mail, are implicated in nearly one-quarter of all cybersecurity failures. They also have to look outwards, tracing the underground economy that supports the hackers and finding weak points that are vulnerable to counterattack.
“We've had too many computer scientists looking at cybersecurity, and not enough psychologists, economists and human-factors people,” says Douglas Maughan, head of cybersecurity research at the US Department of Homeland Security.
That is changing—fast. Maughan's agency and other US research funders have been increasing their spending on the human side of cybersecurity for the past five years or so. In February, as part of his fiscal-year 2017 budget request to Congress, US President Barack Obama proposed to spend more than $19 billion on federal cybersecurity funding — a 35% increase over the previous year — and included a research and development plan that, for the first time, makes human-factors research an explicit priority.
The same sort of thinking is taking root in other countries. In the United Kingdom, Sasse's institute has a multiyear, £3.8-million (US$5.5-million) grant from the UK government to study cybersecurity in businesses, governments and other organizations. Work from the social sciences is providing an unprecedented view of how cybercriminals organize their businesses—as well as better ways to help users to choose an uncrackable yet memorable password.
The fixes are not easy, says Sasse, but they're not impossible. “We've actually got good science on what does and doesn't work in changing habits,” she says. “Applying those ideas to cybersecurity is the frontier.”
Know your audience
Imagine that it is the peak of a harried work day, and a legitimate-looking e-mail lands in your inbox: the company's computer team has detected a security breach, it says, and everyone needs to run an immediate background scan for viruses on their machines. “There's a tendency to just click 'accept' without reading,” says Adam Joinson, a social psychologist who studies online behaviour at the University of Bath, UK. Yet the e-mail is a fake—and that hasty, exasperated click sends malware coursing through the company network to steal passwords and other data, and to convert everyone's computers into a zombie “botnet” that fires off more spam.
The attackers, it seems, have a much better grasp on user psychology than have the institutions meant to defend them. In the scenario above, the success of the attack relies on people's instinctive deference to authority and their lowered capacity for scepticism when they're busy and distracted. Companies, by contrast, tend to impose security rules that are disastrously out of sync with how people work. Take the ubiquitous password, by far the simplest and most common way for computer users to prove their identity. One study, released in 2014 by Sasse and others, found that employees of the US National Institute of Standards and Technology (NIST), headquartered in Gaithersburg, Maryland, averaged 23 “authentication events” per day—including repeated logins to their own computers, which locked them out after 15 minutes of inactivity.
Such demands represent a substantial drain on employees' time and mental energy—especially for those who try to follow the standard password guidelines. These insist that people use a different password for each application; avoid writing passwords down; change them regularly; and always use a hard-to-guess mix of symbols, numbers and uppercase and lowercase letters.
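In code, that "hard-to-guess mix" requirement usually boils down to a composition check along the following lines. This is a generic sketch of the rule described above, not any specific organization's actual policy, and the example strings are illustrative only:

```python
import string

def meets_composition_rules(pw: str, min_len: int = 8) -> bool:
    """Generic check for the 'mix of character classes' rule:
    at least one lowercase letter, one uppercase letter, one
    digit and one symbol (an illustrative sketch, not any
    real organization's policy)."""
    return (
        len(pw) >= min_len
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
    )

# Note the irony: a notoriously common choice sails through,
# while a long all-lowercase passphrase is rejected.
print(meets_composition_rules("P@ssw0rd!"))                     # True
print(meets_composition_rules("usingwoodensuccessfuloutline"))  # False
```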
So people resort to subversion. In another systematic study of password use in the real world, Sasse and her colleagues documented the ways in which workers at a large multinational organization side-stepped the official security requirements without (they hoped) being totally reckless. The employees' methods—writing down a list of passwords, for example, or transferring files between computers using unencrypted flash drives—would be familiar in most offices, but essentially created a system of 'shadow security' that kept the work flowing. “Most people's goal is not to be secure, but to get the job done,” says Ben Laurie, who studies security compliance at Google Research in London. “And if they have to jump through too many hoops, they will say, 'To hell with it.'”
Researchers have uncovered multiple ways to ease this impasse between workers and security managers. Lorrie Cranor directs the CyLab Usable Privacy and Security Laboratory at Carnegie Mellon University in Pittsburgh, Pennsylvania—one of several groups worldwide that are looking at ways to make password policies more human-compatible.
“We got started on this six or seven years ago, when Carnegie Mellon changed its password policy to something really complicated,” says Cranor, who is currently on leave from the university to serve as chief technologist at the US Federal Trade Commission in Washington DC. The university said that it was trying to conform to standard password guidelines from NIST. But when Cranor investigated, she found that these guidelines were based on educated guesses. There were no data to base them on, because no organization wanted to reveal its users' passwords, she says. “So we said, 'This is a research challenge.'”
Cranor and her colleagues put a wide range of password policies to the test by asking 470 computer users at Carnegie Mellon to generate new passwords based on different requirements for length and special symbols. Then they tested how strong the resulting passwords actually were, how much effort was required to create them, how easy they were to remember—and how annoyed at the system the participants became.
One key finding was that organizations should forget the standard advice that complex gobbledygook words such as 0s7G0*7j%x$a are safest. “It's easier for users to deal with password length than password complexity,” says Cranor. An example of a secure but user-friendly password might be a concatenation of four common but randomly chosen words—something like usingwoodensuccessfuloutline. At 28 characters, it is more than twice as long as the gibberish example, but much easier to remember. As long as the system guards against people making stupid choices such as passwordpassword, says Cranor, strings of words are quite hard for attackers to guess, and provide excellent security.
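A minimal sketch of such a generator, using Python's secrets module for cryptographically secure random choices. The 12-word vocabulary here is a stand-in assumption (the article does not specify a word list); a real generator would draw from a dictionary of a few thousand common words:

```python
import secrets

# Stand-in vocabulary; a real generator would use a dictionary
# of several thousand common words.
WORDS = [
    "using", "wooden", "successful", "outline", "river", "basket",
    "window", "purple", "engine", "market", "silver", "garden",
]

def passphrase(n_words: int = 4) -> str:
    """Concatenate n common words chosen uniformly at random,
    in the style of the four-word passwords described above."""
    return "".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. 'riverpurplebasketengine'
```

The security comes from the size of the search space: with a 2,000-word dictionary, four machine-chosen words yield 2,000^4, about 1.6 × 10^13 equally likely passphrases (roughly 44 bits of entropy). That is also why the words must be picked randomly by the machine rather than by the user.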
Time for a change
Another key finding, says Cranor, is that unless there is reason to think that the organization's security has been compromised, the standard practice of forcing users to change their passwords on a 30-, 60- or 90-day schedule ranks somewhere between useless and counterproductive (see go.nature.com/2vq6r4). For one thing, she says, studies show that most people respond to such demands by choosing a weaker password to begin with, so that they can remember it, and then making the smallest change that they can get away with. They might increase a final digit by one, for example, so that password2 becomes password3 and so on. “So if a hacker guesses your password once,” she says, “it won't take them many tries to guess it again.”
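That final point is easy to demonstrate. Here is a toy sketch, illustrative only and not any real cracking tool, of how an attacker who has seen one password could enumerate the "smallest change" variants Cranor describes:

```python
import re

def rotation_guesses(old: str, max_step: int = 5) -> list[str]:
    """Guess likely successors of a known password by incrementing
    a trailing digit, the minimal edit users tend to make under
    forced password-change schedules."""
    m = re.match(r"(.*?)(\d+)$", old)
    if m is None:
        # No trailing digit: a common fallback is appending one.
        return [old + str(i) for i in range(1, max_step + 1)]
    stem, num = m.group(1), int(m.group(2))
    return [f"{stem}{num + i}" for i in range(1, max_step + 1)]

print(rotation_guesses("password2"))
# ['password3', 'password4', 'password5', 'password6', 'password7']
```

Real attackers use far richer transformation rules, but even this few-line version captures why a rotated password offers little protection once one version of it leaks.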
Besides, she says, one of the first things that hackers do when they break in is to install a key-logging program or some other bit of malware that allows them to steal the new password and get in whenever they want. So again, says Cranor, “changing the password doesn't help”.
Sasse sees encouraging signs that such critiques are being heard. “For me, the milestone was last year when GCHQ changed its advice on passwords,” she says, referring to the Government Communications Headquarters, a key UK intelligence agency. GCHQ issued a public document, containing several citations to the research literature, that gave up on long-established practices such as demanding regular password changes, and instead urged managers to be as considerate as possible towards the people who have to live with their policies. “Users have a whole suite of passwords to manage, not just yours,” goes one bit of advice. “Only use passwords where they are really needed.”
Attack on attackers
If research can uncover weak points in user behaviour, perhaps it can also find vulnerabilities among the attackers.
In 2010, Stefan Savage, a computer scientist at the University of California, San Diego, and his team set up a cluster of computers to act as what he calls “the most gullible consumer ever”. The machines went through reams of spam e-mails collected from several major antispam companies, and clicked on every link they could find. The researchers focused on illegal pills, counterfeit watches and handbags, and pirated software—three of the product lines most frequently advertised in spam—and bought more than 100 items. Then they used specially designed web-crawling software to track back through the spammers' supply network. If an illicit vendor registered a domain name, made payments to a supplier or used a bank to accept credit-card payments, the researchers could see it. The study exposed, for the first time, the entire business structure of computer criminals—and revealed how surprisingly sophisticated it was.
“It was the ultimate hothouse of weird new entrepreneurial ideas,” says Savage, “the purest form of small-business capitalism imaginable—because there is no regulation.” Yet there was order, even so. “Say you have a criminal activity you want to engage in,” Savage explains—for example, selling counterfeit drugs. You set up shop by creating the website and the databases, striking a deal with a bank to accept credit-card payments and creating a customer-service arm to deal with complaints—all the back-end parts of the business.