Phishing attacks exploit cognitive biases, research finds

Cybercriminals are crafting personalized social engineering attacks that exploit cognitive bias, according to a new report from SecurityAdvisor, which uses machine learning to customize security awareness training for individual employees.

Cognitive bias refers to the mental shortcuts humans subconsciously take when processing and interpreting information before making decisions. These shortcuts simplify information processing to speed up decision-making, and phishing attacks can exploit them effectively, SecurityAdvisor CEO Sai Venkataraman told VentureBeat. Cybercriminals manipulate a recipient's thoughts and actions to convince that person to engage in risky behavior, such as clicking on a link they normally wouldn't click on or entering sensitive information on a website.

Enterprise security teams usually rely on security awareness programs to train employees to recognize attacks so that they won't be tricked. However, traditional security awareness programs rarely account for the role cognitive biases play in these situations, nor do they typically consider people's roles or past behavior. As a result, the training concepts weren't sticking: the data showed that 5% of users accounted for 90% of security incidents, Venkataraman said.

SecurityAdvisor isn't the only one saying traditional security awareness training and phishing simulations have a limited ability to protect organizations. A recent Cyentia Institute study found that security training resulted in slightly lower click rates in phishing simulations but had no significant effect at the organizational level or against real-world attacks. The report, commissioned by Elevate Security, examined malware, phishing, email security, and other real-world attack data and found that piling on simulations and training can even be counterproductive, with heavily trained users clicking malicious links more often than those with little or no training. Some 11% of users who had completed a single training session clicked on a phishing link, compared with 14% of users who had completed five sessions, according to Cyentia's analysis.

Understanding cognitive bias

Phishing works because people filter what they see through their experiences and preferences, and these influence the choices they make. Cognitive biases take many forms, but SecurityAdvisor’s research identified five major types used in phishing attacks: halo effect, hyperbolic discounting, curiosity effect, recency effect, and authority bias.

The halo effect, in which a positive impression of a person, brand, or product colors the recipient's judgment, is the type most commonly used by cybercriminals, appearing in 29% of phishing attacks. In this type of attack, a cybercriminal pretends to be a trusted entity to gain access. Cybercriminals targeting C-suite executives may send fake speaking invitations from reputable universities and organizations, for example.

Hyperbolic discounting, the tendency to choose a reward that arrives immediately over an equivalent one that arrives later, appeared in 28% of the phishing attacks SecurityAdvisor analyzed. This can take the form of clicking on a link to get $100 off a MacBook Air, for example. Spammers have long used this tactic to lure victims with promises of free or exclusive deals.

The curiosity effect, the desire to resolve uncertainty, rounded out the top three, appearing in 17% of phishing attacks. In this kind of attack, a C-suite executive may receive information about exclusive access to an unnamed golf event, and the desire to know more could make the executive more susceptible. IT teams may see phishing emails focused on things they are concerned about, such as securing the remote workforce or top trends in data analytics.

The recency effect takes advantage of the tendency to remember recent events, such as using information about COVID-19 vaccinations in the subject lines of phishing emails. And finally, the authority bias is based on people’s willingness to defer to the opinions of an authority figure. An attacker using authority bias may impersonate a senior manager or even the CEO.

In organizations with "control-based cultures," for example, the authority bias means people are less likely to question an email that appears to come from the CFO instructing them to pay an invoice, Venkataraman said.
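
To make the taxonomy concrete, here is a minimal sketch of how a rules-based tagger might flag which bias a phishing lure leans on. The cue phrases, the BIAS_CUES table, and the tag_biases function are all hypothetical illustrations, not SecurityAdvisor's detection logic, which the company has not published; a real system would rely on trained classifiers rather than keyword matching.

```python
# Illustrative sketch only: a keyword-cue tagger for the five bias types
# described above. Cue lists and names are hypothetical, not
# SecurityAdvisor's method.

BIAS_CUES = {
    "halo effect": ["invitation to speak", "on behalf of", "your account team"],
    "hyperbolic discounting": ["$100 off", "today only", "claim your reward"],
    "curiosity effect": ["exclusive access", "see who viewed", "you were mentioned"],
    "recency effect": ["covid-19 vaccination", "breaking", "new policy update"],
    "authority bias": ["from the ceo", "cfo", "pay this invoice", "compliance required"],
}

def tag_biases(message: str) -> list[str]:
    """Return the bias types whose cue phrases appear in the message text."""
    text = message.lower()
    return [bias for bias, cues in BIAS_CUES.items()
            if any(cue in text for cue in cues)]

# Example: a lure that stacks urgency on top of a spoofed executive request.
lure = "From the CEO: pay this invoice today only to lock in $100 off"
print(tag_biases(lure))  # ['hyperbolic discounting', 'authority bias']
```

As the example shows, a single lure often combines several biases at once, which is consistent with the overlapping percentages SecurityAdvisor reports.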

SecurityAdvisor found that C-suite executives are targeted 50 times more often than regular employees, followed by people on IT security teams, who are targeted 43.5 times more often. The biases used also differ. Cybercriminals targeting C-suite executives tend to employ the halo effect or the curiosity effect, while the majority of scams against IT security teams employed the curiosity effect. There were industry-specific differences as well. People in the health care industry were more likely to see scams employing authority bias, the recency effect, and loss aversion, while retail employees were more likely to be targeted with the halo effect, the curiosity effect, and hyperbolic discounting. Financial services employees were likely to see phishing messages employing the halo effect to appear as if they came from regulators and vendors, or authority bias to appear as if they were sent by the CEO or tax authorities.

Changing security awareness training

Technology can go only so far when it comes to filtering out these attack messages because they are designed to look legitimate. But training employees to simply not fall for these attacks is not the answer either. Instead, the goal is to help mitigate risky behaviors. One way to counter the effects of cognitive biases is to help employees recognize tricks as they are being used. Machine learning can help change individual employee behavior by delivering reminders to apply this knowledge at the exact moment of risk, Venkataraman said.

SecurityAdvisor's platform fortifies people against these biases with "just-in-time" nudges, such as showing a quick refresher video when the platform detects that a user has been targeted in an attack. The key message with these nudges is to remind employees that they are part of the organization's security infrastructure, Venkataraman said. Instead of saying that humans are the weakest link in corporate security, "we wanted to say humans are the strongest part of the security community."
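
A minimal sketch of what such a just-in-time nudge loop might look like appears below. SecurityAdvisor has not published its implementation, so the event source, the NUDGES table, and the send_nudge delivery hook are invented for illustration under the assumption that an upstream detector has already tagged the bias in play.

```python
# Minimal sketch of a just-in-time nudge loop. The event type and delivery
# channel are hypothetical stand-ins, not SecurityAdvisor's published code.

from dataclasses import dataclass

@dataclass
class RiskEvent:
    user: str
    bias: str  # e.g. "authority bias", tagged by an upstream detector

# Short, bias-specific refreshers shown at the moment of risk.
NUDGES = {
    "halo effect": "Verify the sender really is who they claim to be before replying.",
    "hyperbolic discounting": "Deals that expire in minutes are a classic lure. Slow down.",
    "curiosity effect": "If you can't verify the event or offer, don't click to find out.",
    "recency effect": "News-themed emails are common bait. Check the sender's domain.",
    "authority bias": "Requests 'from the CEO' to pay or buy something deserve a phone call first.",
}

def send_nudge(user: str, text: str) -> None:
    """Hypothetical delivery hook: a chat message, popup, or short video link."""
    print(f"[nudge -> {user}] {text}")

def handle(event: RiskEvent) -> None:
    # Deliver the reminder while the risky moment is still fresh.
    fallback = "Treat unexpected requests and links with suspicion."
    send_nudge(event.user, NUDGES.get(event.bias, fallback))

handle(RiskEvent(user="alice@example.com", bias="authority bias"))
```

The design point the sketch captures is timing: the reminder is tied to a detected risk event for a specific user rather than delivered on a fixed training schedule.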
