Maria Korolov
Contributing writer

9 ways hackers will use machine learning to launch attacks

Feature
Jun 13, 2022 | 8 mins
Cybercrime | Hacking | Machine Learning

Machine learning algorithms will improve security solutions, helping human analysts triage threats and close vulnerabilities more quickly. But they are also going to help threat actors launch bigger, more complex attacks.


Machine learning and artificial intelligence (AI) are becoming core technologies for some threat detection and response tools. The ability to learn on the fly and automatically adapt to changing cyberthreats gives security teams an advantage.

However, some threat actors are also using machine learning and AI to scale up their cyberattacks, evade security controls, and find new vulnerabilities, all at an unprecedented pace and with devastating results. Here are the nine most common ways attackers leverage these technologies.

1. Spam, spam, spam, spam

Defenders have been using machine learning to detect spam for decades, says Fernando Montenegro, analyst at Omdia. “Spam prevention is the best initial use case for machine learning,” he says.

If a spam filter provides reasons why an email message was blocked, or generates a score of some kind, the attacker can use that feedback to modify their behavior. They’d be using the legitimate tool to make their own attacks more successful. “If you submit stuff often enough, you could reconstruct what the model was, and then you can fine-tune your attack to bypass this model,” Montenegro says.
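
As a rough illustration of that feedback loop, here is a minimal sketch, assuming the filter exposes a block/deliver verdict the attacker can observe. The query_filter function is a made-up stand-in for that feedback channel, and scikit-learn is used to build a local surrogate of the remote model.

```python
# Minimal sketch: reconstruct an approximate copy of a spam filter from its
# observable verdicts, then tune a draft against the local copy offline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def query_filter(message: str) -> int:
    """Hypothetical stand-in for the real filter: 1 = blocked as spam, 0 = delivered."""
    spammy = ("free", "winner", "wire transfer", "urgent")
    return int(any(word in message.lower() for word in spammy))

# 1. Probe the filter with candidate messages and record its verdicts.
probes = [
    "You are a winner, claim your free prize now",
    "Urgent wire transfer needed today",
    "Agenda for Monday's project meeting",
    "Quarterly report attached for review",
    "Free trial ends soon, act now",
    "Lunch on Thursday?",
]
labels = [query_filter(p) for p in probes]

# 2. Train a local surrogate that approximates the remote filter.
vectorizer = TfidfVectorizer()
surrogate = LogisticRegression().fit(vectorizer.fit_transform(probes), labels)

# 3. Score a draft against the surrogate before ever sending it.
draft = "Urgent wire transfer needed for the project review"
spam_prob = surrogate.predict_proba(vectorizer.transform([draft]))[0][1]
print(f"surrogate spam probability: {spam_prob:.2f}")
```

In practice an attacker would need far more probes than this, but the principle is the same: any observable output becomes training data for a local copy of the model.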

It’s not just spam filters that are vulnerable. Any security vendor that provides a score or some other output could potentially be abused, Montenegro says. “Not all of them have this problem, but if you’re not careful, they’ll have a useful output that someone can use for malicious purposes.”

2. Better phishing emails

Attackers aren’t just using machine-learning security tools to test if their messages can get past spam filters. They’re also using machine learning to create those emails in the first place, says Adam Malone, EY partner, Technology Consulting. “They’re advertising the sale of these services on criminal forums. They’re using them to generate better phishing emails. To generate fake personas to drive fraud campaigns.”

These services are specifically being advertised as using machine learning, and it’s probably not just marketing. “The proof is in the pudding,” Malone says. “They’re definitely better.”

Machine learning allows attackers to customize phishing emails in creative ways so that they don’t show up as bulk emails and are optimized to trigger engagement — and clicks. They don’t stop at just the text of the email. AI can be used to generate realistic-looking photos, social media profiles, and other materials to make the communication seem as legitimate as possible.
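
On the optimization side, even a simple multi-armed bandit illustrates how engagement data can steer a campaign: it keeps sending the subject line that earns the most clicks while occasionally trying alternatives. The variants and numbers below are invented for the sketch.

```python
# Sketch: epsilon-greedy selection of a phishing subject-line variant based on
# observed click-through rates (all figures here are fabricated).
import random

variants = {
    "Invoice overdue - action required": {"sent": 120, "clicks": 18},
    "Updated holiday schedule attached": {"sent": 95, "clicks": 7},
    "Your mailbox is almost full": {"sent": 110, "clicks": 21},
}

def pick_variant(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-performing subject, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(variants))
    return max(variants, key=lambda v: variants[v]["clicks"] / variants[v]["sent"])

print("next subject to send:", pick_variant())
```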

3. Better password guessing

Criminals are also using machine learning to get better at guessing passwords, says Malone. “We’ve seen evidence of that based on the frequency and success rates of password guessing engines,” he says. Criminals are building better dictionaries for cracking stolen password hashes.
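
A minimal sketch of what such a guessing engine might look like follows, using a character-level Markov model trained on a tiny invented wordlist and checked against a toy “stolen” hash; real tooling works from corpora of millions of leaked passwords.

```python
# Toy sketch: learn character transitions from leaked passwords, generate
# candidate guesses, and compare their hashes against a stolen hash.
import hashlib
import random
from collections import defaultdict

leaked_wordlist = ["password1", "sunshine", "dragon99", "letmein", "summer2022"]
stolen_hashes = {hashlib.sha256(b"summer2020").hexdigest()}  # toy target

# Train: record which character tends to follow each character.
transitions = defaultdict(list)
for word in leaked_wordlist:
    for a, b in zip(word, word[1:]):
        transitions[a].append(b)

def generate_guess(max_len: int = 10) -> str:
    """Sample a candidate password from the learned transitions."""
    guess = random.choice([w[0] for w in leaked_wordlist])
    while len(guess) < max_len and transitions.get(guess[-1]):
        guess += random.choice(transitions[guess[-1]])
    return guess

random.seed(1)
for _ in range(20000):
    candidate = generate_guess()
    if hashlib.sha256(candidate.encode()).hexdigest() in stolen_hashes:
        print("cracked:", candidate)
        break
else:
    print("no match in this run")
```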

They’re also using machine learning to identify security controls, Malone says, “so they can make fewer attempts and guess better passwords and increase the chances that they’ll successfully gain access to a system.”

4. Deep fakes

The most frightening use of artificial intelligence is deep fake tools that can generate video or audio that is hard to distinguish from a real human. “Being able to simulate someone’s voice or face is very useful against humans,” says Montenegro. “If someone is pretending to sound like me, you might fall for it.”

In fact, a couple of high-profile cases have been made public over the last couple of years in which faked audio cost companies hundreds of thousands — or millions — of dollars. “People have been getting phone calls from their boss — that were fake,” says Murat Kantarcioglu, professor of computer science at the University of Texas.

More commonly, scammers are using AI to generate realistic-looking photos, user profiles, and phishing emails to make their messages seem more believable. It’s big business. According to the FBI, business email compromise scams have led to more than $43 billion in losses since 2016. Last fall, there were media reports that a bank in Hong Kong was duped into transferring $35 million to a criminal gang because a bank official received a call from a company director with whom he’d spoken before. He recognized the voice, so he authorized the transfer.

5. Neutralizing off-the-shelf security tools

Many popular security tools used today have some form of artificial intelligence or machine learning built in. Anti-virus tools, for example, are increasingly looking beyond the basic signatures for suspicious behaviors. “Anything available online, especially open source, could be leveraged by the bad guys,” says Kantarcioglu.

Attackers can use these tools, not to defend against attacks, but to tweak their malware until it can evade detection. “AI models have many blind spots,” Kantarcioglu says. “You might be able to change them by changing features of your attack, like how many packets you send, or which resources you’re attacking.”
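
A toy sketch of what probing those blind spots can look like follows; the detector and its features (packets per second, distinct resources touched) are entirely synthetic stand-ins, not any real security product.

```python
# Toy sketch: nudge the features of a flagged sample until a synthetic
# detector stops classifying it as malicious.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic training data: columns are [packets_per_sec, resources_touched].
benign = rng.normal([50, 5], [10, 2], size=(200, 2))
malicious = rng.normal([400, 40], [50, 5], size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)
detector = RandomForestClassifier(random_state=0).fit(X, y)

sample = np.array([[380.0, 35.0]])            # initially flagged as malicious
while detector.predict_proba(sample)[0][1] > 0.5:
    sample[0][0] *= 0.9                        # send fewer packets per second
    sample[0][1] -= 1                          # touch fewer distinct resources
print("slips past the toy detector at:", sample[0])
```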

It’s not just the AI-powered security tools that attackers are using. AI is part of a lot of different technologies. Consider, for example, that users often learn to spot phishing emails by looking for grammar mistakes. AI-powered grammar checkers like Grammarly can help attackers improve their writing.

6. Reconnaissance

Machine learning can be used for reconnaissance, so that attackers can look at their target’s traffic patterns, defenses, and potential vulnerabilities. This isn’t an easy thing to do, so it’s not likely to be something that the average cybercriminal would engage in. “You do need some skill sets to use AI,” says Kantarcioglu, “so I believe that it would be advanced state actors who will use these techniques.”
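
If such recon tooling exists, one plausible and much-simplified form is clustering scan results so that unusual hosts stand out; the addresses and port data below are made up for illustration.

```python
# Sketch: group hosts by which ports answered (1 = open), so outliers such as
# a lone RDP/SMB host among web servers become obvious recon targets.
import numpy as np
from sklearn.cluster import KMeans

# Columns correspond to ports 22, 80, 443, 3389, 445, 8080.
hosts = {
    "10.0.0.5":  [1, 1, 1, 0, 0, 0],
    "10.0.0.9":  [1, 1, 1, 0, 0, 0],
    "10.0.0.12": [1, 1, 1, 0, 0, 1],
    "10.0.0.40": [0, 0, 0, 1, 1, 0],   # does not look like the web servers
}
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.array(list(hosts.values()))
)
for host, label in zip(hosts, labels):
    print(host, "-> cluster", label)
```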

However, if, at some point, the technology is commercialized and provided as a service through the criminal underground, then it might become more widely accessible. It could also happen “if a nation-state threat actor developed a particular tool kit that used machine learning and released it to the criminal community,” says Allie Mellen, analyst at Forrester, “but the cybercriminals would still need some understanding of what the machine learning app was doing and how to harness it effectively, which creates a barrier to entry.”

7. Autonomous agents

If an enterprise notices that it’s under attack and shuts off internet access to affected systems, then malware might not be able to connect back to its command-and-control servers for instructions. “Attackers might want to come up with an intelligent model that will stay even if they can’t directly control it, for longer persistence,” says Kantarcioglu. “But for regular cybercrime, I believe that wouldn’t be super important.”

8. AI poisoning

An attacker can trick a machine learning model by feeding it new information. “The adversary manipulates the training data set,” says Alexey Rubtsov, senior research associate at Global Risk Institute. “For example, they intentionally bias it, and the machine learns the wrong way.”

For example, a hijacked user account can log into a system every day at 2 a.m. to do innocuous work, making the system think that there’s nothing suspicious about working at 2 a.m. and reducing the security hoops the user has to jump through.

This is similar to how Microsoft’s Tay chatbot was taught to be racist in 2016. The same approach can be used to teach a system that a particular type of malware is safe or that particular bot behaviors are completely normal.
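
A toy illustration of the 2 a.m. example follows, with a deliberately simple statistical check standing in for a real anomaly-detection model; the login hours are invented.

```python
# Toy sketch of data poisoning: a baseline learned from login history slowly
# absorbs repeated 2 a.m. logins until that hour no longer looks anomalous.
import numpy as np

def is_anomalous(hour: int, history: list[int]) -> bool:
    """Flag logins more than two standard deviations from the learned mean hour."""
    mean, std = np.mean(history), np.std(history)
    return abs(hour - mean) > 2 * std

clean_history = [9, 10, 9, 11, 10, 14, 15, 9, 10, 16] * 5    # normal office hours
print("2 a.m. flagged before poisoning:", is_anomalous(2, clean_history))

poisoned_history = clean_history + [2] * 30                   # weeks of 2 a.m. logins
print("2 a.m. flagged after poisoning: ", is_anomalous(2, poisoned_history))
```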

9. AI fuzzing

Legitimate software developers and penetration testers use fuzzing software to generate random sample inputs in an attempt to crash an application or find a vulnerability. The souped-up versions of this software use machine learning to generate the inputs in a more focused, organized way, prioritizing, say, text strings most likely to cause problems. That makes the fuzzing tools more useful to enterprises, but also more deadly in the hands of attackers.
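
A stripped-down sketch of the idea follows: a fuzzer that mutates inputs against a toy parser and learns which seeds are most likely to produce crashes, then focuses on those. Both the target and the mutation strategy are invented for illustration.

```python
# Sketch of learning-guided fuzzing: seeds that have produced crashes get a
# higher score and are therefore mutated more often.
import random

def toy_parser(data: str) -> None:
    """Stand-in target: chokes on a malformed length field."""
    if data.startswith("LEN:") and not data[4:].isdigit():
        raise ValueError("malformed length field")

def mutate(seed: str) -> str:
    pos = random.randrange(len(seed))
    return seed[:pos] + random.choice("A9:?") + seed[pos + 1:]

seeds = {"LEN:1234": 1.0, "HELLO WORLD": 1.0, "PING": 1.0}    # score per seed
random.seed(0)
crashes = []
for _ in range(2000):
    # Pick a seed with probability proportional to how productive it has been.
    seed = random.choices(list(seeds), weights=seeds.values())[0]
    candidate = mutate(seed)
    try:
        toy_parser(candidate)
    except ValueError:
        crashes.append(candidate)
        seeds[seed] += 1.0                                     # reward the seed
print(f"{len(crashes)} crashing inputs found; seed scores: {seeds}")
```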

All these techniques are a reason why basic cybersecurity hygiene such as patching, anti-phishing education and micro-segmentation continue to be vital. “And it’s one of the reasons why defense in depth is so important,” says Forrester’s Mellen. “You need to put up multiple roadblocks, not just the one thing that attackers end up using against you to their advantage.”

Lack of expertise keeps threat actor use of machine learning, AI low

Investing in machine learning takes a great deal of expertise, which is in short supply right now. Plus, there are usually simpler and easier ways for adversaries to break into enterprises since many vulnerabilities remain unpatched.

“There’s a lot of low-hanging fruit out there and a lot of other ways they can make money without necessarily using machine learning and AI to create attacks,” says Mellen. “In my experience, in the vast majority of instances, they’re not making use of it.” As enterprises improve their defenses, and criminals and nation-states continue to invest in their attacks, that balance might soon start to shift.