
Given the wealth amassed by a number of ransomware gangs in recent years, it may not be long before attackers bring in their own AI experts, said prominent cybersecurity authority Mikko Hyppönen.

Some of these groups have so much money, or rather bitcoin, that they could now potentially compete with legitimate security firms for AI and machine learning talent, according to Hyppönen, research director at cybersecurity company WithSecure.

The Conti ransomware gang took in $182 million in ransom payments in 2021, according to blockchain data platform Chainalysis. Leaked Conti chats suggest the gang may have invested some of that haul in costly zero-day vulnerabilities and in hiring penetration testers.

“We have already seen [ransomware groups] hire pen testers to break into networks to figure out how to deploy ransomware. The next step will be that they start hiring ML and AI experts to automate their malware campaigns,” Hyppönen told Protocol.

“It is not a stretch to see that they will have the capacity to offer double or triple salaries to AI/ML experts in exchange for going over to the dark side,” he said. “I think it’s going to happen in the near future, if I have to guess, in the next 12 to 24 months.”

If that happens, Hyppönen said, “it would be one of the biggest challenges we’re likely to face in the near future.”

AI for scaling ransomware

While catastrophic cybersecurity predictions abound, Hyppönen, with three decades of experience fighting cybercrime, is not just any prognosticator. He has been with his current company, known until recently as F-Secure, since 1991, and has been researching and battling cybercriminals since the field’s early days.

In his view, the introduction of AI and machine learning on the attacker side would be a distinct game-changer. He’s not the only one to think so.

When it comes to ransomware, for example, automating large parts of the process could mean an even greater acceleration of attacks, said Mark Driver, research vice president at Gartner.

Currently, ransomware attacks are often highly tailored to the individual target, making them harder to scale, Driver said. Even so, the number of ransomware attacks doubled year over year in 2021, SonicWall reported, and ransomware is also increasingly paying off: the percentage of affected organizations that agreed to pay a ransom jumped to 58% in 2021, from 34% the previous year, Proofpoint reported.

However, if attackers could automate ransomware using AI and machine learning, it would allow them to attack an even wider range of targets, according to Driver. This could include smaller organizations, or even individuals.

“It’s not worth their effort if it takes them hours and hours to do it manually. But if they can automate it, absolutely,” Driver said. Ultimately, he said, “it’s terrifying.”

The prediction that AI will come to cybercrime in a big way is not new, but it has yet to materialize, Hyppönen said. The most likely reason is that, until now, attackers have not been able to compete with deep-pocketed enterprise technology vendors for the necessary talent.

The huge success of ransomware gangs in 2021, mostly Russian-affiliated groups, seems to have changed that, according to Hyppönen. Chainalysis reports that it tracked ransomware payments totaling $602 million in 2021, led by Conti’s $182 million. DarkSide, the ransomware group that hit the Colonial Pipeline, took in $82 million in 2021, and three other groups made more than $30 million in that single year, according to Chainalysis.

Hyppönen estimated that fewer than a dozen ransomware groups, mostly Russian-affiliated gangs, may have the ability to invest in hiring AI talent over the next few years.

“We certainly wouldn’t miss it”

If cybercrime groups hire AI talent with part of their windfall, Hyppönen thinks the first thing they’ll do is automate the more manual parts of a ransomware campaign. Actual execution of a ransomware attack remains difficult, he said.

“How do you get it on 10,000 computers? How do you find your way into corporate networks? How do you bypass the different protections? How do you keep changing the operation, dynamically, to make sure you’re successful?” Hyppönen said. “It’s all manual.”

Monitoring systems, modifying malicious code, recompiling it, and registering new domain names to evade defenses, all things that take humans a long time, would be fairly simple to automate. “All of this is done in an instant by machines,” Hyppönen said.

That means it should be very obvious when AI-powered automation comes to ransomware, according to Hyppönen.

“It would be such a big change,” he said. “We certainly wouldn’t miss it.”

But would ransomware groups really decide to go to all that trouble? Forrester analyst Allie Mellen said she wasn’t so sure. Given the success of ransomware groups, Mellen said it’s unclear why they would bother to go this route.

“They have no problem with the approaches they’re taking right now,” she said. “If it ain’t broke, don’t fix it.”

Others see a greater likelihood of AI playing a role in attacks such as ransomware. Like defenders, ransomware gangs have clearly shown a penchant for evolving their techniques to stay one step ahead of the other side, said Ed Bowen, managing director of the AI Center of Excellence at Deloitte.

“I expect that — I expect them to use AI to improve their ability to access that infrastructure,” Bowen said. “I think it’s inevitable.”

Lowering the barrier to entry

While talent in AI is extremely scarce right now, that will begin to change in the coming years as a wave of people graduate from college and research programs in the field, Bowen noted.

Barriers to entry into the AI field are also decreasing as the tools become more accessible to users, Hyppönen said.

“Today, all security companies rely heavily on machine learning, so we know exactly how difficult it is to hire experts in this area. People with expertise in both cybersecurity and machine learning are especially hard to hire,” he told Protocol. “However, it is becoming easier to become an expert, especially if you don’t need to be a world-class expert.”

This dynamic could increase the pool of candidates for cybercrime organizations that are both richer and “more powerful than ever before,” Hyppönen said.

If this future materializes, it would have massive implications for cyber defenders, likely bringing a greater volume of attacks and attacks against a wider range of targets.

Among other things, this would likely mean that the security industry itself would seek to compete for AI talent more than ever, if only to try to stay ahead of automated ransomware and other AI-powered threats.

Between attackers and defenders, “you’re always going above and beyond” on technical ability, Driver said. “It’s a war trying to get ahead of the other side.”
