How are AI AND RANSOMWARE impacting cybersecurity?

As businesses worldwide embrace AI and the potential it offers, cybercriminals are finding new ways to leverage it, mounting more persistent, harder-to-detect attacks that closely mimic human-led ones.

Cybercriminals use AI and ML, including malicious ChatGPT-style tools, to orchestrate phishing attacks that closely resemble messages from legitimate sources.

That email you received from Microsoft might just be an AI-generated message that a cybercriminal has crafted to phish you into handing over access to your account!

2024 has been a year in which more businesses than ever have adopted AI, but does the risk of AI-based threats outweigh its potential? Let’s explore!

How is AI cybercrime impacting businesses globally? 

In a recent publication, the UK’s National Cyber Security Centre (NCSC) warned about the impact of AI. Its assessment found that AI is one of the prime factors behind a global increase in the frequency and sophistication of cyber attacks. Ransomware attacks have increased by 9% quarter-on-quarter, with a growing number of ransomware groups expanding their target range (GuidePoint Security).

AI has enabled less-skilled, beginner-level hackers to mount relatively sophisticated attacks to gather sensitive information and gain access. The significant rise in AI-based ransomware attacks that are harder to predict and defend against is becoming a cause of concern among business leaders and cybersecurity experts.

In its CEO Outlook 2024 survey, KPMG found that 9 out of 10 leaders were worried about the threat AI poses and feared it would increase their vulnerability to data breaches.

Arize AI research (2024) highlights that more than 56% of Fortune 500 companies cite AI as a significant risk in their annual reports.


Learn how cybercriminals leverage ChatGPT to orchestrate phishing attacks  


For today’s businesses reliant on traditional security measures, defending against such attacks will only become more challenging.

Here is why: 

  • AI-based cyber attacks evade traditional security measures such as rule- and signature-based intrusion detection systems, firewalls, and anti-virus, rendering them ineffective

  • Cybercriminals are orchestrating social engineering attacks using readily available tools like ChatGPT. 53% of hackers use ChatGPT to tailor phishing emails (Statista)

  • They are leveraging malicious AI-based tools like FraudGPT, available on the dark web for around $200, to orchestrate sophisticated phishing attacks and generate advanced attack tooling.

  • Hackers use AI-generated deepfake content to impersonate legitimate people and lure victims into divulging their personal and financial information.

  • Hackers use advanced AI-based brute-forcing tools to crack passwords and weak encryption with faster, more precise exploits

  • AI-based automated ransomware attacks are on the rise, using never-before-seen techniques, tactics, and tools, delivered via AI-crafted phishing campaigns.

  • For example, an attacker can leverage AI to automate the identification and exploitation of infrastructure vulnerabilities, data extraction, and the encryption of sensitive information assets.
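To see why rule- and signature-based tools struggle here, consider a minimal, hypothetical sketch (not any real product's detection logic): a signature engine flags only payloads whose hash is already in its database, so even a one-byte AI-driven rewrite slips past it.

```python
import hashlib

# Hypothetical signature database: hashes of known-bad payloads.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"classic phishing payload v1").hexdigest(),
}

def signature_match(payload: bytes) -> bool:
    """Flag a payload only if its exact hash appears in the signature DB."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

original = b"classic phishing payload v1"
mutated = b"classic phishing payload v1 "  # trivially rewritten variant

print(signature_match(original))  # True  -> caught by the signature
print(signature_match(mutated))   # False -> the variant evades detection
```

This is why attackers who can cheaply generate endless variants of an attack render exact-match defenses ineffective.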

To combat the rising threat of AI-based cybercrime, cybersecurity experts advise organizations to adopt a blend of AI and human expertise, leveraging AI and ML to detect advanced threats based on suspicious behaviour rather than known signatures alone.
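Behaviour-based detection of this kind can be sketched in miniature. The example below is a deliberately simple, hypothetical illustration (a z-score test over per-user event rates, not any vendor's ML pipeline): it flags activity that deviates sharply from a historical baseline, even if the activity matches no known signature.

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag `value` if it lies more than `threshold` standard deviations
    from the historical baseline (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical baseline: login attempts per hour for one user account.
baseline = [3, 5, 4, 6, 5, 4, 5, 3, 4, 5]

print(is_anomalous(baseline, 5))    # False -> normal hourly rate
print(is_anomalous(baseline, 250))  # True  -> burst typical of automation
```

Production systems use far richer features and models, but the principle is the same: a previously unseen, AI-crafted attack still has to behave abnormally somewhere, and that behaviour is what gets flagged.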

For more, visit SharkStriker.

