Hackers Use AI in New AsyncRAT Malware with AI-Generated Code
Researchers from HP’s Wolf Security team have discovered malware in the wild whose code appears to have been written with generative AI, flagged while they were investigating a suspicious email. The discovery highlights how malware developers are using AI to speed up the writing of malicious code, making cyberattacks easier and faster to launch.
AI’s Role in Malware Development
The investigation revealed a variant of AsyncRAT, a remote access trojan (RAT) that lets an attacker control a victim’s computer from afar. While the original AsyncRAT was written by humans, this new version contains a code-injection method that appears to have been developed with generative AI.
According to HP’s report, AI’s involvement was evident in several characteristics of the code. For example, nearly every function in the malware was accompanied by a comment explaining what it did, a practice uncommon among cybercriminals, who generally try to obscure how their code works. Researchers also noted that the structure of the code and the choice of function and variable names suggested it had been generated by AI.
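To make that telltale sign concrete, here is a hypothetical, inert illustration of the stylistic contrast the researchers describe; it is not code from the actual sample, and the function names are invented for this sketch (shown in TypeScript for readability):

```typescript
// Style often associated with AI-generated output:
// descriptive identifiers and a comment explaining each function.

// Builds the full path to a temporary working file
function buildTempFilePath(fileName: string): string {
  const tempDir = process.env.TEMP ?? "/tmp";
  return `${tempDir}/${fileName}`;
}

// Style more typical of hand-written malicious scripts:
// terse, single-letter names and no explanatory comments.
function q(a: string): string {
  return (process.env.TEMP ?? "/tmp") + "/" + a;
}
```

Explanatory comments like the first example are routine in AI-assisted output but rare in hand-obfuscated scripts, which is why their presence stood out to the analysts.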
How the Malware Was Discovered
The malware was uncovered when HP’s Sure Click threat containment software flagged a suspicious email sent to one of its subscribers. Posing as a French-language invoice, the email carried a malicious file targeting French speakers. Inside the file, a Visual Basic Script (VBScript) wrote variables into the Windows registry and dropped a JavaScript file, which in turn ran a sequence of scripts to install AsyncRAT on the device.
Initially, the researchers could not determine the file’s purpose because the code was locked inside a password-protected, encrypted script. After cracking the password, they were able to decrypt the file and uncover the malware hidden inside.
AsyncRAT, released on GitHub in 2019, is described by its developers as a “legitimate open-source remote administration tool.” In practice, it has been widely adopted by cybercriminals to control infected devices remotely, often to steal sensitive information such as cryptocurrency private keys and seed phrases, losses that can translate directly into stolen funds.
The Rise of AI in Cyberattacks
While AsyncRAT itself is not new, this version’s AI-generated code signals a worrying trend: cybercriminals are increasingly using AI to create more sophisticated and harder-to-detect malware. HP’s report notes that generative AI is “lowering the bar” for hackers, making it easier for them to infect endpoints and execute attacks.
Potential Threats and Growing Concern
The rise of AI in the cybercriminal world is causing concern among cybersecurity experts. In 2023, some users discovered that ChatGPT could be used to uncover vulnerabilities in smart contracts. While such tools can be useful for ethical hackers, they also present risks by giving black hat hackers a powerful new tool to exploit weaknesses.
In May 2023, Meta released a report warning that malware operators were using fake AI programs as lures to attract victims. These cases show how bad actors are adapting to new technologies to enhance their malicious activities.
What This Means for the AI Industry
The growing use of AI in cyberattacks is a double-edged sword. While AI holds incredible potential for good, it is also being exploited for malicious purposes. The ability of cybercriminals to generate malware more efficiently using AI underscores the urgent need for stronger cybersecurity measures and vigilant monitoring of how these technologies are being used. As AI continues to evolve, it’s crucial for the industry to stay ahead of bad actors by developing more robust defenses to safeguard against AI-enabled threats.