Data Breach Warning: Hackers Exploiting ChatGPT To Write Malicious Code To Steal Your Data
The first such instances of cybercriminals using ChatGPT to write malicious code have been spotted by Check Point Research (CPR) researchers.
New Delhi, Jan 8: Artificial intelligence (AI)-driven ChatGPT, which gives human-like answers to questions, is also being used by cybercriminals to develop malicious tools that can steal your data, a report has warned.
In underground hacking forums, threat actors are creating "infostealers" and encryption tools, and facilitating fraud activity.
The researchers warned of fast-growing interest in ChatGPT among cybercriminals looking to scale and teach malicious activity.
"Cybercriminals are finding ChatGPT attractive. In recent weeks, we're seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point," said Sergey Shykevich, Threat Intelligence Group Manager at Check Point.
Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes.
On December 29, a thread named "ChatGPT - Benefits of Malware" appeared on a popular underground hacking forum.
The author of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.
"While this individual could be a tech-oriented threat actor, these posts seemed to be demonstrating to less technically capable cybercriminals how to utilise ChatGPT for malicious purposes, with real examples they can immediately use," the report mentioned.
On December 21, a threat actor posted a Python script, which he emphasized was the "first script he ever created".
When another cybercriminal commented that the style of the code resembled OpenAI code, the hacker confirmed that OpenAI gave him a "nice (helping) hand to finish the script with a nice scope."
This could mean that potential cybercriminals with little to no development skills could leverage ChatGPT to develop malicious tools and become fully-fledged cybercriminals with technical capabilities, the report warned.
"Although the tools that we analyse are pretty basic, it's only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools," Shykevich said.
OpenAI, the developer behind ChatGPT, is reportedly trying to raise capital at a valuation of almost $30 billion.
Microsoft invested $1 billion in OpenAI and is now pushing ChatGPT applications for solving real-life problems.
(The above story first appeared on LatestLY on Jan 08, 2023 12:06 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).