A new generative artificial intelligence (AI) cybercrime tool called FraudGPT is being advertised on dark web marketplaces and Telegram channels. The tool is purpose-built for offensive operations such as crafting spear-phishing emails, creating cracking tools, and carding. According to Netenrich security researcher Rakesh Krishnan, FraudGPT has been circulating since at least July 22, 2023, at a subscription cost of $200 per month ($1,000 for six months, or $1,700 for a year).
The actor behind the tool, who goes by the alias CanadianKingpin, claims that FraudGPT offers a wide range of exclusive tools and capabilities with no limitations, and that it can be used to write malicious code, create undetectable malware, and find leaks and vulnerabilities. The listing also cites more than 3,000 confirmed sales and reviews, though the exact large language model (LLM) underlying the system remains unknown.
The emergence of FraudGPT follows a broader trend of threat actors repurposing AI tools such as OpenAI's ChatGPT to build restriction-free variants geared toward cybercriminal activity. Such tools not only advance the phishing-as-a-service model but also lower the barrier for inexperienced actors to mount convincing phishing and business email compromise (BEC) attacks, leading to the theft of sensitive data and unauthorized wire payments.
While organizations can build ethical safeguards into AI tools such as ChatGPT, it is relatively easy to reimplement the same technology without those guardrails. Adopting a defense-in-depth strategy backed by comprehensive security telemetry has therefore become crucial for detecting and mitigating fast-moving threats before they escalate into ransomware or data exfiltration.
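To make the telemetry point concrete, the sketch below shows the kind of simple header heuristics an email-security pipeline might apply to inbound mail to surface BEC indicators. This is a minimal illustration, not any vendor's actual detection logic; the indicator names, the keyword list, and the example message are all hypothetical.

```python
import re
from email import message_from_string
from email.utils import parseaddr

# Hypothetical keyword list for urgent-payment language often seen in BEC lures.
URGENCY_WORDS = re.compile(r"\b(urgent|immediately|wire transfer|payment)\b", re.I)

def score_message(raw: str) -> list[str]:
    """Return the heuristic indicators raised by a raw RFC 822 message."""
    msg = message_from_string(raw)
    indicators = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))

    # A Reply-To domain that differs from the From domain is a common BEC sign:
    # the reply silently goes to an attacker-controlled mailbox.
    if reply_addr and from_addr.split("@")[-1] != reply_addr.split("@")[-1]:
        indicators.append("reply-to-domain-mismatch")

    # Urgent payment language in the subject line.
    if URGENCY_WORDS.search(msg.get("Subject", "")):
        indicators.append("urgent-payment-language")

    return indicators

# Example message (fabricated for illustration).
raw = (
    "From: CEO <ceo@example.com>\n"
    "Reply-To: ceo@attacker.example\n"
    "Subject: Urgent wire transfer needed\n"
    "\n"
    "Please process this payment immediately.\n"
)
print(score_message(raw))  # → ['reply-to-domain-mismatch', 'urgent-payment-language']
```

In a real deployment these signals would feed a broader telemetry stream alongside endpoint and network data rather than trigger verdicts on their own.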