A study led by researchers from Cambridge, Edinburgh, and Strathclyde found that artificial intelligence (AI) is not turning hackers into “supercharged hackers,” contrary to prior assumptions. Instead, the researchers report in a new academic paper that hackers are mostly using AI tools like ChatGPT to write spam and generate nude images.
The study, titled “Stand-Alone Complex or Vibercrime?,” was published on arXiv in March by Jack Hughes, Ben Collier, and Daniel Thomas. In the paper, the researchers examine how the “underground cybercrime” scene is actually adopting artificial intelligence, in contrast to the claims made by cybersecurity vendors.
“We present here one of the first attempts at a mixed-methods empirical study of early patterns of GenAI adoption in the cybercrime underground,” the researchers said.
The research team examined 97,895 forum threads published after the launch of ChatGPT in November 2022, sourced from the Cambridge Cybercrime Centre’s CrimeBB dataset, which covers underground and dark web forums. The team applied topic modeling techniques, closely analyzed 3,203 threads, and engaged ethnographically with the scene, meaning they interacted directly with the forum community.
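The paper does not name the exact topic-modeling pipeline the team used; as a minimal sketch of the general technique, the snippet below fits scikit-learn’s latent Dirichlet allocation (LDA) over a handful of hypothetical thread titles. The texts, the choice of three topics, and the vectorizer settings are all illustrative assumptions, not details from the study.

```python
# Minimal topic-modeling sketch using scikit-learn's LDA.
# Thread texts and the number of topics are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

threads = [
    "selling jailbroken gpt access cheap",
    "how to write a python keylogger",
    "chatgpt wrote my spam emails for me",
    "free wormgpt account request",
    "python script error help with requests library",
    "bulk email spam templates generated with ai",
]

# Bag-of-words counts, then an LDA model fitted over them.
vec = CountVectorizer()
X = vec.fit_transform(threads)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)  # shape: (n_threads, n_topics)

# Each row is a probability distribution over topics for one thread;
# the argmax gives that thread's dominant topic.
for text, dist in zip(threads, doc_topics):
    print(f"topic {dist.argmax()}: {text}")
```

In practice, the dominant-topic labels from a pass like this give researchers coarse buckets (e.g., “coding help,” “spam,” “account trading”) to select threads for closer manual reading.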
They found that 97.3% (95,292 of the 97,895 threads) of the sample was categorized as “other,” meaning the threads did not involve using AI for crime, while only 1.9% involved vibe coding tools.
In addition, posts about “Dark AI” products—like those typically advertised as jailbroken LLMs—focused on users requesting free access and expressing concerns and complaints about AI tools that don’t work. One developer of a well-known Dark AI service eventually admitted that the tool was a marketing strategy.
“Dark AI was the subject of substantial cybersecurity reportage in the press, along with marketing of the threat by cybersecurity companies. The tools are furthermore the subject of a large volume of requests for free access on the forums, including variously WormGPT and others, as well as AI products for penetration testing such as the open source project (now a commercial product) WhiteRabbitNeo,” the study read. “There is very little discussion in our dataset of how (or whether) these tools are proving useful, either for automation of elements of cybercrime crime scripts, for learning, or for malware and code development assistance.”
Another part of the research addressed Anthropic’s August 2025 report, which claimed Claude Code was used to run a “vibe hacking” extortion campaign against 17 organizations in healthcare, emergency services, government, and religious institutions. However, the Cambridge team’s data did not show that pattern across the wider underground.
In the forums studied, AI coding assistants were used much as mainstream developers use them: as autocomplete tools and replacements for Stack Overflow for already-skilled coders. The researchers noted that low-skilled individuals tend to rely on pre-made scripts instead, because those scripts are effective enough on their own.
“AI-assisted coding is a double-edged sword. It will speed up development but also amplifies risks such as insecure code and supply chain vulnerabilities,” one user said in a forum studied by the researchers.
“AI use…is not much different to how the hacker community was coding before, namely with criminal users largely re-using code made by others with minimal adaptation, and hacker forum users with a real interest in learning mostly using this for non-criminal software engineering projects (borne out in these discussions, where positive stories about LLM use for coding largely relate to people’s adoption in their legitimate day jobs or hobby projects),” the researchers explained.
Is AI actually helping criminals?
Another finding from the study showed that scammers are using LLMs to churn out spam and chase declining ad revenues. In particular, romance and “eWhoring” scammers are using AI voice cloning and image generation to trick victims out of their money.
The most disturbing market they found was in nude image generation services. One operator even advertised: “I’m able to make any girl nude with an AI… 1 Picture = $1, 10 Pictures = $8, 50 Pictures = $40, 90 Pictures $75.”
However, as the researchers remarked, none of this is sophisticated cybercrime; it is the same low-margin, high-volume work the spam industry has always done, now running on AI tools.
In closing, the researchers said that AI is not being used to cause widespread disruption in cybercrime. Instead, it is merely “replacing existing means of code pasting, error checking, and cheatsheet consultation, mostly for generic aspects of software development involved in cybercrime.”
The complete paper is available on arXiv.