Cybercriminals looking to abuse the power of generative AI to build phishing campaigns and sophisticated malware can now purchase easy access on underground marketplaces, where threat actors put large numbers of stolen gen AI credentials up for sale every day.
Hackers are selling usernames and passwords of approximately 400 individual gen AI accounts per day, according to an eSentire study.
“Cybercriminals are advertising the credentials on popular Russian underground markets, which specialize in everything from malware to infostealers to crypters,” said eSentire researchers in the report. “Many of the gen AI credentials are stolen from corporate end-users’ computers when they get infected with an infostealer.”
Stealer logs, which contain all the information an infostealer harvests from a victim's machine, including gen AI credentials, currently sell for $10 each on underground markets.
LLM Paradise is among the most used
Researchers said one of the most prominent underground markets found to facilitate the exchange of gen AI credentials was LLM Paradise.
“The threat actor running this market had a knack for marketing jargon, naming their store LLM Paradise and touting stolen GPT-4 and Claude API keys with ads reading: ‘The Only Place to get GPT-4 APIKEYS for unbeatable prices,’” researchers said.
The threat actor advertised GPT-4 and Claude API keys starting at just $15 each; for comparison, legitimate prices for various OpenAI models run between $5 and $30 per million tokens used, the researchers added.
LLM Paradise couldn’t sustain itself and, for unknown reasons, recently shut down. However, threat actors worked around the closure: ads for stolen GPT-4 API keys published on TikTok before the marketplace was shuttered are still live.
Beyond GPT-4 and Claude API keys, other credentials put up for sale on LLM Paradise-like marketplaces include those for Quillbot, Notion, Hugging Face, and Replit.
Credentials can be used for phishing, malware and breaches
eSentire researchers said the stolen credentials have greater value in the hands of cybercriminals for their multifold returns. “Threat actors are using popular AI platforms to create convincing phishing campaigns, develop sophisticated malware, and produce chatbots for their underground forums,” they said.
Additionally, the credentials can be used to access an organization’s corporate gen AI accounts, which in turn can expose customers’ personal and financial information, proprietary intellectual property, and personally identifiable information.
The hacked credentials can also grant access to data restricted to corporate customers, affecting gen AI platform providers as well. OpenAI was the most affected, with more than 200 OpenAI credentials posted for sale per day.
Corporate users can take several steps to defend against gen AI credential theft: regularly monitor employees’ gen AI usage, press gen AI providers to implement WebAuthn with MFA options, follow passkey or password best practices for gen AI authentication, and use dark web monitoring services to identify stolen credentials.
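As a complement to dark web monitoring, security teams sometimes scan internal repositories and logs for exposed gen AI API keys before attackers find them. The sketch below is a minimal, hypothetical example of such a scanner; the regex patterns are illustrative assumptions about common key prefixes (e.g. OpenAI keys starting with "sk-"), not official format specifications, and real scanners should use vendor-published detection rules.

```python
import re

# Illustrative patterns for gen AI API keys. These prefixes and lengths
# are assumptions for demonstration, not authoritative key formats.
KEY_PATTERNS = {
    # Checked first so an "sk-ant-" key is not misattributed.
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "openai": re.compile(r"sk-[A-Za-z0-9]{20,}"),
}


def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (provider, matched_key) pairs found in the given text."""
    findings = []
    for provider, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((provider, match.group()))
    return findings


if __name__ == "__main__":
    sample = "debug log: OPENAI_API_KEY=sk-" + "a" * 24
    for provider, key in scan_text(sample):
        # Report only a short prefix to avoid re-exposing the secret.
        print(f"possible {provider} key found: {key[:8]}...")
```

In practice such a scan would run in CI or over log pipelines, with any hit triggering immediate key rotation rather than just a report.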