
OpenAI has taken decisive action against North Korean hacking groups by banning their access to the ChatGPT platform. These groups were reportedly using the AI tool to conduct research on potential targets and develop hacking strategies. This move is part of OpenAI's ongoing efforts to safeguard its platform from misuse by state-sponsored threat actors.
Identifying and Blocking Threat Actors
In its February 2025 threat intelligence report, OpenAI detailed the banning of accounts linked to North Korean threat actors. These accounts were associated with activities that matched the tactics, techniques, and procedures (TTPs) of known groups such as VELVET CHOLLIMA and STARDUST CHOLLIMA. The detection was made possible through collaboration with an industry partner.
Use of ChatGPT for Malicious Activities
The banned accounts were found to be leveraging ChatGPT for a variety of malicious purposes. These included researching cryptocurrency topics, a frequent target of North Korean hackers, and seeking coding assistance for open-source Remote Administration Tools (RATs). The actors also used the platform to debug and develop tooling for Remote Desktop Protocol (RDP) brute-force attacks. Two activities stood out:
- Staging Malicious Binaries: While using ChatGPT, the threat actors inadvertently disclosed staging URLs for malicious binaries not yet known to security vendors, intelligence that could then be shared to aid detection of these threats.
- Exploiting Vulnerabilities: The hackers asked about vulnerabilities in various applications, apparently to inform future attacks.
Broader Malicious Activities
OpenAI's investigation uncovered a range of other activities by the North Korean actors: developing a C#-based RDP client, requesting code to bypass security warnings, and crafting phishing campaigns aimed at cryptocurrency investors. The actors also sought social engineering techniques and ways to create obfuscated payloads for execution on victim machines.
North Korean IT Worker Scheme
OpenAI also identified accounts tied to the well-documented scheme in which North Korean IT workers pose as legitimate remote employees to generate income for the regime. These individuals used ChatGPT to perform job-related tasks and to craft cover stories that masked their true identities and activities.
Ongoing Threat Disruption
Since October 2024, OpenAI has disrupted multiple campaigns originating from China, including "Peer Review," which used ChatGPT to support development of a surveillance tool, and "Sponsored Discontent," which generated anti-American content. In total, OpenAI reports having thwarted more than twenty cyber operations linked to Iranian and Chinese actors since early 2024.