This is how ChatGPT is used to do evil: from social media espionage in China to the creation of scams and malware

OpenAI has once again taken action against the malicious use of its technology, blocking several accounts that used ChatGPT for illicit purposes. Among the most notable cases is an account originating in China that used the chatbot to design tools for mass surveillance of social networks. It's not the first time the company has disrupted similar efforts, but this latest quarterly report reveals a troubling array of malicious activity.

The company has detailed that one of the suspended accounts was using ChatGPT to develop promotional materials and project plans for a social media monitoring tool, work that, according to OpenAI, was supposedly being done for a government client. The tool could track platforms such as X, Facebook, Instagram and TikTok, searching for specific political, ethnic or religious content previously defined by the operator.

Monitoring of political and ethnic content

Although OpenAI has clarified that it cannot independently verify whether the tool was ultimately used by a Chinese government entity, the intention behind its creation is clear. In a separate but related case, the company also blocked an account that used ChatGPT to develop a system to help track the movements of people linked to the Uyghur ethnic group.

This discovery is especially sensitive given that China has long been accused of human rights violations against its Uyghur Muslim population. These cases demonstrate how language models can be adapted to create sophisticated control and surveillance tools.

But espionage is not the only misuse that OpenAI has detected. The report also reveals that Russian, Korean and Chinese-speaking developers have used ChatGPT to refine malware. It likewise describes entire networks in Cambodia, Myanmar and Nigeria that used the chatbot to help create large-scale scams and fraud. As a counterpoint, the company estimates that its artificial intelligence is used to detect scams three times more often than it is used to create them.

These discoveries add to the operations that OpenAI already disrupted over the summer in Iran, Russia and China, where accounts used ChatGPT to generate social media posts and comments as part of campaigns to influence public opinion.

These cases remind us that, for all the advantages ChatGPT and AI in general offer, they are also powerful tools for illegal or unethical activity. Unfortunately, with locally run AI models such as OpenAI's GPT-OSS, this kind of oversight of actual use is simply not possible.