OpenAI claims to have foiled China-backed election interference, phishing attacks
In 2024 so far, OpenAI has tackled over 20 cases of large-scale attempts to use its AI models to sow election disinformation. The company says its security measures also blocked a phishing attack on its staff by a suspected China-based group
OpenAI has revealed that it successfully thwarted a phishing attempt, allegedly carried out by a group with ties to China, sparking fresh concerns about cyber threats from Beijing aimed at top US artificial intelligence (AI) companies.
The AI giant shared that a suspected China-based group, known as SweetSpecter, attempted to target its staff earlier this year, posing as a user of its chatbot, ChatGPT, to initiate the attack.
Phishing attempts blocked by security systems
SweetSpecter reportedly sent emails to OpenAI’s employees, disguised as customer support messages, which contained malware attachments. If opened, these attachments would have allowed the attackers to take screenshots and extract sensitive data.
However, OpenAI’s security measures proved effective in blocking the attack. The company confirmed that its security team quickly reached out to the employees thought to be targeted and found that the emails were stopped before they could land in corporate inboxes.
OpenAI’s rising cybersecurity risks
This incident has reignited concerns over the vulnerability of leading AI firms, especially as the US and China remain locked in a tense rivalry over AI development and dominance.
Earlier in the year, another notable case saw a former Google engineer charged with stealing AI trade secrets for a Chinese company.
Despite repeated accusations from the US, China has consistently denied involvement in cyberattacks, accusing external forces of waging smear campaigns against the country.
Wider influence operations revealed
OpenAI recently revealed some unsettling details in its latest threat intelligence report, shedding light on the misuse of its AI models in phishing attempts and cybercrime.
The company has been tackling a variety of global threats, including shutting down accounts linked to groups in China and Iran. These groups had been using its AI models for coding assistance and research, among other things.
The report highlights the growing cybersecurity challenges AI companies face in today’s fast-paced tech environment. With the global race for AI dominance heating up, incidents of misuse are becoming more frequent.
In 2024 so far, OpenAI has tackled over 20 cases of large-scale attempts to use its AI models to sow election disinformation. Notable incidents included shutting down accounts producing fake content related to the US elections and banning accounts in Rwanda that were involved in election-related activity on social media. OpenAI, which is backed by Microsoft, is keenly aware of the risks and is stepping up its efforts to curb such misuse.