Microsoft has said nation-state hackers are already utilising large language models such as OpenAI’s ChatGPT to refine and improve their cyberattacks.
Microsoft Threat Intelligence and OpenAI made the claim in respective blog posts, which said “malicious actors will sometimes try to abuse our tools to harm others, including in furtherance of cyber operations.”
“In partnership with Microsoft Threat Intelligence, we have disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities,” said OpenAI. “We also outline our approach to detect and disrupt such actors in order to promote information sharing and transparency regarding their activities.”
OpenAI and Microsoft then went on to identify particular hacking groups, saying the two companies had together disrupted five state-affiliated malicious actors.
These included two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard.
The identified OpenAI accounts associated with these actors were terminated.
OpenAI said that “although the capabilities of our current models for malicious cybersecurity tasks are limited, we believe it’s important to stay ahead of significant and evolving threats.”
It said that in response to the threat, OpenAI has taken a multi-pronged approach to combating malicious state-affiliated actors’ use of its platform. This includes monitoring and disrupting malicious state-affiliated actors; working with industry partners across the AI ecosystem; applying lessons learned to safety mitigations; and being publicly transparent about potential misuses of AI.
“The vast majority of people use our systems to help improve their daily lives, from virtual tutors for students to apps that can transcribe the world for people who are visually impaired,” said OpenAI. “As is the case with many other ecosystems, there are a handful of malicious actors that require sustained attention so that everyone else can continue to enjoy the benefits.”
“Although we work to minimise potential misuse by such actors, we will not be able to stop every instance,” it added. “But by continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else.”
China’s US embassy spokesperson Liu Pengyu told Reuters it opposed “groundless smears and accusations against China” and advocated for the “safe, reliable and controllable” deployment of AI technology to “enhance the common well-being of all mankind.”