
Microsoft, OpenAI: Hackers Using ChatGPT for Cyberattack Improvements


Graphics by ASC

Hackers are leveraging ChatGPT’s AI technologies to enhance cyberattacks, according to recent findings by Microsoft and OpenAI.

In a joint research effort, the two tech giants have identified instances of malicious actors from various countries utilizing large language models (LLMs) for refining their tactics and improving their cyber offensive capabilities.

In a blog post today, Microsoft highlighted the emerging trend, stating, “Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent.”

The report outlined specific instances of hacker groups employing LLMs for various nefarious purposes. Notably, the Strontium group, associated with Russian military intelligence, has been observed using LLMs to better comprehend satellite communication protocols and radar imaging technologies. Additionally, they have been leveraging these models for basic scripting tasks aimed at automating or optimizing technical operations.

Meanwhile, North Korean hackers from the Thallium group have been utilizing LLMs to research vulnerabilities, aid in scripting tasks, and draft content for phishing campaigns. Iranian hackers from the Curium group, for their part, have been employing LLMs to generate phishing emails and develop code designed to evade detection by antivirus applications.

While no significant cyberattacks involving LLMs have been reported thus far, Microsoft and OpenAI are actively monitoring and addressing these threats.

The companies have been shutting down accounts and assets associated with these hacking groups. According to Microsoft, this research is crucial to exposing early-stage moves by threat actors and sharing information on how to counter them effectively.

Microsoft also raised concerns about potential future threats, such as AI-powered voice impersonation, emphasizing the need for proactive defense strategies. (GFB)
