Microsoft, OpenAI warn hackers are using AI large language models like ChatGPT to improve cyberattacks
Tech giants Microsoft and OpenAI have exposed how hackers are leveraging large language models such as ChatGPT to refine and enhance their cyberattacks.
Recent research jointly conducted by Microsoft and OpenAI has unveiled alarming attempts by groups affiliated with Russia, North Korea, Iran, and China to exploit tools like ChatGPT for reconnaissance, script enhancement, and the development of sophisticated social engineering tactics.
In a blog post released today, Microsoft highlighted, “Cybercrime syndicates, state-sponsored threat actors, and other adversaries are actively exploring the potential applications of emerging AI technologies to bolster their operations and evade security measures.”
The notorious Strontium group, linked to Russian military intelligence and also known as APT28 or Fancy Bear, has been identified using large language models (LLMs) to research satellite communication protocols, radar imaging technologies, and specific technical parameters. The group, infamous for its involvement in previous high-profile cyber incidents including the targeting of Hillary Clinton’s presidential campaign in 2016, has also turned to LLMs for basic scripting tasks such as file manipulation and data selection.