The Helsinki-based cybersecurity and privacy firm WithSecure, the Finnish Transport and Communications Agency, and the Finnish National Emergency Supply Agency collaborated on the report, according to an article by Cybernews on Thursday.
“Although AI-generated content has been used for social engineering purposes, AI techniques designed to direct campaigns, perform attack steps, or control malware logic have still not been observed in the wild,” said Andy Patel, WithSecure intelligence researcher.
Such “techniques will be first developed by well-resourced, highly-skilled adversaries, such as nation-state groups.”

The paper examined current trends and advancements in AI, cyberattacks, and the areas where the two intersect, and suggested that early adoption and evolution of preventative measures are key to countering the threats.
“After new AI techniques are developed by sophisticated adversaries, some will likely trickle down to less-skilled adversaries and become more prevalent in the threat landscape,” stated Patel.
The threat in the next five years
The authors assess that AI-based attacks are currently rare and largely limited to social engineering, though they may also be employed in ways that analysts and researchers cannot directly observe.
Most current AI disciplines do not come close to human-level intelligence and cannot autonomously plan or carry out cyberattacks.
However, attackers will likely create AI in the next five years that can autonomously identify vulnerabilities, plan and carry out attack campaigns, use stealth to avoid defenses, and gather or mine data from infected systems or open-source intelligence.
“AI-enabled attacks can be run faster, target more victims and find more attack vectors than conventional attacks because of the nature of intelligent automation and the fact that they replace typically manual tasks,” said the report.