Google warns of surge in generative AI-enhanced attacks, zero-day exploit use in 2024

Phishing scams and other cyber attacks will become harder to detect, warns Google

Michael Hill
11/15/2023


Cyber criminals will use generative AI and large language models (LLMs) to significantly enhance the effectiveness and scale of social engineering attacks in 2024, Google Cloud has warned. Attackers will deploy the technology in phishing, SMS, and other social engineering operations to make their content and lures appear more legitimate. Meanwhile, both nation-state attackers and cyber criminal groups will make greater use of zero-day exploits to maintain persistent access to target environments. That’s according to the Google Cloud Cybersecurity Forecast 2024 report, compiled by the firm’s security leaders and experts across numerous teams, including Mandiant Intelligence, Mandiant Consulting, Chronicle Security Operations, Google Cloud’s Office of the CISO, and VirusTotal.

The warning comes as the impact of generative AI and LLMs on the cyber security landscape continues to make headlines, with the rapid growth and adoption of the technology challenging business technology leaders to keep potential cyber security issues in check. In August, research from Deep Instinct revealed that threat actors’ use of generative AI has fueled a significant rise in attacks worldwide during the last 12 months.

Last month, the Biden-Harris Administration announced plans to establish the U.S. Artificial Intelligence Safety Institute (USAISI) to lead the government’s efforts on AI safety and trust – particularly in evaluating the most advanced AI models, such as generative AI. The USAISI will facilitate the development of standards for the safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.

Generative AI will eliminate the misspellings, grammar errors that give phishing emails away

Generative AI’s ability to make fraudulent content appear more legitimate means the usual giveaways – misspellings, grammar errors, and a lack of cultural context – will be far less common in phishing emails and messages, Google Cloud wrote. “LLMs will be able to translate and clean up translations too, making it even harder for users to spot phishing based on the verbiage itself.” LLMs will also allow an attacker to feed in legitimate content and generate a modified version that looks, flows, and reads like the original but suits the attacker’s goals, Google Cloud added.
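
The practical effect on defenders is easy to see. Below is a minimal sketch – our illustration, not something from the report – of the kind of spelling-based heuristic that LLM-polished lures defeat; the wordlist and the two sample messages are invented for the example.

```python
# Illustrative sketch only -- not from Google Cloud's report. A naive
# phishing heuristic that flags messages by their misspelling ratio,
# the class of signal the forecast expects LLM-polished text to defeat.
import re

# Tiny stand-in wordlist; a real filter would use a full dictionary.
DICTIONARY = {
    "dear", "customer", "we", "detected", "unusual", "activity",
    "on", "your", "account", "please", "verify", "it", "you",
    "immediately", "to", "avoid", "suspension",
}

def misspelling_ratio(text: str) -> float:
    """Fraction of alphabetic tokens not found in the wordlist."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in DICTIONARY)
    return unknown / len(words)

classic = "Dear custommer, we detected unusuall activty on you acount."
polished = "Dear customer, we detected unusual activity on your account."

print(f"classic:  {misspelling_ratio(classic):.2f}")   # ~0.44 -> flagged
print(f"polished: {misspelling_ratio(polished):.2f}")  # 0.00 -> passes
```

Both messages carry the same lure, but only the unpolished one trips the filter – which is the report’s point about spotting phishing “based on the verbiage itself.”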

“With generative AI, attackers will also be able to execute these campaigns at scale. If an attacker has access to names, organizations, job titles, departments, or even health data, they can now target a large set of people with very personal, tailored, convincing emails,” the tech giant said. A malicious LLM may not even be necessary to create these emails since there is nothing inherently malicious about, for example, using gen AI to draft an invoice reminder, it added.
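
To see why such personalization scales, consider a hypothetical sketch; the names, fields, and template below are invented for illustration, and no actual LLM call is made.

```python
# Hypothetical illustration -- not from the report. Given a structured
# target list, per-recipient tailoring reduces to a template loop; each
# resulting prompt looks benign on its own, which is the report's point
# that there is "nothing inherently malicious" in the request itself.
targets = [
    {"name": "A. Jensen", "org": "Example Corp", "role": "accounts payable clerk"},
    {"name": "B. Okafor", "org": "Example GmbH", "role": "HR coordinator"},
]

for t in targets:
    prompt = (
        f"Draft a polite invoice reminder addressed to {t['name']}, "
        f"{t['role']} at {t['org']}."
    )
    # Each prompt would be handed to an LLM to produce a fluent,
    # personalized message, one per target, at whatever scale the
    # attacker's data set allows.
    print(prompt)
```

Each generated prompt is indistinguishable from a legitimate business request; the malicious intent lives in the campaign, not in any single piece of content.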

Generative AI to increase scalability, productivity of cyber criminal activity

A generative AI prompt will be all attackers need to create fake news, fake phone calls that actively interact with recipients, and deepfake photos and videos based on AI-generated content, Google Cloud wrote. “We judge that such generative AI technologies have the potential to significantly augment information operations – and other operations such as intrusions – in the future, enabling threat actors with limited resources and capabilities, similar to the advantages provided by exploit frameworks such as Metasploit or Cobalt Strike.” Adversaries are already experimenting with generative AI, and this is only expected to increase over time, the firm added.

LLMs and other generative AI tools will increasingly be developed and offered as a service to assist attackers with target compromises, Google Cloud warned. “They will be offered in underground forums as a paid service, and used for various purposes such as phishing campaigns and spreading disinformation. We’ve already seen attackers have success with other underground as a service offerings, including ransomware used in cyber crime operations.”

Continued use of zero-day vulnerabilities by nation-state attackers, cyber criminals

The teams behind Google Cloud’s report have observed a general increase in zero-day vulnerability use since 2012, with 2023 on track to beat the previous record set in 2021. “We expect to see more zero-day use in 2024 by both nation-state attackers as well as cyber criminal groups. One of the reasons for this is that attackers want to maintain persistent access to the environment for as long as possible, and by exploiting zero-day vulnerabilities (as well as edge devices), they’re able to maintain access to an environment for much longer than if they were to, for example, send a phishing email and then deploy malware,” Google Cloud wrote.

Security teams and solutions have become much better at identifying malicious phishing emails and malware, so attackers will turn to other avenues to evade detection, it added. “Edge devices and virtualization software are particularly attractive to threat actors because they are challenging to monitor.” Cyber criminals know that using a zero-day vulnerability will increase the number of victims and, based on recent mass extortion events, the number of organizations that may pay high ransom or extortion demands, the firm added.

