Custodians looking to beat offenders in the GenAI cybersecurity battle

Generative AI (GenAI)-enabled threats, such as highly convincing phishing emails and morphed digital identities that accurately mimic human communication, are evolving in real time, outpacing existing security measures and challenging legacy defenses.

“The availability of large language models (LLMs) has significantly reduced the barrier to entry for threat actors, leading to an increase in large-scale attacks,” said Mir Kashifuddin, data risk and privacy leader at PwC US. “The rapidly evolving nature of GenAI allows threat actors to quickly design and iterate attack methods, necessitating increased agility to constantly adapt defenses designed to detect unusual activity.”

The technology, however, cuts both ways: it can also empower security teams by helping them rapidly build state-of-the-art defenses, such as an AI-driven anomaly detection system that can promptly identify and address irregular network activity.

The user-agnostic nature of the technology redefines its position in the continuously evolving cybersecurity battle.

Definitive adversary advantage

GenAI has undeniably advanced adversarial operations by providing sophisticated code for diverse attacks. Security analysts have published multiple proofs of concept (PoCs) showing how the technology can be manipulated into scripting malice, such as turning legitimate tools rogue, enabling living-off-the-land (LOTL) techniques, and creating highly evasive malware.

In addition to allowing cybercriminals to rapidly develop and refine their attack strategies, Kashifuddin pointed out, the technology enables threat actors to mimic the tactics of other groups, complicating the identification of perpetrators based on their toolsets. This, coupled with the widespread availability of GenAI tools, diminishes the effectiveness of traditional attribution methods.

In fact, a July 2023 report by IBM found that only one-third of breaches are detected by the affected organizations themselves, underscoring the need for improved threat detection mechanisms.

“Spear-phishing attacks are now very targeted and very realistic compared to how they once were,” said Chris Steffen, vice president of research at Enterprise Management Associates. “Using a GenAI tool, you can make a very convincing email that sounds and appears exactly like the spoofed author. Previously, those emails were often rejected immediately because they would never pass the ‘smell’ test – the spoofed author would never write something like that or so poorly. Now they can even use mannerisms and specific styles to craft something that can fool anyone at first look.”

GenAI tools themselves, apart from aiding threat actors, are prone to a new generation of attacks made possible by their very nature. “GenAI applications, like any IT system, are also targets of attacks,” said Augusto Barros, vice president and cybersecurity evangelist at Securonix. “Attackers can exploit AI models through various methods, such as data poisoning, adversarial attacks, model inversion, and the explainability gap.”

A compromised GenAI model can cause even more serious damage. Attacks that extract information from models trained on highly sensitive data (model inversion attacks) would cause damage comparable to other high-profile breaches and data leaks, Barros noted. “In places where GenAI output is being used for decision-making, the impact of incorrect decisions will be the direct result of potential attacks,” he added. “So far, damage and impact have been limited, ranging from simple embarrassing situations to some limited financial losses.”

GenAI’s ability to produce highly credible disinformation is another issue that can erode trust in legitimate data. This presents a significant threat to information integrity, potentially leading to financial and reputational damage for businesses.

Security gets a piece of the pie

This constant threat evolution necessitates that cybersecurity defenses remain equally agile and adaptive. GenAI, quite ironically, helps. By simulating potential attack scenarios, GenAI can help identify vulnerabilities before they can be exploited. IBM’s Watson for Cyber Security, for instance, uses GenAI to analyze vast amounts of data and predict potential threats.

Beyond scanning huge amounts of data for potential threats, the technology also comes in handy for sniffing out anomalies. By analyzing patterns and behaviors, GenAI can pinpoint suspicious activities, an ability demonstrated by Darktrace, a cybersecurity company that uses GenAI to understand normal network behavior and identify deviations.
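As a rough illustration of this baselining approach, the sketch below trains an off-the-shelf anomaly detector on traffic assumed to be benign and flags deviations. The features, values, and thresholds are illustrative assumptions, not any vendor's implementation.

```python
# Sketch only: learn a baseline of "normal" connection behavior, then flag
# deviations. Feature choices and values are assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: bytes sent, bytes received,
# duration in seconds, and distinct destination ports in the same window.
normal_traffic = rng.normal(loc=[5_000, 20_000, 30, 3],
                            scale=[1_500, 6_000, 10, 1],
                            size=(1_000, 4))

# Fit the baseline on traffic assumed to be benign.
baseline = IsolationForest(contamination=0.01, random_state=42)
baseline.fit(normal_traffic)

# Score new observations; a prediction of -1 marks a deviation worth triage.
new_events = np.array([
    [5_200, 21_000, 28, 3],       # resembles the baseline
    [900_000, 1_200, 300, 45],    # huge upload, many ports: possible exfiltration
])
for event, label in zip(new_events, baseline.predict(new_events)):
    print("ANOMALY" if label == -1 else "ok", event)
```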

“GenAI can efficiently handle many tasks typically performed by level-one security operations center (SOC) analysts,” Kashifuddin said. “This allows analysts to focus on more strategic approaches to cyber defense. GenAI can examine predefined detection rules used by SOC analysts, identify any gaps, and even discover new types of attacks that analysts may have missed. Additionally, GenAI can learn to recognize sophisticated spear-phishing attempts and detect patterns and anomalies that traditional signature-based detection systems might overlook.”
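To make the phishing-triage use case concrete, the sketch below asks an LLM to assess a suspicious email and return a structured verdict. It assumes the openai>=1.0 Python SDK and an API key in the environment; the prompt, model name, and output schema are assumptions for illustration, not PwC's method or any specific product.

```python
# Sketch only: ask an LLM to triage a suspected spear-phishing email and return
# a structured verdict. Assumes the openai>=1.0 SDK and OPENAI_API_KEY are set;
# the prompt, model name, and schema are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

def triage_email(email_text: str) -> dict:
    prompt = (
        "You are a SOC triage assistant. Assess whether the email below is "
        "likely spear-phishing. Respond only with JSON of the form "
        '{"verdict": "phish" | "benign", "confidence": 0-1, "indicators": [...]}.\n\n'
        "Email:\n" + email_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own deployment
        messages=[{"role": "user", "content": prompt}],
    )
    # In a real pipeline, validate the output before it drives any action.
    return json.loads(response.choices[0].message.content)

print(triage_email("Hi, it's your CFO. Wire $48,000 to this new vendor today."))
```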

GenAI can also play a crucial role in automating incident response. Barros believes incident investigation and response are the activities GenAI has improved the most so far. “During investigations, analysts receive and query multiple sources of information to get a clear picture of what is happening in their environment,” he said. “GenAI has been able to turn the data obtained from all those sources into a cohesive, easy-to-read, and understandable story, reducing the cognitive load on the analyst and speeding up the process of understanding the attack and its implications.”

Later, when the analyst is wrapping up the incident response process, it also helps in producing the required documentation, ingesting notes from all parties involved, and extracting data to generate summaries and reports, Barros added.
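A minimal sketch of that workflow: collate evidence from several telemetry sources into one prompt and ask an LLM to draft the incident narrative and closing summary. The source names, log lines, and prompt wording are hypothetical, and `complete` stands in for any text-generation call, such as the one sketched above.

```python
# Sketch only: merge evidence from several (hypothetical) telemetry sources into
# one prompt so an LLM can draft the incident narrative and closing report.
# `complete` stands in for any text-generation callable.
from typing import Callable

def draft_incident_summary(sources: dict[str, list[str]],
                           complete: Callable[[str], str]) -> str:
    evidence = "\n".join(f"[{source}] {line}"
                         for source, lines in sources.items()
                         for line in lines)
    prompt = (
        "You are assisting a SOC analyst closing an incident. Using only the "
        "evidence below, write a short chronological narrative of the attack, "
        "its likely impact, and the actions taken.\n\n" + evidence
    )
    return complete(prompt)

# Illustrative evidence gathered during an investigation.
sources = {
    "EDR":   ["10:02 powershell.exe spawned by outlook.exe on HR-LAPTOP-7"],
    "Proxy": ["10:03 HR-LAPTOP-7 uploaded 40 MB to an unfamiliar file-sharing domain"],
    "IAM":   ["10:05 impossible-travel sign-in for the same user from two countries"],
}
print(draft_incident_summary(sources, complete=lambda p: "<LLM-drafted narrative>"))
```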

Lastly, and perhaps most importantly, GenAI assists in creating more secure software by generating code that adheres to security best practices.

“Generative AI streamlines the development of new software code and documentation,” noted Carlos Morales, senior vice president of solutions at Vercara. “This has a positive effect on the speed of software innovation in cybersecurity solutions. Cybersecurity vendors also use it to generate simple algorithms that can then be pieced together to create new higher-level innovations, which is beneficial for cybersecurity teams at businesses that build out their own tools and orchestration solutions.”

Notably, IBM's 2023 report also found that organizations using extensive security AI and automation experience a breach lifecycle 108 days shorter than those without such technologies, leading to significant cost savings.

Moderation is the key

While the security community, along with the broader global technology community, seems united on regulating this new-age technology in some form, only a limited number of measures can actually be taken.

“There are two ways to combat attacks enabled by the widespread use of GenAI,” Kashifuddin said. “For internal threats, it comes down to deploying ‘cyber for GenAI’. For external threats, the use of ‘GenAI for cyber’ defense is becoming more of a reality and evolving quickly.”

The use of cyber for GenAI threats simply means applying fundamental controls to protect company resources from a GenAI-based attack, he explained. “Traditional data protection tools like data loss prevention (DLP) and cloud access security brokers (CASB), when used in conjunction with web proxies, amplify a company’s ability to detect and restrict exfiltration of sensitive data to external GenAI services.”
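The sketch below shows what such a proxy-side check might look like in its simplest form: flag outbound requests to external GenAI endpoints that appear to carry sensitive data. The domain list and regex patterns are illustrative assumptions, not a complete DLP policy or any vendor's product.

```python
# Sketch only: a proxy-side check that flags outbound requests carrying
# sensitive data to external GenAI endpoints. Domains and regexes are
# illustrative assumptions.
import re

GENAI_DOMAINS = {"api.openai.com", "generativelanguage.googleapis.com"}  # assumed examples
SENSITIVE_PATTERNS = {
    "ssn":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def should_block(destination_host: str, request_body: str) -> tuple[bool, list[str]]:
    """Block only when the destination is a GenAI service and the body matches a pattern."""
    if destination_host not in GENAI_DOMAINS:
        return False, []
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(request_body)]
    return bool(hits), hits

print(should_block("api.openai.com", "Summarize this: employee SSN 123-45-6789"))
# -> (True, ['ssn'])
```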

“GenAI for cyber” refers to a growing class of techniques that use GenAI to combat GenAI-enabled attacks. Beyond advanced phishing detection and automated incident response, this includes a set of newer methods for hardening models so they can withstand and neutralize adversarial activity.

“The discipline of protecting AI systems is just beginning to evolve, but there are some interesting techniques for that already,” Barros said. “Defending against these threats involves several strategies, such as adversarial training (exposing AI models to examples of potential attacks during training helps build resilience), explainable AI approaches (providing insights into the reasoning behind AI decisions, aiding in the identification of vulnerabilities and biases) and algorithm auditing (periodically testing AI models).”
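As a rough sketch of the adversarial-training strategy Barros mentions, the toy example below perturbs each training batch with the fast gradient sign method (FGSM) and trains on both clean and perturbed inputs. The model, data, and epsilon value are placeholder assumptions for illustration, not a production defense.

```python
# Sketch only: adversarial training with FGSM perturbations so the model sees
# attack-like inputs while learning. Architecture, data, and epsilon are
# placeholder assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1  # perturbation budget

def fgsm(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Craft an adversarial example by stepping along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

for step in range(100):
    x = torch.randn(32, 20)           # placeholder feature vectors
    y = torch.randint(0, 2, (32,))    # placeholder labels
    x_adv = fgsm(x, y)                # attack-like variants of the batch
    optimizer.zero_grad()             # clear gradients accumulated while crafting x_adv
    # Train on clean and adversarial examples together to build resilience.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```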

Waking up to the imminent threats both from and to this futuristic tool, several nations have already moved to regulate its use. China introduced interim measures in July 2023 that require service providers to submit security assessments and receive clearance before releasing mass-market AI products. The EU AI Act, finalized in May 2024, includes provisions on the use of GenAI. The US, Brazil, and Japan are other major countries that have either approved or are working on similar regulations.

In conclusion, GenAI stands as a double-edged sword in the field of cybersecurity. Its advanced capabilities offer potential to both threat actors and cybersecurity teams, leveling the playing field without granting an inherent advantage to either side.

As has always been the case, the true determinant of success will be how well each side harnesses this powerful tool to outmaneuver the other in the ongoing cybersecurity arms race. “Cybersecurity has always been an arms war,” said Morales. “That hasn’t changed. Generative AI is the accelerant that is ushering in a new era of growth in the race.”

