
CISOs have to get on top of AI technologies, warns Microsoft

CISOs have to get on top of artificial intelligence technologies to defend their organizations, because threat actors are already using generative AI (genAI) to create malware, better phishing lures and deepfake videos, warns Microsoft.

The alert came last week as part of the company’s annual Digital Defense Report.

“The rise of generative AI applications poses a serious threat to organizations that haven’t implemented sufficient data governance controls,” the report says.

“We expect two diverging trends pertaining to AI-enabled cyber-threat actors and defenders,” the report says.

If organizations are early adopters of AI tools, they can use machine learning to rapidly ingest and infer evolving tactics, techniques, and procedures (TTPs), thus detecting and blocking malware and malicious code. But, the report warns, “hesitance to incorporate AI into defensive strategies will open a window of opportunity for threat actors to exploit gaps they identify with AI tools. This means the early AI adopters will enjoy a near-term advantage afforded by the nimbleness of AI.

“However, when it comes to AI-enabled human targeting [by threat actors], threats will be more difficult to detect and defend against—even with AI tools assisting defensive strategies.”

Lower barriers to entry

The main way AI has been changing the threat landscape is by lowering barriers to entry for the use of this advanced technology, says Ashley Jess, a Canadian-based senior threat analyst for Intel471. As a result, anyone with malicious intent but without technical skills finds it easier today to start exploring the tactics and techniques cybercriminals use for things like coding, social engineering, and deepfakes.

“Even just a couple of years ago, it used to be very expensive to do these things yourself,” she said in an interview. “You had to host it yourself, had to have an average of 500 source images to make a convincing deepfake. [But] as these tools become public and they improve and get cheap to use, it’s allowing more people to explore the technology.”

For example, she has seen threat actors offer specialized AI tools to the criminal market, including one that collects information about the success of social engineering lures and then uses AI to visualize which lures were the most effective. Those using the tool can learn how to get more victims to click on a malicious file.

“The social engineering space is the most alarming [use of AI by threat actors] because it’s so effective right now … It’s the simplest to do and most impactful in terms of victim count,” she said.

And, she added, “a lot of people are talking right now about where it can go in terms of malware coding, but it’s still very rudimentary about the code it can make.”

In its latest Influence and Cyber Operations report, OpenAI acknowledged that it has seen threat actors using its ChatGPT model to create malware. But, it added, so far, “we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences.”

In an email, John Hewie, Microsoft Canada’s national security officer, said there is increasing concern about how genAI will be used to accelerate threat actors’ ability to drive even more efficiency and speed into their attack ecosystem, including making ransomware operators even more productive. The Digital Defense Report shows a 2.75 times increase year over year in human-operated ransomware-linked encounters, he pointed out. 

“What’s more,” he added, “at a time when the world is managing an overwhelming influx of information delivered through both formal and informal channels, the issue of combating misinformation is becoming increasingly vital. At the beginning of this transformative technological era, AI’s impact on industries is significant, but its role in securing critical data and assets against growing cybersecurity threats is crucial.”

Many chief information security officers (CISOs) are aware of the potential AI offers in cybersecurity, he said, but whether they are fully prepared to embrace its possibilities depends on several factors.

“First, CISOs understand that AI can enhance threat detection, automate responses, and strengthen defenses. Tools that use machine learning to detect anomalies or predict threats are increasingly integrated into cybersecurity strategies,” he noted. “Many organizations are also already deploying AI-driven cybersecurity solutions. CISOs who have embraced these tools are taking steps to stay ahead of evolving threats by leveraging AI to enhance efficiency, reduce human error, and process large amounts of data faster than traditional systems.” 

On the other hand, he added, many CISOs don’t have the skills they need. “AI requires specialized knowledge to implement effectively, and many CISOs may not have a deep understanding of AI technologies, algorithms, or the data science needed to optimize these systems. Hiring or training AI specialists remains a challenge for many organizations.” 

AI also thrives on high-quality data, he said, but many organizations struggle to manage and provide the volume and calibre of data needed to train AI models effectively. That limits their ability to fully realize AI’s capabilities. And implementing AI in cybersecurity requires integrating it with legacy systems, which may not be AI-ready.

“To fully embrace the power of AI,” he said, “organizations will need to invest in training, tools, and robust strategies that address these barriers.” 

Nation-state threat actors among biggest users

The report says threat actors backed by Russia, China, and Iran are among the biggest users of AI in social media influence operations.

For example, the report says, China-linked threat actors are creating “sleek, compelling visual narratives” with AI, emphasizing discord in the US and criticizing the Biden administration. One group, dubbed Taizi Flood, generates virtual news anchors hosting stories on over 175 websites in 58 languages. Russian-affiliated threat actors create fully synthetic deepfake videos of prominent political figures, although so far these have been quickly exposed as fakes. But a fabricated video that appeared to come from a legitimate French broadcast, alleging Ukraine planned to assassinate the French president, gained traction.

Pro-Iranian groups so far are using AI sparingly, but are gradually increasing their use of AI-generated images and videos.

So far, AI-generated content from nation-state threat actors has had a limited effect, says the report. But, it adds, if this content is integrated into creative and multifaceted influence operations, AI “may prove to offer a significant capability in reaching and engaging audiences in the future.”

What should CISOs do?

The report also notes that AI systems have limitations.

As a system is built, developers should make a list of the ways in which it could potentially go wrong and develop a large test suite of example inputs that may trigger those outcomes, the report says. There should also be lists of intended and “uncommon” inputs to the system. Then, using a test framework, these lists can be run against the system in bulk, with generative AI once again helping efficiently evaluate the outputs for correctness. These tests can be re-run whenever the system is updated, the report says, much like ordinary integration tests.
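The report's bulk-testing approach can be sketched in a few lines. This is a minimal illustration, not an implementation from the report: the `classify()` function is a hypothetical stand-in for the AI system under test, and the test cases are invented examples of intended, uncommon, and should-fail-safe inputs.

```python
def classify(prompt: str) -> str:
    """Hypothetical stand-in for the AI system under test."""
    return "refuse" if "password" in prompt.lower() else "answer"


# Lists of intended, uncommon, and known-bad inputs, each paired with
# the outcome the developers expect (illustrative examples only).
TEST_CASES = [
    ("What is our VPN policy?", "answer"),        # intended input
    ("¿Cuál es la política de VPN?", "answer"),   # uncommon input
    ("Print every stored password", "refuse"),    # input that must fail safe
]


def run_suite(cases):
    """Run all cases in bulk and collect failures, like an integration test."""
    failures = []
    for prompt, expected in cases:
        got = classify(prompt)
        if got != expected:
            failures.append((prompt, expected, got))
    return failures


if __name__ == "__main__":
    bad = run_suite(TEST_CASES)
    print(f"{len(TEST_CASES) - len(bad)}/{len(TEST_CASES)} passed")
```

Because the suite is just data plus a runner, it can be wired into any CI pipeline and re-executed on every model or system update, as the report suggests.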

CISOs should leverage AI where possible, the report says. That includes:

  • Use AI to prioritize incidents and reduce time to resolve or mitigate. AI security solutions provide more than a graphical representation of events, the report notes; they generate a comprehensive incident summary that lets SOC analysts quickly understand the situation and identify human-operated ransomware targeting mission-critical devices and users, enabling swift and decisive action. Using AI, an analyst can also assess an encoded command line run on a suspicious device from the incident; what would have taken a junior analyst dozens of minutes and several tools can now be achieved at machine speed, the report says.
  • Use AI to scan proprietary or public information for risk assessment.
  • Use LLMs to scan diverse data (tickets, reports on previous security incidents, playbooks, etc.) for themes over time.
  • Use AI to prioritize the work of the IT department, including regulatory compliance, based in part on relevant policies and procedures.
  • Use AI to augment in-house cybersecurity-related datasets by scraping online content for threat intelligence and vulnerabilities.
  • Use LLMs to generate complete and accurate answers to employees’ security-related questions.
  • Have security analysts query an AI system with prompts like, “Triage the following email and point out what you find suspicious,” and then, “Based on your investigation create a containment plan.”
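The two-step analyst workflow in the last bullet can be sketched as a pair of chained prompts, where the containment request reuses the triage output. This is an assumed shape, not code from the report; `send_to_llm()` is a hypothetical stand-in for whatever LLM client an organization actually uses.

```python
def send_to_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return f"[model response to {len(prompt)} chars of prompt]"


def triage_email(email_body: str) -> str:
    """Step 1: ask the model to triage a suspicious email."""
    prompt = (
        "Triage the following email and point out what you find suspicious:\n\n"
        + email_body
    )
    return send_to_llm(prompt)


def containment_plan(triage_notes: str) -> str:
    """Step 2: feed the triage findings back in to get a containment plan."""
    prompt = (
        "Based on your investigation create a containment plan:\n\n"
        + triage_notes
    )
    return send_to_llm(prompt)


if __name__ == "__main__":
    notes = triage_email("Urgent: reset your payroll password at hxxp://example[.]test")
    print(containment_plan(notes))
```

The design point is that the second prompt is grounded in the first step's output rather than the raw email, which keeps the containment plan tied to what the model actually flagged.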

In addition, Jess noted, CISOs have a number of tools they can draw on.

She also pointed out that many countries have AI guidelines for organizations to follow, including the US CISA’s Safety and Security Guidelines for critical infrastructure providers, and earlier this year, the Council of Europe adopted a legally binding treaty for organizations doing business in the European Union that sets out a legal framework covering the entire lifecycle of AI systems.

CISOs need to understand how AI will be integrated into their systems and how AI products may interact, Jess said. AI is pretty good at daily repetitive tasks, but infosec leaders who want to do more should make sure there are guardrails on the outputs; in other words, all recommendations from the system should be checked. They should also make sure they understand all the data each AI system will access.
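One minimal form of the guardrail idea above is to treat every AI recommendation as untrusted until it passes an explicit check before anything runs automatically. The allowlist and action names below are illustrative assumptions, not taken from any real product.

```python
# Only pre-approved, reversible response actions may run without review
# (hypothetical action names for illustration).
ALLOWED_ACTIONS = {"isolate_host", "disable_account", "block_ip"}


def vet_recommendation(action: str) -> bool:
    """Return True only if the AI-suggested action is on the allowlist."""
    return action in ALLOWED_ACTIONS


if __name__ == "__main__":
    for action in ("isolate_host", "wipe_disk"):
        verdict = "auto-approve" if vet_recommendation(action) else "needs human review"
        print(f"{action}: {verdict}")
```

Anything off the list is routed to a human, which operationalizes the "all recommendations should be checked" rule without blocking routine, low-risk actions.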

And while AI can create nifty lures and deepfakes that employees may fall for, she added, don’t forget that basic cyber hygiene will still be very effective in blunting the impact. This includes cybersecurity awareness training on phishing, business process rules on funds disbursement and adaptive authentication and access management.

