Generative AI, which has the unique ability to create original content and actions, had its conceptual origins in 1906 when Russian mathematician Andrei Andreevich Markov created a stochastic model of probabilities known as the Markov chain. Markov’s ideas remained mainly theoretical until the 1960s and 1970s, when the computing era took hold and early versions of machine learning were developed.
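To make Markov’s idea concrete, here is a minimal, purely illustrative sketch (the corpus and function names are hypothetical, not drawn from Markov’s work) of a first-order Markov chain that “generates” text by sampling each next word from the successors observed in a training sample:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the sample text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain, picking each next word at random from observed successors."""
    word = start
    output = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

# Illustrative only: a tiny corpus stands in for real training data.
corpus = "the model learns which word tends to follow which word and the model samples from that"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Modern generative models are vastly more sophisticated, but the core notion, predicting what comes next from learned probabilities, traces back to this kind of chain.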
After that, real-world applications of generative AI emerged slowly until late 2022, when OpenAI launched a public beta of its chatbot, ChatGPT, ushering in an AI technology gold rush and a seemingly endless stream of new AI technologies.
Around 18 months after ChatGPT’s launch, an estimated 85% of organizations say they are using AI code-generation tools, even though 80% of those organizations also worry about AI security. The security risks of AI are wide-ranging, including content anomalies, inadequate data protection, poor application security, and code hallucinations.
Despite a wealth of emerging risk management frameworks, experts note that CISOs must grapple with the startling newness of AI threats in a swiftly changing AI environment. They also recommend that CISOs reach out across their organizations to the other departments under pressure to deploy AI technologies quickly.
Against this backdrop, CISOs must also ensure that their traditional cybersecurity risk management programs are sound enough to handle the added layer of new AI threats.
An abundance of high-level efforts to address AI risks
Fears surrounding AI have fostered many new frameworks for managing its risks. Chief among these is the robust AI Risk Management Framework developed by NIST.
According to Munish Walther-Puri, adjunct professor at NYU’s Center for Global Affairs, many other AI risk management frameworks are emerging, including efforts by the Center for Security and Emerging Technology, the Partnership on AI, and the Responsible Artificial Intelligence Institute.
In addition to these frameworks, Walther-Puri points to a host of private sector governance, risk management, and compliance (GRC) frameworks that tackle AI threats, including models from Credo AI, Armilla AI, Holistic AI, QuantPI, and Saidot, to name a few.
Last week, a group of tech titans, including Google, IBM, Intel, Microsoft, NVIDIA, PayPal, Anthropic, Cisco, Chainguard, OpenAI, and Wiz, launched the Coalition for Secure AI (CoSAI), an open-source initiative designed to give practitioners and developers the guidance and tools they need to create secure-by-design AI systems.
Moreover, under the White House’s 2023 executive order on AI safety and security, NIST last week released three final guidance documents and a draft guidance document from the newly created US AI Safety Institute, all intended to help mitigate AI risks. NIST also re-released Dioptra, a test platform for assessing AI’s “trustworthy” characteristics, namely AI that is “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair,” with harmful bias managed.
CISOs should prepare for a rapidly changing environment
Despite the enormous intellectual, technical, and government resources devoted to creating AI risk models, practical advice for CISOs on how to best manage AI risks is currently in short supply.
Although CISOs and security teams have come to understand the supply chain risks of traditional software and code, particularly open-source software, managing AI risks is a whole new ballgame. “The difference is that AI and the use of AI models are new,” Alon Schindel, VP of data and threat research at Wiz, tells CSO.
“We have never seen technology develop as fast as these models,” he says. “It’s not like the machine learning models of the past. There are some great opportunities here, but the work is not done yet. We still haven’t worked out how to ensure this feature will be the most effective for security teams.”
James Robinson, CISO at Netskope, tells CSO, “It’s still very early days. It’s rapidly developing. The research reports are coming out amazingly fast, and there’s a lot of excitement and investment. The landscape continues to evolve. That’s one thing CISOs must be prepared for.”
“Newer architecture and newer models are advancing by the second nowadays,” Omar Santos, distinguished engineer at Cisco and co-chair of CoSAI, tells CSO.

“It’s just like any supply chain,” Walther-Puri says. “By trying to accelerate and get a product to market, organizations will onboard AI suppliers that they haven’t fully vetted.”
One silver lining to AI’s disorienting eruption is that, unlike security solutions for traditional digital and IT technologies, which had to be bolted on after the fact, cybersecurity professionals now have the chance to incorporate more robust security into AI at the outset. “This might be the first time when you see that there’s a new domain and new technology that the security products are being developed together with this domain,” Schindel says.
CISOs need to work with their organizations’ AI deployers
While CISOs are held accountable for the security of their organization’s operations, the pressure to deploy AI technologies typically comes from teams outside their departments, who may not be well-versed in the best security practices.
“We have new teams now,” Schindel says. “Now, the AI engineers or the data science teams are under pressure because they need to deploy new services, models, and pipelines with the new LLMs [large language models]. They need to deploy this code to the production environment as fast as possible. And they don’t necessarily understand all the different risks. On the other hand, the security teams don’t necessarily understand the risks of AI, or they’re not even familiar with the different concepts of AI.”
CISOs must reach outside their groups to work with people in the organization who are adopting AI technologies to ensure the most secure deployments possible. “Since the early eighties, we all figured out security was a problem,” David LaBianca, senior director of security engineering at Google and a CoSAI co-chair, tells CSO. “And you have to apply those foundations to even your use of AI because they still share lots of the same problems. You’ll get caught out if you don’t know how to think about the changing threat landscape due to your use of AI.”
“The threat to the implementation of AI is that most companies are asking all their individuals to do something with AI,” Cisco’s Santos says. “Corporate leaders say, ‘We need to do something with AI tomorrow. We need to do something with AI yesterday. Let’s accelerate the experimentation.’ And because of that speed of innovation and experimentation, we’re reintroducing many of the security issues we have already experienced, such as poor access controls, identity management, etc.”
AI security controls need solid risk management to work
No security controls on AI will work well if the organization lacks a solid, well-established traditional cybersecurity risk management program.
“The way we frame it is that you have to do the foundational best practices of classical security for information systems,” Google’s LaBianca says. “If you’re not doing privileged access management well, if you don’t have really strong network boundaries and host boundaries, if you don’t know where you’re downloading your software from or what your developers are doing, you’re going to have all the same failures you would have in a classical piece of software.”
“We’re beating that drum because it’s easy to get sucked in by the whizzbang elements of AI or just the brand-new unique threats,” LaBianca says. “The squeaky wheel gets the grease, but we want people to remember that the foundational element is important.”
“Step zero is inventory, inventory, inventory,” Santos says, referring to the critical risk management practice of maintaining accurate and up-to-date software inventory records. “Of course, having detailed records of the origins of all the software and everything else is easier said than done.”
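As a purely illustrative sketch of that “step zero” (a hypothetical example, not a tool Santos or CoSAI prescribes), a short script can enumerate the Python packages installed in an environment as a crude starting point for a software inventory; in practice, organizations would typically lean on dedicated SBOM tooling instead:

```python
import json
from importlib.metadata import distributions  # standard library in Python 3.8+

def build_inventory():
    """Collect name and version for every installed Python distribution.

    Illustrative sketch only: a real inventory program would also cover OS
    packages, container images, and AI models/datasets, not just Python deps.
    """
    inventory = []
    for dist in distributions():
        inventory.append({
            "name": dist.metadata["Name"],
            "version": dist.version,
        })
    return sorted(inventory, key=lambda item: (item["name"] or "").lower())

if __name__ == "__main__":
    # Emit the inventory as JSON so it can be diffed over time or fed to other tooling.
    print(json.dumps(build_inventory(), indent=2))
```

Even a simple listing like this makes drift visible between what teams believe is deployed and what is actually running, which is the point of the inventory discipline Santos describes.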
While CISOs currently have few easy go-to solutions for securing AI software and applications, the best option is to learn as much as possible while this technological wild west matures into a more stable ecosystem. Netskope’s Robinson says, “Security leaders are going to have to get more and more comfortable with understanding the AI threat, vulnerability, and response landscape they’re in.”