Cybersecurity solutions provider Securiti has rolled out a new tool, Gencore AI, to help enterprises build generative AI systems, co-pilots, and AI agents.
The tool, according to Securiti, draws on the company's homegrown data security and compliance capabilities to deliver a generative AI-building platform that addresses the control and governance challenges similar tools face when handling enterprise data to build in-house models.
“For enterprise organizations, the biggest barrier to deploying Gen AI systems at scale is safely connecting to data systems while ensuring proper controls and governance throughout the AI pipeline,” said Rehan Jalil, chief executive officer at Securiti. “Since the majority of an organization’s data is unstructured data, it is critical to govern and control these assets as they are tapped to fuel AI.”
Gencore AI, using proprietary technologies, enables organizations to safely connect to hundreds of data systems while preserving data controls and governance, Jalil added.
Homegrown knowledge graphs
Leveraging Securiti's existing security controls, Gencore AI aims to help enterprises build AI systems that align with corporate policies and entitlements, protecting sensitive data from malicious attacks.
“Gencore AI enables organizations to easily and quickly build secure enterprise-grade AI systems,” Jalil said. “It is powered by a unique knowledge graph that maintains granular, contextual insights about data and AI systems.”
The unique knowledge graph Jalil refers to is called the “Data Command Graph.” This capability, Jalil notes, is built on Securiti’s ability to discover “sensitive data and AI systems across a range of public cloud, data cloud, on-prem or private cloud, and SaaS applications – leveraging hundreds of built-in classifiers and over 400 native connectors.”
The graph reportedly provides granular insights, down to the file, column, row, or CLOB (character large object) level, and supports "billions of nodes." This can help organizations process the large volumes of unstructured data, in addition to structured data, that they typically need to train in-house models.
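Securiti has not published the internals of the Data Command Graph, but a graph that tracks data assets at this granularity can be pictured as nodes for tables, columns, and files linked by containment edges and annotated with sensitivity context. The sketch below is purely illustrative; the node attributes and the add_asset/sensitive_assets helpers are assumptions, not Securiti's API.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy graph of data assets with sensitivity labels,
# loosely mirroring the idea of tracking context down to file/column level.
@dataclass
class AssetNode:
    asset_id: str                 # e.g. "crm.contacts.email" (hypothetical naming)
    kind: str                     # "file", "table", "column", "row", "clob", ...
    sensitivity: str = "none"     # e.g. "none", "pii", "phi"
    children: list = field(default_factory=list)   # ids of contained assets

class DataContextGraph:
    def __init__(self):
        self.nodes: dict[str, AssetNode] = {}

    def add_asset(self, asset_id, kind, sensitivity="none", parent=None):
        node = AssetNode(asset_id, kind, sensitivity)
        self.nodes[asset_id] = node
        if parent and parent in self.nodes:
            self.nodes[parent].children.append(asset_id)  # parent -> child containment
        return node

    def sensitive_assets(self):
        # Everything an AI pipeline should treat as controlled data.
        return [n for n in self.nodes.values() if n.sensitivity != "none"]

if __name__ == "__main__":
    g = DataContextGraph()
    g.add_asset("crm.contacts", "table")
    g.add_asset("crm.contacts.email", "column", sensitivity="pii", parent="crm.contacts")
    print([n.asset_id for n in g.sensitive_assets()])  # ['crm.contacts.email']
```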
Conversation-aware LLM firewalls
As businesses deploy GenAI solutions at scale, large language model (LLM) firewalls are becoming increasingly relevant for securing AI interactions, for example by detecting and blocking unauthorized data access and anomalous behavior. Several vendors offer LLM firewall capabilities, including OpenAI, Nvidia, Anthropic, and Scale AI.
Securiti’s Gencore AI includes an LLM firewall that, the company claims, goes beyond legacy offerings with OWASP-aligned controls, pre-configured AI security policies, and data context awareness.
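Securiti has not detailed how its firewall is implemented, but the general pattern of an LLM firewall, inspecting prompts, retrieved context, and responses against policies before they reach the model or the user, can be sketched as follows. The patterns, policies, and function names here are illustrative assumptions, not Gencore AI's actual controls.

```python
import re

# Illustrative sketch of an LLM firewall check: screen a prompt or response
# against simple policies before it is passed to the model or the user.
# The patterns and verdicts below are assumptions for demonstration only.

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
BLOCKED_PHRASES = ("ignore previous instructions",)  # toy prompt-injection check

def inspect(text: str) -> dict:
    findings = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
    injected = any(p in text.lower() for p in BLOCKED_PHRASES)
    if injected:
        return {"action": "block", "reason": "possible prompt injection"}
    if findings:
        redacted = text
        for name in findings:
            redacted = PII_PATTERNS[name].sub(f"[{name} redacted]", redacted)
        return {"action": "redact", "text": redacted, "findings": findings}
    return {"action": "allow", "text": text}

if __name__ == "__main__":
    print(inspect("My SSN is 123-45-6789, please summarize my account."))
    # -> redacts the SSN before the text moves on through the pipeline
```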
“Gencore AI automatically protects sensitive information and upholds corporate data governance,” Jalil said. “With built-in regulatory knowledge, Gencore AI also ensures AI processes comply with relevant regulations, such as the EU AI Act and NIST AI RMF.” Gencore AI is already available under a “per-feature” subscription whose pricing varies by the features selected.