AFA

The Crucial Network Security Guardrails for Ensuring GenAI Success

- Updated Sep 23, 2024
Illustration: © AI For All
Millions of organizations are actively deploying generative AI (GenAI) applications to streamline productivity, reduce costs, and improve efficiency, and many more enterprises are still trialing pilot programs. 
By 2026, it is anticipated that more than 80 percent of enterprises will have deployed generative AI-enabled applications, according to Gartner. While the business value of GenAI is undeniable, the potential security risks and vulnerabilities that can arise when leveraging such technology are vast.
As organizations experiment with GenAI and scale their ecosystems, it is imperative to proceed with caution and implement strict security guardrails from the outset. Otherwise, organizations run the risk of falling victim to cyber incidents and data exfiltration. 
With GenAI being a relatively new phenomenon, many organizations remain unaware of the core network security measures required to safeguard the use of the technology. There are several core guardrails enterprises must deploy to protect against the security risks of these applications.
Security Guardrails
#1: Establish Security Controls at Ingress and Egress Levels
Customized GenAI models are built on high volumes of confidential or proprietary company data. Organizations must set governance and security controls at the ingress and egress levels to safeguard this sensitive data from malicious actors. 
"Ingress" refers to the data and traffic that flows into GenAI applications from the Internet or other external sources. Setting fine-grained controls that regulate what can enter GenAI applications is key to avoiding data exfiltration. 
"Egress," on the other hand, refers to the traffic leaving GenAI applications. Here, it is essential to devise policies and procedures that ensure sensitive data does not leave the enterprise in an unregulated way.
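The ingress and egress controls described above can be sketched as a default-deny allowlist combined with outbound payload inspection. This is a minimal illustration only; the hostnames and the credential-redaction pattern below are assumptions for the example, not a reference to any specific product or API.

```python
import re

# Hypothetical allowlist: egress is default-deny, and traffic may leave
# only toward explicitly approved internal destinations.
ALLOWED_EGRESS = {"api.internal.example.com", "models.internal.example.com"}

# Illustrative pattern for credential-like tokens (API keys, cloud key IDs).
SECRET_PATTERN = re.compile(r"\b(?:sk-[A-Za-z0-9]{16,}|AKIA[0-9A-Z]{16})\b")

def egress_allowed(destination: str) -> bool:
    """Default-deny egress: permit only destinations on the allowlist."""
    return destination in ALLOWED_EGRESS

def redact(payload: str) -> str:
    """Strip credential-like tokens from a payload before it leaves."""
    return SECRET_PATTERN.sub("[REDACTED]", payload)
```

In practice these rules would live in network policy (e.g. at a gateway or CNI layer) rather than application code; the sketch only shows the default-deny shape of the decision.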
#2: Enforce Isolation and Implement Micro-Segmentation
To limit the risk of exposure, particularly with pre-packaged GenAI models, organizations must implement a sufficient level of multi-tenancy and segregation between GenAI applications and their broader infrastructure. 
Establishing these barriers will limit the blast radius of potential compromise and will prevent the incident from spreading to other applications within the enterprise. This separation can be achieved at either the application level or within specific namespaces.
In addition, implementing micro-segmentation at the tenant or namespace level is critical, especially for preventing the lateral movement of attackers. 
This technique divides networks into smaller, isolated segments, allowing for more granular control over traffic flow, significantly bolstering overall security posture.
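The segmentation idea above reduces to a default-deny rule between segments: a flow is permitted only if its (source, destination) pair has been explicitly granted. The namespace names below are hypothetical, chosen only to illustrate how lateral movement is blocked.

```python
# Explicit allowlist of namespace-to-namespace flows. Anything not listed
# is denied, so a compromised segment cannot move laterally.
ALLOWED_FLOWS = {
    ("genai-frontend", "genai-inference"),
    ("genai-inference", "vector-store"),
}

def flow_allowed(src_ns: str, dst_ns: str) -> bool:
    """Default-deny between segments; only granted pairs may communicate."""
    return (src_ns, dst_ns) in ALLOWED_FLOWS
```

Note that grants are directional: the frontend may call inference, but nothing in the list lets the vector store initiate a connection back to the frontend.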
#3: Scan for Vulnerabilities at Build and Runtime
Scanning for vulnerabilities at the build and runtime phases is another important piece of the puzzle. Vulnerabilities come in a variety of forms and stem from various types of software flaws and misconfigurations.
Vulnerabilities can be inherited from open-source libraries, base images, and other third-party components—some of which are known and others that are yet to be discovered. Security risks can also originate from first-party application code. 
Organizations must implement controls that enable them to continuously track these vulnerabilities and misconfigurations to determine if any are present within their GenAI applications and how they may potentially be exploited by attackers. 
Organizations should keep track of all these issues in both build and runtime phases to prioritize remediation efforts and proactively plug security gaps.
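One way to combine build-time and runtime findings, as suggested above, is to rank vulnerabilities by whether the affected component is actually loaded in production. The CVE identifiers and package names below are made up for illustration.

```python
def prioritize(build_findings: dict[str, str], runtime_loaded: set[str]) -> list[str]:
    """Return CVE IDs ordered so that vulnerabilities in packages observed
    at runtime come first; ties break alphabetically by CVE ID."""
    return sorted(
        build_findings,
        key=lambda cve: (build_findings[cve] not in runtime_loaded, cve),
    )

# Hypothetical scan output: CVE -> affected package, plus the set of
# packages actually loaded by the running GenAI application.
build = {"CVE-2024-0001": "libfoo", "CVE-2024-0002": "libbar"}
loaded = {"libbar"}
order = prioritize(build, loaded)  # libbar issue is remediated first
```

The point of the sketch is the ranking criterion, not the data model: a build-time finding that never reaches runtime is still worth fixing, but it is lower on the remediation queue.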
#4: Adopt a Least Privilege Approach
One of the most prevalent challenges with GenAI is combinatorial. Most organizations run numerous GenAI applications that access multiple data sources and use multiple models. 
This creates the pivotal security challenge of controlling which models and applications can communicate with each other, as well as with the various data sources. 
Oftentimes, a large number of these permutations are enabled by default, allowing all applications and models to communicate with one another. Organizations must instead adopt a least privilege approach to GenAI, enabling connectivity only where it is required.
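The combinatorial point can be made concrete: with everything open by default, the number of app-model-datasource paths grows multiplicatively, while a least-privilege posture replaces that with a short list of explicit grants. The application, model, and data source names below are hypothetical.

```python
from itertools import product

apps = ["chatbot", "summarizer", "search"]
models = ["llm-a", "llm-b"]
sources = ["crm-db", "wiki", "tickets"]

# Default-allow: every (app, model, source) permutation is a live path
# an attacker could traverse -- 3 x 2 x 3 = 18 paths here.
default_allow = set(product(apps, models, sources))

# Least privilege: only the paths the business actually requires.
least_privilege = {
    ("chatbot", "llm-a", "crm-db"),
    ("summarizer", "llm-b", "wiki"),
}
```

Even in this toy example the attack surface shrinks from eighteen open paths to two, and the gap widens quickly as real estates add applications, models, and data sources.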
#5: Promote Collaboration Between Developers and Security Teams
To ensure organizations are deriving the most value from GenAI applications while keeping security at the forefront, it is also essential to promote cross-collaboration between the developers experimenting with GenAI applications and security teams trying to secure these innovations.
Developers must be part of setting security guardrails. They understand the specific makeup of the technology and can help security teams garner a deeper understanding of where possible risks may arise. 
Security teams must also effectively communicate their reasoning behind the need for security controls at various aspects of the application.
#6: Enable Security Controls and Troubleshooting
Many GenAI applications span multiple clusters, which creates both security and troubleshooting challenges. 
Organizations need a single point of control to enforce network security across those clusters, and a centralized way to troubleshoot issues that cross cluster boundaries. Enforcement can be achieved through federated policies.
To address issues with troubleshooting, organizations should use tools that monitor activities and traffic within and across clusters to quickly visualize and identify abnormal patterns or behaviors.
Challenges with troubleshooting have become a critical issue for organizations deploying GenAI applications. Enterprises have to run GenAI applications on multiple Graphics Processing Units (GPUs) and sometimes end up having hundreds, if not thousands, of GPUs to train their GenAI models. 
Because GPUs are so expensive, every minute of downtime in which applications are not running is costly; being able to troubleshoot and establish the root cause rapidly is therefore essential.
Latency remains one of the biggest pain points, and organizations must be able to identify its root cause quickly. This requires tools that can visualize what is happening and pinpoint where issues lie, enabling organizations to troubleshoot faster and reduce the cost of downtime.
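The kind of signal such a tool surfaces can be sketched simply: compare each cluster's latest latency sample against its own historical baseline and flag outliers. The threshold, cluster names, and sample data below are illustrative assumptions.

```python
from statistics import mean, stdev

def abnormal_clusters(latencies_ms: dict[str, list[float]], z: float = 3.0) -> list[str]:
    """Flag clusters whose latest latency sample deviates more than z
    standard deviations from that cluster's historical mean."""
    flagged = []
    for cluster, samples in latencies_ms.items():
        history, latest = samples[:-1], samples[-1]
        if len(history) < 2:
            continue  # not enough history to form a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(latest - mu) > z * sigma:
            flagged.append(cluster)
    return flagged
```

A real observability pipeline would use rolling windows and percentile-based baselines rather than a naive z-score, but the shape of the check is the same: per-cluster baselines, compared centrally.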
Securing the Future
As the business case for generative AI applications becomes more apparent, there is no doubt that adoption of the technology will continue to accelerate. 
However, the excitement and hype around this new technical innovation cannot overshadow the need for strict security guardrails and the governance required to adopt GenAI applications securely. 
Cybercriminals are jumping on the GenAI bandwagon, too—looking for ways to exploit the technology—and organizations must keep their guard up. There is no time like the present to start rolling out these preventative network security guardrails.
Generative AI
Cybersecurity
Enterprise AI
Author
Ratan Tipirneni is President & CEO at Tigera, where he is responsible for defining strategy, leading execution, and scaling revenues. Ratan is an entrepreneurial executive with extensive experience incubating, building, and scaling software businesses from early stage to hundreds of millions of dollars in revenue. He is a proven leader with a track record of building world-class teams.