PepsiCo and JetBlue: Case Studies in Securing RAG LLM Deployments
By Carsten Krause, July 3, 2024
The Imperative of Secure LLM Deployment
The rise of Retrieval-Augmented Generation (RAG) in natural language processing has transformed industries across the board. However, deploying RAG-based large language models (LLMs) demands a stringent focus on security, particularly when sensitive corporate data is involved. This article examines the secure deployment of RAG LLMs in an internally hosted environment, emphasizing role-based access control (RBAC) and monitoring and security measures, and contrasts these setups with public LLM deployments. Additionally, we explore case studies from PepsiCo and JetBlue, illustrating their approaches to securing generative AI in both internal and external contexts.
Secure Deployment of RAG LLMs: Internal Hosting with RBAC
Role-Based Access Control (RBAC)
Implementing RBAC is crucial for the secure deployment of RAG LLMs within an organization. RBAC ensures that users have access only to the data and functions necessary for their roles, minimizing the risk of data breaches and unauthorized access.
- Define Roles and Permissions: Clearly delineate roles within the organization, such as data scientists, analysts, and administrators. Assign specific permissions to each role, ensuring that access to the LLM and its outputs is strictly controlled.
- Implement Authentication and Authorization: Use multi-factor authentication (MFA) and robust authorization protocols to verify user identities and enforce access policies.
- Audit and Monitor Access: Regularly audit access logs to detect any anomalies or unauthorized access attempts. Automated monitoring tools can alert administrators to potential security breaches in real-time.
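The deny-by-default logic behind these RBAC steps can be sketched in a few lines. The role names and permissions below are illustrative assumptions, not any specific product's schema:

```python
# Minimal sketch of role-based access control for an internal RAG LLM.
# Roles and permission names are illustrative, not a real product's API.
ROLE_PERMISSIONS = {
    "data_scientist": {"query_llm", "view_embeddings"},
    "analyst": {"query_llm"},
    "administrator": {"query_llm", "view_embeddings", "manage_index", "view_audit_log"},
}

def is_authorized(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the requested action."""
    # Deny by default: unknown roles and unlisted actions are rejected.
    return action in ROLE_PERMISSIONS.get(role, set())
```

In a real deployment the role-to-permission mapping would live in an identity provider or policy engine rather than in application code, but the deny-by-default principle stays the same.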
Secure Hosting Environment
Deploying RAG LLMs in a secure, internally hosted environment involves several best practices:
- Isolate the LLM Environment: Use virtual private networks (VPNs) and network segmentation to isolate the LLM environment from the broader corporate network. This reduces the risk of lateral movement by malicious actors.
- Encrypt Data: Ensure that all data, both in transit and at rest, is encrypted using industry-standard encryption protocols.
- Regular Security Updates: Keep all software and hardware components up to date with the latest security patches to protect against known vulnerabilities.
- Intrusion Detection Systems (IDS): Deploy IDS to monitor network traffic for suspicious activity and potential intrusions.
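For encryption in transit, clients connecting to the internal LLM service can be forced onto modern TLS. A minimal sketch using Python's standard `ssl` module follows; endpoint names and certificate paths are deployment-specific and omitted here:

```python
import ssl

# Sketch: enforce encrypted transport for connections to an internal LLM service.
# Certificate authorities and endpoints depend on your organization's PKI.
context = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
context.check_hostname = True                     # reject mismatched certificates
context.verify_mode = ssl.CERT_REQUIRED           # never connect unauthenticated
```

This context would then wrap the client socket (or be passed to an HTTP client), so that any connection falling back to plaintext or an outdated protocol version simply fails.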
Monitoring and Security Measures
Continuous monitoring and proactive security measures are essential to maintaining the integrity of RAG LLM deployments:
- Real-Time Monitoring: Utilize security information and event management (SIEM) systems to provide real-time monitoring and analysis of security alerts.
- Regular Security Audits: Conduct regular security audits and penetration testing to identify and mitigate vulnerabilities.
- Incident Response Plan: Develop and regularly update an incident response plan to quickly address any security breaches.
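The audit-log review described above can be partially automated. The sketch below flags accounts with repeated authentication failures; the log format and alert threshold are illustrative assumptions, not a particular SIEM's schema:

```python
from collections import Counter

def failed_login_alerts(log_lines, threshold=3):
    """Return usernames whose failed-login count meets the alert threshold."""
    failures = Counter()
    for line in log_lines:
        if "AUTH_FAIL" in line:
            user = line.split()[-1]  # assume the username is the last field
            failures[user] += 1
    return [user for user, count in failures.items() if count >= threshold]

# Illustrative log lines in an assumed "<timestamp> <event> <user>" format.
logs = [
    "2024-07-01T10:00:01 AUTH_FAIL alice",
    "2024-07-01T10:00:05 AUTH_FAIL alice",
    "2024-07-01T10:00:09 AUTH_FAIL alice",
    "2024-07-01T10:01:00 AUTH_OK bob",
]
```

A production SIEM applies far richer correlation rules, but the core pattern is the same: aggregate events per principal and alert when a threshold is crossed.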
Contrasting Internal and Public LLM Deployments
Public LLM Deployments: Risks and Mitigations
Deploying RAG LLMs on public platforms, such as corporate websites, introduces additional security challenges. The primary concerns include unauthorized access, data leakage, and potential misuse of the model.
- Access Control: Implement stringent access controls to restrict usage to authorized users. This can include user authentication, API rate limiting, and usage monitoring.
- Data Privacy: Ensure that any user data processed by the public LLM is anonymized and complies with data privacy regulations such as GDPR or CCPA.
- Model Misuse Prevention: Implement safeguards to prevent misuse of the model, such as filtering out harmful or inappropriate content and monitoring for abuse patterns.
- Regular Updates and Monitoring: Keep the model and its deployment environment up to date with the latest security patches. Use monitoring tools to detect and respond to potential security incidents promptly.
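API rate limiting, one of the access controls mentioned above, is commonly implemented as a token bucket. A minimal sketch follows; the capacity and refill rate are illustrative, not any vendor's defaults:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: each request spends one token;
    tokens refill continuously up to a fixed capacity."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

One bucket per API key (or per client IP) throttles abusive callers while leaving normal traffic unaffected; rejected requests would typically receive an HTTP 429 response.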
Case Studies: PepsiCo and JetBlue
PepsiCo: Securing Internal RAG LLM Deployment
PepsiCo, a global food and beverage leader, has successfully integrated RAG LLMs within its internal operations, primarily focusing on optimizing supply chain management and enhancing market analysis capabilities. The company’s commitment to security and efficiency in AI deployment is reflected in its comprehensive approach.
Role-Based Access Control Implementation

PepsiCo’s internal RAG LLM deployment involved meticulous planning and execution of role-based access control (RBAC):
- Role Definition and Permissions: PepsiCo clearly defined roles such as data scientists, supply chain analysts, and IT administrators. Each role was assigned specific permissions, ensuring that users had access only to the data and functionalities necessary for their tasks.
- Authentication and Authorization: Multi-factor authentication (MFA) was implemented to add an additional layer of security, ensuring that only authorized personnel could access the LLM. Robust authorization protocols verified user identities and enforced access policies rigorously.
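MFA flows like the one described typically verify a time-based one-time password (TOTP). The sketch below implements the RFC 6238 algorithm with only Python's standard library; it is a generic illustration, not PepsiCo's actual implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

The server and the user's authenticator app share the base32 secret; at login the server recomputes the code for the current time window and compares it (allowing one window of clock skew in practice) to what the user typed.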
Secure Hosting Environment
PepsiCo hosts the LLM in a highly secure environment that adheres to best practices for data protection and network security.
The company leveraged Databricks on Azure to manage and deploy its RAG LLMs efficiently. Databricks provided a unified analytics platform that facilitated the integration of AI models into PepsiCo's existing data workflows, ensuring scalability and compliance with security standards.
- Scalability: The Databricks platform allowed PepsiCo to scale their LLM deployment as needed, accommodating increasing data volumes and computational requirements.
- Security: Databricks’ robust security features, including data encryption and access controls, complemented PepsiCo’s internal security measures, ensuring a secure deployment environment.
- Network Isolation: PepsiCo utilized virtual private networks (VPNs) and network segmentation to isolate the LLM environment from the broader corporate network, reducing the risk of lateral movement by malicious actors.
- Data Encryption: All data, whether in transit or at rest, is encrypted using industry-standard encryption protocols, ensuring data integrity and confidentiality.
- Security Patches and Updates: Regular updates and security patches were applied to all software and hardware components, protecting against known vulnerabilities and emerging threats.
- Intrusion Detection Systems (IDS): IDS were deployed to monitor network traffic for suspicious activity and potential intrusions, providing real-time alerts to security teams.
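Alongside encryption, stored records can carry an integrity tag so that tampering is detectable. A minimal HMAC sketch follows; the key and record format are illustrative, and in practice the key would come from a secrets manager rather than source code:

```python
import hashlib
import hmac

# Illustrative only -- a real deployment loads this from a secrets manager.
SECRET_KEY = b"replace-with-managed-key"

def sign(record: bytes) -> str:
    """Produce an HMAC-SHA256 integrity tag for a stored record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify(record: bytes, tag: str) -> bool:
    """Check a record against its tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(record), tag)
```

Storing the tag beside each record means any modification made without the key, whether by an intruder or a faulty process, fails verification on read.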
Monitoring and Security Measures
PepsiCo adopted a proactive approach to monitoring and securing its RAG LLM deployment:
- Real-Time Monitoring: Security information and event management (SIEM) systems were utilized to provide real-time monitoring and analysis of security alerts, enabling quick detection and response to potential threats.
- Regular Security Audits: Routine security audits and penetration testing were conducted to identify and mitigate vulnerabilities, ensuring the continuous security of the deployment.
- Incident Response Plan: An incident response plan was developed and regularly updated, enabling PepsiCo to quickly address and remediate any security breaches.
For more details on PepsiCo’s approach, visit PepsiCo’s AI Strategy.
JetBlue: Public LLM Deployment on Corporate Website
JetBlue, a major American airline, has integrated RAG LLMs into the customer service platform on its corporate website, enhancing user interaction and support while maintaining stringent security measures.
Secure Public Deployment
JetBlue’s approach to deploying RAG LLMs publicly focused on robust security measures to protect user data and prevent misuse.
Like PepsiCo, JetBlue utilized Databricks on Azure to manage and deploy its RAG LLMs. The platform provided the infrastructure needed to support large-scale data processing and AI model integration.
- Data Processing: Databricks enabled efficient processing and analysis of vast amounts of customer interaction data, improving the accuracy and responsiveness of the LLM.
- Security and Compliance: Databricks’ built-in security features ensured that JetBlue’s LLM deployment adhered to stringent security and compliance standards.
- User Authentication: Customers were required to authenticate before accessing certain features powered by the LLM, ensuring secure interactions and protecting user data.
- Data Privacy Compliance: JetBlue anonymized customer data to protect privacy and complied with all relevant data protection regulations, such as GDPR and CCPA.
- Model Misuse Prevention: The LLM was programmed to filter out harmful or inappropriate content, and monitoring tools were used to detect and respond to abuse patterns promptly.
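The anonymization step described above can be sketched with simple pattern masking. The regexes below catch only common email and phone shapes and stand in for the dedicated PII-detection tooling a GDPR/CCPA pipeline would actually use:

```python
import re

# Illustrative patterns for common email addresses and US phone numbers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(text: str) -> str:
    """Mask email addresses and phone numbers before text reaches the model."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

Running this (or a more capable PII detector) on every customer message before retrieval and generation keeps raw identifiers out of the model, its logs, and its vector store.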
Continuous Monitoring and Updates
JetBlue ensured the ongoing security of its public LLM deployment through continuous monitoring and regular updates:
- Real-Time Monitoring: Security information and event management (SIEM) systems provided real-time monitoring and analysis of security alerts, enabling quick detection and response to potential threats.
- Security Patches and Updates: The model and its deployment environment were kept up to date with the latest security patches, protecting against known vulnerabilities and emerging threats.
- User Activity Monitoring: JetBlue monitored user activity to detect and prevent misuse of the LLM, ensuring that interactions remained secure and appropriate.
For more information on JetBlue’s AI deployment, visit JetBlue’s AI Initiatives.
Enhancing the Security of Large Language Model Deployments: Key Platforms and Solutions
Deploying large language models (LLMs) securely requires robust security measures backed by the right tooling. Specialized platforms address the critical aspects of LLM security, including role-based access control, data encryption, real-time monitoring, and compliance with data privacy regulations, helping organizations mitigate risks, prevent unauthorized access, and protect the integrity and confidentiality of their data. The table below lists some of the leading companies and platforms that specialize in securing LLM deployments, highlighting their key features and capabilities.
Companies Enabling Secure LLM Deployment
| Company | Solution | Features | Website |
|---|---|---|---|
| THEOM | Secure LLM Deployment Platform | Role-based access, encryption, real-time monitoring | THEOM |
| Premcloud LLM Data Feeder | Secure Data Feeder | Secure data ingestion, RBAC, compliance with data privacy regulations | Premcloud |
| Databricks | Unified Analytics Platform | Scalable data processing, security compliance, integration with Azure | Databricks |
| AWS SageMaker | Machine Learning Service | Secure deployment, monitoring, encryption, and compliance | AWS SageMaker |
| Microsoft Azure AI | AI Platform | Security and compliance, integration with Azure services, RBAC | Azure AI |
| Google Cloud AI | AI and Machine Learning Services | Robust security features, scalability, integration with Google Cloud | Google Cloud AI |
| Giskard | LLMOps Management Platform | Centralized quality and security management, automated testing | Giskard |
| Lakera | LLM Security Solutions | Red-teaming, risk assessment, differential privacy | Lakera |
| WhyLabs | AI Performance and Security Platform | Real-time monitoring, explainable AI, adversarial testing | WhyLabs |
| CAI Platforms | Autonomous Generative AI Platform | Role-based control, centralized authority, seamless integration with Azure | CAI Platforms |
The CDO TIMES Bottom Line
Deploying Retrieval-Augmented Generation (RAG) large language models (LLMs) securely, whether internally hosted or on public platforms, requires meticulous planning and robust security measures. Here are the key takeaways:
- Role-Based Access Control (RBAC): Implementing RBAC is essential to ensure that only authorized personnel have access to sensitive data and model functionalities. This minimizes the risk of data breaches and unauthorized access, crucial for both internal and public deployments.
- Secure Hosting Environments: Whether hosting internally or using cloud platforms like Azure with Databricks, isolating the LLM environment, encrypting data, and applying regular security updates are fundamental. This ensures that sensitive information remains protected and the system remains resilient to attacks.
- Continuous Monitoring and Audits: Real-time monitoring using Security Information and Event Management (SIEM) systems, regular security audits, and having an incident response plan in place are vital to detect and mitigate potential security threats swiftly.
- Data Privacy and Compliance: Public LLM deployments must prioritize data privacy, ensuring compliance with regulations such as GDPR and CCPA. This involves anonymizing user data and implementing strict access controls.
- Model Misuse Prevention: Safeguards to prevent the misuse of LLMs, such as filtering out harmful content and monitoring for abuse patterns, are crucial for maintaining the integrity and trustworthiness of the deployed models.
As organizations continue to leverage AI, maintaining a strong security posture will be paramount to their success. The integration of comprehensive security measures and the utilization of robust platforms ensure that generative AI deployments are both efficient and secure.
Do You Need Help?
Consider bringing on a fractional CIO, CISO, CDO or CAIO from CDO TIMES Leadership as a Service. The expertise of CDO TIMES becomes indispensable for organizations striving to stay ahead in the digital transformation journey. Here are some compelling reasons to engage their experts:
- Deep Expertise: CDO TIMES has a team of experts with deep expertise across cybersecurity, digital, data, and AI and their integration into business processes. This knowledge ensures that your organization can leverage digital and AI in optimal and innovative ways.
- Strategic Insight: Not only can the CDO TIMES team help develop a Digital & AI strategy, but they can also provide insights into how this strategy fits into your overall business model and objectives. They understand that every business is unique, and so should be its Digital & AI strategy.
- Future-Proofing: With CDO TIMES, organizations can ensure they are future-proofed against rapid technological changes. Our experts stay abreast of the latest AI, Data and digital advancements and can guide your organization to adapt and evolve as the technology does.
- Risk Management: Implementing a Digital & AI strategy is not without its risks. The CDO TIMES can help identify potential pitfalls and develop mitigation strategies, helping you avoid costly mistakes and ensuring a smooth transition with fractional CISO services.
- Competitive Advantage: Finally, by hiring CDO TIMES experts, you are investing in a competitive advantage. Their expertise can help you speed up your innovation processes, bring products to market faster, and stay ahead of your competitors.
By employing the expertise of CDO TIMES, organizations can navigate the complexities of digital innovation with greater confidence and foresight, setting themselves up for success in the rapidly evolving digital economy. The future is digital, and with CDO TIMES, you’ll be well-equipped to lead in this new frontier.
Do you need help with your digital transformation initiatives? We provide fractional CAIO, CDO, CISO, and CIO services along with Preliminary ECI and Tech Navigator Assessments, and we will help you drive results and deliver winning digital and AI strategies.

