AI StrategyArtificial IntelligenceDigitalLLMRAG

PepsiCo and JetBlue: Case Studies in Securing RAG LLM Deployments

By Carsten Krause, July 3, 2024

The Imperative of Secure LLM Deployment

The rise of Retrieval-Augmented Generation (RAG) models in natural language processing has revolutionized various industries. However, the deployment of RAG Language Learning Models (LLMs) necessitates a stringent focus on security, particularly when dealing with sensitive corporate data. This article delves into the secure deployment of RAG LLMs within an internally hosted environment, emphasizing role-based access control (RBAC), monitoring security measures, and contrasting these setups with public LLM deployments. Additionally, we explore case studies from PepsiCo and JetBlue, illustrating their approaches to securing generative AI in both internal and external contexts.

Secure Deployment of RAG LLMs: Internal Hosting with RBAC

Role-Based Access Control (RBAC)

Implementing RBAC is crucial for the secure deployment of RAG LLMs within an organization. RBAC ensures that users have access only to the data and functions necessary for their roles, minimizing the risk of data breaches and unauthorized access.

  1. Define Roles and Permissions: Clearly delineate roles within the organization, such as data scientists, analysts, and administrators. Assign specific permissions to each role, ensuring that access to the LLM and its outputs is strictly controlled.
  2. Implement Authentication and Authorization: Use multi-factor authentication (MFA) and robust authorization protocols to verify user identities and enforce access policies.
  3. Audit and Monitor Access: Regularly audit access logs to detect any anomalies or unauthorized access attempts. Automated monitoring tools can alert administrators to potential security breaches in real-time.

Secure Hosting Environment

Deploying RAG LLMs in a secure, internally hosted environment involves several best practices:

  1. Isolate the LLM Environment: Use virtual private networks (VPNs) and network segmentation to isolate the LLM environment from the broader corporate network. This reduces the risk of lateral movement by malicious actors.
  2. Encrypt Data: Ensure that all data, both in transit and at rest, is encrypted using industry-standard encryption protocols.
  3. Regular Security Updates: Keep all software and hardware components up to date with the latest security patches to protect against known vulnerabilities.
  4. Intrusion Detection Systems (IDS): Deploy IDS to monitor network traffic for suspicious activity and potential intrusions.

Monitoring and Security Measures

Continuous monitoring and proactive security measures are essential to maintaining the integrity of RAG LLM deployments:

  1. Real-Time Monitoring: Utilize security information and event management (SIEM) systems to provide real-time monitoring and analysis of security alerts.
  2. Regular Security Audits: Conduct regular security audits and penetration testing to identify and mitigate vulnerabilities.
  3. Incident Response Plan: Develop and regularly update an incident response plan to quickly address any security breaches.

Contrasting Internal and Public LLM Deployments

Public LLM Deployments: Risks and Mitigations

Deploying RAG LLMs on public platforms, such as corporate websites, introduces additional security challenges. The primary concerns include unauthorized access, data leakage, and potential misuse of the model.

  1. Access Control: Implement stringent access controls to restrict usage to authorized users. This can include user authentication, API rate limiting, and usage monitoring.
  2. Data Privacy: Ensure that any user data processed by the public LLM is anonymized and complies with data privacy regulations such as GDPR or CCPA.
  3. Model Misuse Prevention: Implement safeguards to prevent misuse of the model, such as filtering out harmful or inappropriate content and monitoring for abuse patterns.
  4. Regular Updates and Monitoring: Keep the model and its deployment environment up to date with the latest security patches. Use monitoring tools to detect and respond to potential security incidents promptly.

Case Studies: PepsiCo and JetBlue

PepsiCo: Securing Internal RAG LLM Deployment

PepsiCo, a global food and beverage leader, has successfully integrated RAG LLMs within its internal operations, primarily focusing on optimizing supply chain management and enhancing market analysis capabilities. The company’s commitment to security and efficiency in AI deployment is reflected in its comprehensive approach.

Role-Based Access Control Implementation

PepsiCo’s internal RAG LLM deployment involved meticulous planning and execution of role-based access control (RBAC):

  1. Role Definition and Permissions: PepsiCo clearly defined roles such as data scientists, supply chain analysts, and IT administrators. Each role was assigned specific permissions, ensuring that users had access only to the data and functionalities necessary for their tasks.
  2. Authentication and Authorization: Multi-factor authentication (MFA) was implemented to add an additional layer of security, ensuring that only authorized personnel could access the LLM. Robust authorization protocols verified user identities and enforced access policies rigorously.

Secure Hosting Environment

The LLM is hosted in a highly secure environment, adhering to best practices for data protection and network security:

PepsiCo has leveraged Databricks on Azure to manage and deploy its RAG LLMs efficiently. Databricks provided a unified analytics platform that facilitated the integration of AI models into their existing data workflows, ensuring scalability and compliance with security standards.

  • Scalability: The Databricks platform allowed PepsiCo to scale their LLM deployment as needed, accommodating increasing data volumes and computational requirements.
  • Security: Databricks’ robust security features, including data encryption and access controls, complemented PepsiCo’s internal security measures, ensuring a secure deployment environment.
  • Network Isolation: PepsiCo utilized virtual private networks (VPNs) and network segmentation to isolate the LLM environment from the broader corporate network, reducing the risk of lateral movement by malicious actors.
  • Data Encryption: All data, whether in transit or at rest, is encrypted using industry-standard encryption protocols, ensuring data integrity and confidentiality.
  • Security Patches and Updates: Regular updates and security patches were applied to all software and hardware components, protecting against known vulnerabilities and emerging threats.
  • Intrusion Detection Systems (IDS): IDS were deployed to monitor network traffic for suspicious activity and potential intrusions, providing real-time alerts to security teams.

    Monitoring and Security Measures

    PepsiCo adopted a proactive approach to monitoring and securing its RAG LLM deployment:

    1. Real-Time Monitoring: Security information and event management (SIEM) systems were utilized to provide real-time monitoring and analysis of security alerts, enabling quick detection and response to potential threats.
    2. Regular Security Audits: Routine security audits and penetration testing were conducted to identify and mitigate vulnerabilities, ensuring the continuous security of the deployment.
    3. Incident Response Plan: An incident response plan was developed and regularly updated, enabling PepsiCo to quickly address and remediate any security breaches.

    For more details on PepsiCo’s approach, visit PepsiCo’s AI Strategy.

    JetBlue: Public LLM Deployment on Corporate Website

    JetBlue, a major American airline, has integrated RAG LLMs into their customer service platform on their corporate website, enhancing user interaction and support while maintaining stringent security measures.

    Secure Public Deployment

    JetBlue’s approach to deploying RAG LLMs publicly focused on robust security measures to protect user data and prevent misuse:

    JetBlue also utilized Databricks on Azure to enhance the management and deployment of their RAG LLMs. The platform provided the necessary infrastructure to support large-scale data processing and AI model integration.

    • Data Processing: Databricks enabled efficient processing and analysis of vast amounts of customer interaction data, improving the accuracy and responsiveness of the LLM.
    • Security and Compliance: Databricks’ built-in security features ensured that JetBlue’s LLM deployment adhered to stringent security and compliance standards.
    • User Authentication: Customers were required to authenticate before accessing certain features powered by the LLM, ensuring secure interactions and protecting user data.
    • Data Privacy Compliance: JetBlue anonymized customer data to protect privacy and complied with all relevant data protection regulations, such as GDPR and CCPA.
    • Model Misuse Prevention: The LLM was programmed to filter out harmful or inappropriate content, and monitoring tools were used to detect and respond to abuse patterns promptly.

      Continuous Monitoring and Updates

      JetBlue ensured the ongoing security of its public LLM deployment through continuous monitoring and regular updates:

      1. Real-Time Monitoring: Security information and event management (SIEM) systems provided real-time monitoring and analysis of security alerts, enabling quick detection and response to potential threats.
      2. Security Patches and Updates: The model and its deployment environment were kept up to date with the latest security patches, protecting against known vulnerabilities and emerging threats.
      3. User Activity Monitoring: JetBlue monitored user activity to detect and prevent misuse of the LLM, ensuring that interactions remained secure and appropriate.

      For more information on JetBlue’s AI deployment, visit JetBlue’s AI Initiatives.

      Enhancing the Security of Large Language Model Deployments: Key Platforms and Solutions

      Deploying Large Language Models (LLMs) securely requires a combination of robust security measures and advanced technological solutions. Companies looking to enhance the security of their LLM deployments can benefit from various specialized platforms that offer a suite of security features. These platforms address critical aspects of security such as role-based access control, data encryption, real-time monitoring, and compliance with data privacy regulations. They provide comprehensive solutions designed to mitigate risks, prevent unauthorized access, and ensure the integrity and confidentiality of data. By leveraging these tools, organizations can fortify their LLM deployments against potential threats and vulnerabilities, ensuring a secure and compliant operational environment. The table below lists some of the leading companies and platforms that specialize in securing LLM deployments, highlighting their key features and capabilities.


      Companies Enabling Secure LLM Deployment

      CompanySolutionFeaturesWebsite
      THEOMSecure LLM Deployment PlatformRole-based access, encryption, real-time monitoringTHEOM
      Premcloud LLM Data FeederSecure Data FeederSecure data ingestion, RBAC, compliance with data privacy regulationsPremcloud
      DatabricksUnified Analytics PlatformScalable data processing, security compliance, integration with AzureDatabricks
      AWS SageMaker


      Machine Learning ServiceSecure deployment, monitoring, encryption, and complianceAWS SageMaker
      Microsoft Azure AI


      AI PlatformSecurity and compliance, integration with Azure services, RBACAzure AI
      Google Cloud AI


      AI and Machine Learning ServicesRobust security features, scalability, integration with Google CloudGoogle Cloud AI
      GiskardLLMOps Management PlatformCentralized quality and security management, automated testingGiskard
      LakeraLLM Security SolutionsRed-teaming, risk assessment, differential privacyLakera
      WhyLabsAI Performance and Security PlatformReal-time monitoring, explainable AI, adversarial testingWhyLabs
      CAI PlatformsAutonomous Generative AI PlatformRole-based control, centralized authority, seamless integration with AzureCAI Platforms

      The CDO TIMES Bottom Line

      Deploying Retrieval-Augmented Generation (RAG) Language Learning Models (LLMs) securely, whether internally hosted or on public platforms, requires meticulous planning and robust security measures. Here are the key takeaways:

      1. Role-Based Access Control (RBAC): Implementing RBAC is essential to ensure that only authorized personnel have access to sensitive data and model functionalities. This minimizes the risk of data breaches and unauthorized access, crucial for both internal and public deployments.
      2. Secure Hosting Environments: Whether hosting internally or using cloud platforms like Azure with Databricks, isolating the LLM environment, encrypting data, and applying regular security updates are fundamental. This ensures that sensitive information remains protected and the system remains resilient to attacks.
      3. Continuous Monitoring and Audits: Real-time monitoring using Security Information and Event Management (SIEM) systems, regular security audits, and having an incident response plan in place are vital to detect and mitigate potential security threats swiftly.
      4. Data Privacy and Compliance: Public LLM deployments must prioritize data privacy, ensuring compliance with regulations such as GDPR and CCPA. This involves anonymizing user data and implementing strict access controls.
      5. Model Misuse Prevention: Safeguards to prevent the misuse of LLMs, such as filtering out harmful content and monitoring for abuse patterns, are crucial for maintaining the integrity and trustworthiness of the deployed models.

      As organizations continue to leverage AI, maintaining a strong security posture will be paramount to their success. The integration of comprehensive security measures and the utilization of robust platforms ensure that generative AI deployments are both efficient and secure.

      Love this article? Embrace the full potential and become an esteemed full access member, experiencing the exhilaration of unlimited access to captivating articles, exclusive non-public content, empowering hands-on guides, and transformative training material. Unleash your true potential today!

      Order the AI + HI = ECI book by Carsten Krause today! at cdotimes.com/book

      Subscribe on LinkedIn: Digital Insider

      Become a paid subscriber for unlimited access, exclusive content, no ads: CDO TIMES

      Do You Need Help?

      Consider bringing on a fractional CIO, CISO, CDO or CAIO from CDO TIMES Leadership as a Service. The expertise of CDO TIMES becomes indispensable for organizations striving to stay ahead in the digital transformation journey. Here are some compelling reasons to engage their experts:

      1. Deep Expertise: CDO TIMES has a team of experts with deep expertise in the field of Cybersecurity, Digital, Data and AI and its integration into business processes. This knowledge ensures that your organization can leverage digital and AI in the most optimal and innovative ways.
      2. Strategic Insight: Not only can the CDO TIMES team help develop a Digital & AI strategy, but they can also provide insights into how this strategy fits into your overall business model and objectives. They understand that every business is unique, and so should be its Digital & AI strategy.
      3. Future-Proofing: With CDO TIMES, organizations can ensure they are future-proofed against rapid technological changes. Our experts stay abreast of the latest AI, Data and digital advancements and can guide your organization to adapt and evolve as the technology does.
      4. Risk Management: Implementing a Digital & AI strategy is not without its risks. The CDO TIMES can help identify potential pitfalls and develop mitigation strategies, helping you avoid costly mistakes and ensuring a smooth transition with fractional CISO services.
      5. Competitive Advantage: Finally, by hiring CDO TIMES experts, you are investing in a competitive advantage. Their expertise can help you speed up your innovation processes, bring products to market faster, and stay ahead of your competitors.

      By employing the expertise of CDO TIMES, organizations can navigate the complexities of digital innovation with greater confidence and foresight, setting themselves up for success in the rapidly evolving digital economy. The future is digital, and with CDO TIMES, you’ll be well-equipped to lead in this new frontier.

      Subscribe now for free and never miss out on digital insights delivered right to your inbox!

      Carsten Krause

      I am Carsten Krause, CDO, founder and the driving force behind The CDO TIMES, a premier digital magazine for C-level executives. With a rich background in AI strategy, digital transformation, and cyber security, I bring unparalleled insights and innovative solutions to the forefront. My expertise in data strategy and executive leadership, combined with a commitment to authenticity and continuous learning, positions me as a thought leader dedicated to empowering organizations and individuals to navigate the complexities of the digital age with confidence and agility. The CDO TIMES publishing, events and consulting team also assesses and transforms organizations with actionable roadmaps delivering top line and bottom line improvements. With CDO TIMES consulting, events and learning solutions you can stay future proof leveraging technology thought leadership and executive leadership insights. Contact us at: info@cdotimes.com to get in touch.

      Leave a Reply