From the course: Building Secure and Trustworthy LLMs Using NVIDIA Guardrails

Unlock the full course today

Join today to access over 24,000 courses taught by industry experts.

Best practices for implementing guardrails

Best practices for implementing guardrails

- It's time now to take a look at some best practices for implementing guardrails. By the end of this session, you will have a comprehensive understanding of how to deploy LLMs in a responsible and effective manner, and also some of the risks involved while doing so. So let's dive right in. Let's start by understanding prompt injection, a critical security threat to LLMs. Prompt injection occurs when an adversary manipulates the input to control the LLM's output. There are two main types of prompt injection attacks. First, direct prompt injection, where the attacker directly interacts with the LLM to produce a specific response. Second, indirect prompt injection, which involves manipulating external data sources that the LLM uses causing it to generate malicious output. Both of these can have severe consequences affecting downstream processes and user interactions. By implementing guardrails, we can mitigate some of these risks. Here's a diagram illustrating a direct prompt injection…

Contents