NeMo Guardrails, the Ultimate Open-Source LLM Security Toolkit | by Wenqi Glantz | Feb, 2024

Exploring NeMo Guardrails’ practical use cases

Wenqi Glantz
Towards Data Science
Image generated by DALL-E 3 by the author

On the topic of LLM security, we have so far explored OWASP top 10 for LLM applications, Llama Guard, and Lighthouz AI from different angles. In this article, we are going to explore NeMo Guardrails, an open-source toolkit developed by NVIDIA for easily adding programmable guardrails to LLM-based conversational systems.

How is NeMo Guardrails different from Llama Guard, which we dived into in a previous article? Let’s put them side by side and compare their key characteristics.

Table by author

As we can see, Llama Guard and NeMo Guardrails are fundamentally different:

  • Llama Guard is a large language model, fine-tuned from Llama 2, that acts as an input-output safeguard. It comes with six unsafe categories, and developers can customize those categories by adding additional unsafe categories to tailor input-output moderation to their use cases.
  • NeMo Guardrails is a much more comprehensive LLM security toolkit, offering a broader set of programmable guardrails to control and guide LLM inputs and outputs, including content moderation; topic guidance, which steers conversations toward specific topics; hallucination prevention, which reduces the generation of factually incorrect or nonsensical content; and response shaping.
Image source: NeMo Guardrails GitHub Repo README
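To make the idea of a programmable guardrail concrete, here is a minimal sketch of a topical rail written in Colang, the flow-definition language NeMo Guardrails uses alongside a YAML config. The example topic and the exact phrasings are illustrative, not from this article:

```
define user ask about politics
  "What do you think about the president?"
  "Which party should I vote for?"

define bot refuse to answer politics
  "I'm sorry, I can't discuss political topics."

define flow politics rail
  user ask about politics
  bot refuse to answer politics
```

At runtime, the toolkit matches incoming user messages against the canonical forms defined here and, when a flow triggers, steers the bot to the predefined response instead of letting the LLM answer freely.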

Let’s dive into the implementation details of how to add NeMo Guardrails to a RAG pipeline built with RecursiveRetrieverSmallToBigPack, an advanced retrieval pack from LlamaIndex. How does this pack work? It takes our document and breaks it down, starting with the larger sections (parent chunks) and chopping them up into smaller pieces (child chunks). It links each child chunk to its parent…
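The small-to-big idea behind the pack can be sketched in a few lines of plain Python. This is a toy illustration, not the LlamaIndex implementation: it matches queries against small child chunks (using naive word overlap in place of embeddings) but returns the larger parent chunk for fuller context. All function names below are made up for this sketch:

```python
def build_small_to_big_index(text, parent_words=12, child_words=4):
    """Split text into parent chunks, split each parent into smaller
    child chunks, and map every child back to its parent."""
    words = text.split()
    parents = [" ".join(words[i:i + parent_words])
               for i in range(0, len(words), parent_words)]
    child_to_parent = {}
    for parent in parents:
        parent_tokens = parent.split()
        for j in range(0, len(parent_tokens), child_words):
            child = " ".join(parent_tokens[j:j + child_words])
            child_to_parent[child] = parent
    return child_to_parent


def retrieve(query, child_to_parent):
    """Score each small child chunk by word overlap with the query,
    then return the big parent chunk of the best-matching child."""
    query_words = set(query.lower().split())
    def score(child):
        return len(query_words & set(child.lower().split()))
    best_child = max(child_to_parent, key=score)
    return child_to_parent[best_child]
```

Precise matching happens at the fine-grained child level, while generation gets the coarser parent chunk, which is exactly the trade-off the pack automates with real embeddings and node linking.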
