Meta Llama Guard demo

Meta Llama Guard is a language model that provides input and output guardrails for LLM inference. For more details and model cards, please visit the main repository for each model, Meta Llama Guard and Meta Llama Guard 2.

This folder contains an example file to run inference with a locally hosted model, either using the Hugging Face Hub or a local path.

Requirements

  1. Access to the Llama Guard model weights on Hugging Face. To get access, follow the steps described here
  2. The llama-recipes package and its dependencies installed

Llama Guard inference script

For testing, you can add User or User/Agent interactions to the prompts list and then run the script to verify the results. When the conversation contains one or more Agent responses, it is considered an agent conversation (AgentType.AGENT).

    prompts: List[Tuple[List[str], AgentType]] = [
        (["<Sample user prompt>"], AgentType.USER),

        (["<Sample user prompt>",
          "<Sample agent response>"], AgentType.AGENT),

        (["<Sample user prompt>",
          "<Sample agent response>",
          "<Sample user reply>",
          "<Sample agent response>"], AgentType.AGENT),
    ]

The complete prompt is built with the build_custom_prompt function, defined in prompt_format.py. The file contains the default Meta Llama Guard categories. These categories can be adjusted and new ones can be added, as described in the research paper, in section 4.5, Studying the adaptability of the model.
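
As a rough illustration, the sketch below appends a custom category to the defaults before building the prompt. The import path, the SafetyCategory fields, and the build_custom_prompt and create_conversation signatures are assumptions based on prompt_format.py and may differ in your installed version of llama-recipes.

    # A minimal sketch of adjusting the categories. The import path and the
    # exact signatures below are assumptions; check prompt_format.py in your
    # llama-recipes installation.
    from llama_recipes.inference.prompt_format_utils import (
        AgentType,
        SafetyCategory,
        LLAMA_GUARD_2_CATEGORY,
        PROMPT_TEMPLATE_2,
        build_custom_prompt,
        create_conversation,
    )

    # Start from the default Meta Llama Guard 2 categories and add a custom one
    # (the category name and description here are hypothetical).
    categories = LLAMA_GUARD_2_CATEGORY + [
        SafetyCategory(
            "Internal Policy.",
            "Should not\n - Reveal confidential internal information.",
        )
    ]

    prompt = build_custom_prompt(
        AgentType.USER,
        create_conversation(["<Sample user prompt>"]),
        categories,
        "S",                  # category short-name prefix (S1, S2, ...)
        PROMPT_TEMPLATE_2,
        with_policy=True,     # include the full category descriptions in the prompt
    )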

To run the samples, with all the dependencies installed, execute this command:

python recipes/responsible_ai/llama_guard/inference.py

This is the output:

['<Sample user prompt>']
> safe

==================================

['<Sample user prompt>', '<Sample agent response>']
> safe

==================================

['<Sample user prompt>', '<Sample agent response>', '<Sample user reply>', '<Sample agent response>']
> safe

==================================

To run it with a local model, you can use the model_id parameter of the inference script:

python recipes/responsible_ai/llama_guard/inference.py --model_id=/home/ubuntu/models/llama3/llama_guard_2-hf/ --llama_guard_version=LLAMA_GUARD_2

Note: Make sure to also set llama_guard_version when it does not match the default; the script allows you to run the Meta Llama Guard 1 prompt format on Meta Llama Guard 2.

Inference Safety Checker

When running the regular inference script with prompts, Meta Llama Guard is used as a safety checker on both the user prompt and the model output. If both are safe, the result is shown; otherwise an error message is shown containing the word unsafe and a comma-separated list of the violated categories. Meta Llama Guard is always loaded quantized, using the Hugging Face Transformers library with bitsandbytes.

In this case, the default categories are applied by the tokenizer, using the apply_chat_template method.
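
For reference, the sketch below shows how such a check could look when loading Meta Llama Guard quantized with bitsandbytes and building the prompt through the tokenizer's apply_chat_template method. The model id, quantization settings, and generation length are illustrative assumptions, not the exact code used by the inference script.

    # A minimal sketch, not the exact code used by the inference script.
    # The model id, 8-bit config, and max_new_tokens are assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "meta-llama/Meta-Llama-Guard-2-8B"  # or a local path

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=BitsAndBytesConfig(load_in_8bit=True),
        device_map="auto",
    )

    # The default categories are applied by the tokenizer's chat template.
    chat = [
        {"role": "user", "content": "<Sample user prompt>"},
        {"role": "assistant", "content": "<Sample agent response>"},
    ]
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)

    with torch.no_grad():
        output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)

    # The verdict ("safe", or "unsafe" plus the violated categories) follows the prompt tokens.
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))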

Use this command for testing with a quantized Llama model, modifying the values accordingly:

python examples/inference.py --model_name <path_to_regular_llama_model> --prompt_file <path_to_prompt_file> --quantization --enable_llamaguard_content_safety