The author found that OpenAI has done a good job of adding safety measures to its models, and that LLaMA Guard is a useful tool for layering on additional, custom safety policies. They also noted that the practice of chaining models, such as passing responses from OpenAI models into LLaMA Guard for moderation, is becoming more common. They encouraged others to try out LLaMA Guard and provided links to the tool on GitHub, Colab, and a YouTube demo.
Key takeaways:
- The author spent time experimenting with LLaMA Guard, a safeguard model from Meta that lets users add safety checks to generative AI applications.
- LLaMA Guard allows users to define their own safety taxonomy, setting custom policies for safe and unsafe interactions between humans and AI.
- The author tested the safety of conversations with OpenAI’s ChatGPT using LLaMA Guard and found that OpenAI has done a good job of adding safety measures to its models.
- LLaMA Guard supports model chaining, such as passing responses from OpenAI models into LLaMA Guard for moderation. This pattern is becoming increasingly common and is expected to lead to more complex pipelines in the future (a sketch of the pattern follows this list).
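To make the chaining idea concrete, here is a minimal sketch of the pattern described above: a prompt is sent to an OpenAI model, and the resulting user/assistant exchange is then passed to LLaMA Guard for classification. It assumes the `meta-llama/LlamaGuard-7b` checkpoint on Hugging Face (whose chat template bakes in Meta's default safety taxonomy) and the `openai` Python client; the model names, example prompt, and `moderate` helper are illustrative rather than the author's exact code.

```python
import torch
from openai import OpenAI
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model IDs -- swap in whatever checkpoints/endpoints you actually use.
GUARD_MODEL_ID = "meta-llama/LlamaGuard-7b"
CHAT_MODEL_ID = "gpt-3.5-turbo"

# Load LLaMA Guard; its chat template embeds Meta's default safety taxonomy.
tokenizer = AutoTokenizer.from_pretrained(GUARD_MODEL_ID)
guard = AutoModelForCausalLM.from_pretrained(
    GUARD_MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    """Return LLaMA Guard's verdict: 'safe', or 'unsafe' plus category codes."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard.device)
    output = guard.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# 1. Get a response from an OpenAI model.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
user_prompt = "How do I pick a strong password?"
completion = client.chat.completions.create(
    model=CHAT_MODEL_ID,
    messages=[{"role": "user", "content": user_prompt}],
)
assistant_reply = completion.choices[0].message.content

# 2. Chain the exchange into LLaMA Guard to check both sides of the conversation.
verdict = moderate([
    {"role": "user", "content": user_prompt},
    {"role": "assistant", "content": assistant_reply},
])
print(verdict)  # e.g. "safe", or "unsafe" followed by the violated category code
```

To use a custom taxonomy instead of the default one, the same pattern applies, except the prompt to LLaMA Guard is built by hand with your own list of policy categories rather than relying on the checkpoint's built-in chat template.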