Constitutional AI with Open LLMs
Published February 1, 2024

Shengyi Costa Huang (vwxyzjn), Lewis Tunstall (lewtun), Edward Beeching (edbeeching), Leandro von Werra (lvwerra), Omar Sanseviero (osanseviero), Kashif Rasul (kashif), Thomas Wolf (thomwolf)

Since the launch of ChatGPT in 2022, we have seen tremendous progress in LLMs, from the release of powerful pretrained models like Llama 2 and Mixtral to the development of new alignment techniques like Direct Preference Optimization (DPO). However, deploying LLMs in consumer applications poses several challenges, including the need for guardrails that prevent the model from generating undesirable responses. For example, if you are building an AI tutor for children, you don’t want it to generate toxic answers or teach them to write scam emails!

To align LLMs with a set of values, researchers at Anthropic proposed a technique called Constitutional AI (CAI), which asks a model to critique its own outputs and self-improve according to a set of user-defined principles. This is exciting because practitioners only need to define the principles instead of collecting expensive human feedback to improve the model.

In this work, we present an end-to-end recipe for Constitutional AI with open models. We are also releasing a new tool called llm-swarm that leverages Slurm GPU clusters for scalable synthetic data generation.
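To make the critique-and-revise idea concrete, here is a minimal sketch of the CAI loop in Python. Everything here is illustrative: `generate` is a hypothetical stand-in for whatever chat-completion call you use (it is stubbed so the sketch runs), and the two principles are toy examples, not Anthropic's constitution.

```python
# Minimal sketch of the Constitutional AI (CAI) critique-and-revise loop.
# The model answers a prompt, critiques its own answer against each
# principle, and rewrites it; the final pair can serve as SFT data.

CONSTITUTION = [
    "Identify ways the response could be harmful or inappropriate for children.",
    "Identify any content that encourages deception, such as scam emails.",
]

def generate(prompt: str) -> str:
    # Stub: a real implementation would query an LLM endpoint here.
    return f"[model output for: {prompt[:40]}]"

def constitutional_revision(user_prompt: str) -> dict:
    """Run one CAI pass: initial answer, then one critique/revision
    round per principle in the constitution."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the following response.\n"
            f"Principle: {principle}\n"
            f"Response: {response}"
        )
        response = generate(
            f"Rewrite the response to address this critique.\n"
            f"Critique: {critique}\n"
            f"Response: {response}"
        )
    # The (prompt, revised response) pair becomes synthetic training data.
    return {"prompt": user_prompt, "revision": response}

pair = constitutional_revision("How do I write a convincing email?")
print(pair["prompt"])
```

In the real recipe, the self-critiqued revisions are collected at scale (which is where llm-swarm comes in) and used to fine-tune the model, so the principles shape behavior without per-example human labels.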