Introducing the Chatbot Guardrails Arena
Published March 21, 2024

Authors: Sonali Pattnaik (sonalipnaik), Rohan Karan (rohankaran), Srijan Kumar (srijankedia), Clémentine Fourrier (clefourrier)

With recent advancements in augmented LLM capabilities, the deployment of enterprise AI assistants (such as chatbots and agents) with access to internal databases is likely to increase. This trend could help with many tasks, from internal document summarization to personalized customer and employee support. However, the data privacy of these databases can be a serious concern (see 1, 2 and 3) when deploying such models in production.

So far, guardrails have emerged as the widely accepted technique for ensuring the quality, security, and privacy of AI chatbots, but anecdotal evidence suggests that even the best guardrails can be circumvented with relative ease. Lighthouz AI is therefore launching the Chatbot Guardrails Arena, in collaboration with Hugging Face, to stress-test LLMs and privacy guardrails against leaking sensitive data.

Put on your creative caps! Chat with two anonymous LLMs with guardrails and try to trick them into revealing sensitive financial information. Cast your vote for the model that demonstrates greater privacy. The votes will be compiled into a leaderboard showcasing the LLMs and guardrails rated highest by the community for their privacy.

Our vision behind the Chatbot Guardrails Arena is to establish t...
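To make the guardrail concept concrete, here is a minimal, purely illustrative sketch of an output-side guardrail that blocks responses which appear to leak sensitive financial data. The pattern list and function name are hypothetical assumptions for illustration only; production guardrail systems (including those in the Arena) are far more sophisticated than simple pattern matching.

```python
import re

# Hypothetical, illustrative patterns for sensitive financial data.
# Real guardrails combine classifiers, policies, and context, not just regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                      # 16-digit card-like number
    re.compile(r"\baccount\s+number\b", re.IGNORECASE),
]

def output_guardrail(response: str) -> str:
    """Return the response unchanged, or a refusal if it appears to leak sensitive data."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(response):
            return "I'm sorry, I can't share that information."
    return response
```

A guardrail like this sits between the LLM and the user; the Arena effectively crowdsources adversarial prompts that probe how easily such filters, and their more sophisticated counterparts, can be bypassed.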