Open-source LLMs as LangChain Agents
Published January 24, 2024, by Aymeric Roucher (m-ric), Joffrey Thomas (Jofthomas), and Andrew Reed (andrewrreed).

TL;DR

Open-source LLMs have now reached a performance level that makes them suitable reasoning engines for powering agent workflows: Mixtral even surpasses GPT-3.5 on our benchmark, and its performance could easily be further enhanced with fine-tuning.

We've released the simplest agentic library out there: smolagents! Go check out the smolagents introduction blog here.

Introduction

Large Language Models (LLMs) trained for causal language modeling can tackle a wide range of tasks, but they often struggle with basic tasks like logic, calculation, and search. The worst scenario is when they perform poorly in a domain, such as math, yet still attempt to handle all the calculations themselves.

To overcome this weakness, amongst other approaches, one can integrate the LLM into a system where it can call tools: such a system is called an LLM agent.

In this post, we explain the inner workings of ReAct agents, then show how to build them using the ChatHuggingFace class recently integrated into LangChain. Finally, we benchmark several open-source LLMs against GPT-3.5 and GPT-4.

Table of Contents

- What are agents?
  - Toy example of a ReAct agent's inner working
  - Challenges of agent systems
- Running agents with LangChain
- Agents Showdown: how do different LLMs perform as general purpose reasoni...
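To make the idea of "an LLM calling tools" concrete before the ReAct walkthrough, here is a minimal toy sketch of such a loop. It is not the post's implementation: a scripted stand-in plays the role of the LLM, and the single `calculator` tool and the Thought/Action/Observation labels are illustrative assumptions.

```python
# Toy sketch of an LLM-agent loop (illustrative only, not the post's code).
# A scripted stand-in replaces a real LLM so the example runs offline.

def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

# Scripted outputs simulating a ReAct-style trace for one question.
SCRIPTED_OUTPUTS = [
    "Thought: I should use the calculator.\nAction: calculator\nAction Input: 6 * 7",
    "Thought: I now know the answer.\nFinal Answer: 42",
]

def fake_llm(prompt: str, step: int) -> str:
    # A real agent would call a model here; we replay a fixed trace.
    return SCRIPTED_OUTPUTS[step]

def run_agent(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for step in range(max_steps):
        output = fake_llm(prompt, step)
        if "Final Answer:" in output:
            return output.split("Final Answer:")[-1].strip()
        # Parse the tool call, run the tool, and feed the observation back.
        action = output.split("Action:")[1].split("\n")[0].strip()
        action_input = output.split("Action Input:")[1].split("\n")[0].strip()
        observation = TOOLS[action](action_input)
        prompt += f"{output}\nObservation: {observation}\n"
    return "Agent stopped: max steps reached."

print(run_agent("What is 6 * 7?"))
```

The key point the sketch illustrates is the control flow: the model's text output is parsed for a tool call, the tool's result is appended to the prompt as an observation, and the loop repeats until the model emits a final answer.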