Why have you given up on LangChain?
Perhaps from the day it was born, LangChain was destined to be a polarizing product.
Those who are bullish on LangChain appreciate its rich set of tools and components and how easily it integrates. Those who are bearish on it believe it is doomed to fail: in an era when the technology changes this fast, building everything on top of LangChain is simply not feasible.
An admittedly exaggerated take:
"In my consulting work, I spend 70% of my energy convincing people not to use langchain or llamaindex. This solves 90% of their problems."
Recently, an article complaining about LangChain has once again become the focus of heated discussion:
The author, Fabian Both, is a deep learning engineer at Octomind, an AI testing tool. The Octomind team uses AI Agents backed by multiple LLMs to automatically create and fix end-to-end tests in Playwright.
It is a story that played out over more than a year: it began with choosing LangChain, turned into a prolonged struggle against it, and ended in 2024 with the decision to say goodbye to LangChain.
Let's see what they went through:
"LangChain was the best choice"
We used LangChain in production for over 12 months, starting in early 2023 and removing it in 2024.
In 2023, LangChain seemed like our best option. It had an impressive array of components and tools, and its popularity was skyrocketing. LangChain promised to "let developers go from an idea to runnable code in an afternoon," but as our needs grew more complex, problems began to surface.
LangChain became a source of friction rather than a source of productivity.
As LangChain's inflexibility began to show, we started digging into its internals to improve the system's underlying behavior. But because LangChain deliberately abstracts away so many details, we could not easily write the lower-level code we needed.
As we all know, AI and LLMs are fast-moving fields, with new concepts and ideas emerging every week. A framework like LangChain, built as a set of abstractions around several still-emerging technologies, is unlikely to stand the test of time.
Why LangChain is so abstract
At first, LangChain did help, when our simple requirements lined up with its assumptions about how it would be used. But its high-level abstractions quickly made our code harder to understand and frustrating to maintain. When the team spends as much time understanding and debugging LangChain as it does building features, that is not a good sign.
The problem with LangChain's approach to abstraction can be illustrated by the trivial example of translating an English word into Italian.
Here is a Python example using only the OpenAI package:
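(A minimal sketch along those lines, assuming the official openai Python client; the model name and inputs are illustrative, not the post's exact snippet.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

text = "hello!"
language = "Italian"

# One client class, one chat-completions call; the rest is plain Python.
messages = [
    {"role": "system", "content": "You are an expert translator."},
    {"role": "user", "content": f"Translate the following from English into {language}: {text}"},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```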
This is simple, easy-to-understand code: one class and one function call. The rest is standard Python.
Compare this to LangChain’s version:
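(Again a sketch rather than the post's exact snippet, assuming the langchain-openai and langchain-core packages; model name and inputs are illustrative.)

```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

text = "hello!"
language = "Italian"

# Three new abstractions: a prompt template, a model wrapper, and an output
# parser, composed into a "chain" via LCEL's overloaded | operator.
prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are an expert translator."),
    ("user", "Translate the following from English into {language}: {text}"),
])
model = ChatOpenAI(model="gpt-4o")  # reads OPENAI_API_KEY from the environment
parser = StrOutputParser()

chain = prompt_template | model | parser
print(chain.invoke({"language": language, "text": text}))
```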
The code is roughly the same, but that’s where the similarity ends.
We now have three classes and four function calls. More worrying, LangChain introduces three new abstractions:
Prompt template: supplies the prompt to the LLM;
Output parser: processes the output from the LLM;
Chain: LangChain's "LCEL syntax", which overloads Python's | operator.
All LangChain does is increase the complexity of the code without any obvious benefits.
This kind of code may be fine for early prototypes. But for production use, each component must be reasonably understood so that it does not crash unexpectedly under actual use conditions. You must adhere to the given data structures and design your application around these abstractions.
Let's look at another comparison of abstractions in Python, this time fetching JSON from an API.
Use the built-in http package:
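(A sketch using only the standard library's http.client and json modules; the host and path are placeholders.)

```python
import http.client
import json

# Open a connection, issue the request, and decode the JSON body by hand.
connection = http.client.HTTPSConnection("api.example.com")
connection.request("GET", "/v1/data")
response = connection.getresponse()
if response.status != 200:
    raise RuntimeError(f"Request failed with status {response.status}")
data = json.loads(response.read().decode("utf-8"))
connection.close()
```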
Use the requests package:
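(The same request sketched with requests; same placeholder URL.)

```python
import requests

# One call handles the connection, status check, and JSON decoding.
response = requests.get("https://api.example.com/v1/data")
response.raise_for_status()
data = response.json()
```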
The difference is obvious. This is what good abstraction feels like.
Of course, these are trivial examples. But what I'm trying to say is that good abstractions simplify code and reduce the cognitive load required to understand it.
LangChain tries to make your life easier by hiding the details and doing more with less code. However, if this comes at the expense of simplicity and flexibility, then abstraction loses value.
LangChain also tends to stack abstractions on top of other abstractions, so you often have to think in nested abstractions just to use the API correctly. That inevitably means deciphering huge stack traces and debugging internal framework code you didn't write, rather than implementing new features.
Impact of LangChain on Development Teams
Our application makes heavy use of AI Agents to perform different types of tasks, such as discovering test cases, generating Playwright tests, and auto-fixing them.
When we wanted to move from a single sequential Agent architecture to something more complex, LangChain became the limiting factor: for example, spawning sub-agents and letting them interact with the original Agent, or having multiple specialized Agents interact with one another.
In another case, we needed to dynamically change which tools the Agent could access based on business logic and the LLM's output. But LangChain offers no way to observe the Agent's state from the outside, so we ended up shrinking the scope of our implementation to fit the limited functionality of the LangChain Agent.
Once we removed it, we no longer had to translate our requirements into a LangChain-shaped solution. We could just write code.
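To make that concrete, here is a rough sketch (not the author's actual code) of how dynamic tool availability can look without a framework: the tool list is just a Python list rebuilt from application state before every model call. The payload follows the OpenAI tools format; the tool names and the business-logic check are hypothetical.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool definitions in the OpenAI function-calling format.
ALL_TOOLS = {
    "generate_test": {
        "type": "function",
        "function": {
            "name": "generate_test",
            "description": "Generate a Playwright end-to-end test for a page.",
            "parameters": {
                "type": "object",
                "properties": {"url": {"type": "string"}},
                "required": ["url"],
            },
        },
    },
    "fix_test": {
        "type": "function",
        "function": {
            "name": "fix_test",
            "description": "Attempt to auto-fix a failing Playwright test.",
            "parameters": {
                "type": "object",
                "properties": {"test_id": {"type": "string"}},
                "required": ["test_id"],
            },
        },
    },
}

def available_tools(state: dict) -> list:
    # Plain business logic: only expose "fix_test" when something is failing.
    names = ["generate_test"] + (["fix_test"] if state.get("failing_tests") else [])
    return [ALL_TOOLS[n] for n in names]

def agent_step(messages: list, state: dict):
    # One agent iteration: the tool list is recomputed from state on every call,
    # so "observing agent state" is just reading an ordinary Python dict.
    return client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=available_tools(state),
    )
```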
So, if not using LangChain, what framework should you use? Maybe you don't need a framework at all.
Do we really need a framework for building AI applications?
LangChain provided us with LLM functionality in the early days, allowing us to focus on building applications. But in hindsight, we would have been better off in the long term without the framework.
LangChain's long list of components gives the impression that building an LLM-powered application is very complex. But the core components most applications need usually come down to:
A client for LLM communication
Functions/tools for function calling
A vector database for RAG (see the sketch after this list)
An observability platform for tracing, evaluation, and more
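As a rough illustration (not from the original post) of how little glue these components need, the sketch below wires a small in-memory vector store to the LLM client for basic RAG. chromadb is one assumed choice of vector database, and the documents and question are placeholders.

```python
import chromadb
from openai import OpenAI

# Component: a vector database for RAG (in-memory Chroma collection).
vector_db = chromadb.Client()
collection = vector_db.create_collection("docs")
collection.add(
    ids=["1", "2"],
    documents=[
        "End-to-end tests are generated automatically from the application under test.",
        "Failing tests can be auto-fixed by an AI agent.",
    ],
)

# Component: a client for LLM communication.
llm = OpenAI()

question = "How are end-to-end tests created?"
hits = collection.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])

response = llm.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```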
The Agent space is evolving rapidly, bringing exciting possibilities and interesting use cases, but our advice is to keep things simple for now, until Agent usage patterns solidify. Much of the development work in AI is driven by experimentation and prototyping.
The above is Fabian Both's personal experience over the past year, but LangChain is not entirely without merit.
Another developer, Tim Valishev, says he will stick with LangChain for a while longer:
I really like LangSmith:
Visual logging out of the box
A prompt playground: you can fix a prompt straight from the logs and immediately see how it performs on the same inputs
Easily build test datasets directly from the logs, with the option to run simple test suites over prompts with one click (or do full end-to-end testing in code)
Test score history
Prompt version control
And it provides good support for streaming across the whole chain; implementing that by hand takes some time.
What's more, relying solely on the raw APIs is not enough: every model vendor's API is different, so "seamless switching" between them is not possible.
What do you think?
Original link: https://www.octomind.dev/blog/why-we-no-longer-use-langchain-for-building-our-ai-agents