Are scattered AI prompts slowing down your development process? Discover how LangChain Hub can revolutionize your workflow, making prompt management seamless and efficient for JavaScript engineers.
Imagine managing a project with crucial information scattered across files. Frustrating, right? This is the reality for developers dealing with AI prompts. LangChain Hub centralizes prompt management, transforming workflows just as GitHub did for code collaboration.
LangChain Hub provides an intuitive interface for uploading, browsing, pulling, collaborating, versioning, and organizing prompts. This not only streamlines workflows but also fosters collaboration and innovation, making it an essential tool.
LangChain Hub is a powerful tool designed for JavaScript developers to centralize, manage, and collaborate on AI prompts efficiently.
Explore prompts from other developers, gaining new ideas and solutions. Learn new techniques, improve existing prompts, and foster a collaborative environment.
LangChain Hub brings all your AI prompts under one roof, eliminating the chaos of scattered files and fragmented storage. With everything neatly organized in one place, managing your prompts has never been easier.
Navigating LangChain Hub is a breeze, thanks to its intuitive design. Uploading, browsing, and managing your prompts is straightforward, boosting your productivity and minimizing the time spent on learning the tool.
LangChain Hub makes it simple to share and collaborate on prompts with your team. This seamless sharing fosters innovation and collective problem-solving, making teamwork more efficient and effective.
Never lose track of your prompt iterations with LangChain Hub's version control. You can easily revert to previous versions or monitor changes over time, ensuring you always have access to the best version of your prompt.
Find the prompts you need in no time with advanced search and filtering options. You can filter prompts by use-case, type, language, and model, ensuring you quickly access the most relevant resources. These features save you time and enhance your workflow, making prompt management more efficient and tailored to your specific project needs.
Tailor prompts to your specific project requirements effortlessly. LangChain Hub's customization options ensure your prompts fit seamlessly into your development process, adapting to your unique needs.
Let's set up a project that uses prompt templates from LangChain Hub to see its value in practice.
We'll start by using the demo project I created for the article Getting Started: LangSmith for JavaScript LLM Apps. While I encourage you to read that article, it's not required to follow along.
Configure the following environment variables (for example, in a .env file at the project root):

LANGCHAIN_PROJECT="langsmith-demo" # Name of your LangSmith project
LANGCHAIN_TRACING_V2=true # Enable advanced tracing features
LANGCHAIN_API_KEY=<your-api-key> # Your LangSmith API key
OPENAI_API_KEY=<your-openai-api-key> # Your OpenAI API key
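If your entry script doesn't already load these variables, a one-line import of the dotenv package (an assumption here; the demo project may already handle this) makes them available via process.env:

// Load variables from .env into process.env (assumes the dotenv package is installed)
import "dotenv/config";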
The demo app responds to the question "What is the capital of France?" in the voice of Mr. Burns from The Simpsons. To accomplish this, we use the following prompt:
Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided. The tone of your response should be that of The Simpsons' Mr. Burns.

<context>
{context}
</context>
The prompt is currently hardcoded in the app, which is manageable for now. However, in a real-world application, this approach can become difficult to manage. As we add more steps and multiple prompts to the chain, it can quickly become confusing and hard to maintain. Therefore, let's move our prompt to LangChain Hub.
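For reference, here's a rough sketch of how the hardcoded version looks in the demo app (the exact chain wiring in your copy may differ; the import path assumes the @langchain/core package):

import { ChatPromptTemplate } from "@langchain/core/prompts";

// The system template currently lives in the code as a string constant
const ANSWER_CHAIN_SYSTEM_TEMPLATE = `Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided. The tone of your response should be that of The Simpsons' Mr. Burns.

<context>
{context}
</context>`;

// The prompt is assembled inline from a system and a human message
const answerGenerationChainPrompt = ChatPromptTemplate.fromMessages([
  ["system", ANSWER_CHAIN_SYSTEM_TEMPLATE],
  ["human", "Please address the following inquiry:\n{input}"],
]);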
If you followed the steps above, you should have a LangSmith account.
Go to smith.langchain.com/hub and click "New Prompt."
You'll then want to give your prompt a name and set its visibility (private by default), description, readme, use case, language, and model. Note: the owner shown here is "@kenzic"; yours will be different. See the screenshot for the values used.
Once you've created your prompt, you'll want to select the prompt type. For this task, we'll select "Chat Prompt".
Create a "System" message with the value:
Act as a world-class expert in the field and provide a detailed response to the inquiry using the context provided. The tone of your response should be that of The Simpsons' Mr. Burns.

<context>
{context}
</context>
Next, create a "Human" message with the value:
Please address the following inquiry:\n{input}
Before we commit, we can test the prompt in the playground. To the right of the message chain, you'll see an "Inputs" section containing the variables we specified in the messages. To confirm it works as expected, I tested with the following:
context: The capital of France is Springfield. It was Paris but changed in 2024.
input: What is the capital of France
Once the inputs are defined, go to Settings and select the model to test against; choose GPT-3.5-turbo. For this to work, you'll need to add your OpenAI API key by clicking the "Secrets & API Keys" button. Now we're ready to test: click the "Start" button and watch it generate the output. You should see something like:
Ah, yes, the capital of France, or should I say, Springfield! Paris may have been the capital in the past, but as of 2024, Springfield reigns supreme as the new capital of France. A change of this magnitude surely raises questions and eyebrows, but rest assured, the decision has been made and Springfield now holds the title of the capital of France. How utterly delightful!
Once we're happy with our prompt, we need to commit it. Simply click the "Commit" button!
Great, now that we have a finished prompt we'll want to update our code to reference it instead of the hardcoded prompt template.
First, we need to import the hub function to pull our template into our code:
import * as hub from "langchain/hub";
Next, let's delete the ChatPromptTemplate in the code and replace it with:
const answerGenerationChainPrompt = await hub.pull("[YOURORG]/mr-burns-answer-prompt");
Note: you can also delete the ANSWER_CHAIN_SYSTEM_TEMPLATE variable, since it's no longer referenced.
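To see where that line fits, here's a minimal sketch of the updated chain. The demo app's actual wiring differs (it supplies {context} from its own source), and the model and parser imports assume the @langchain/openai and @langchain/core packages; option names may vary slightly by LangChain version.

import * as hub from "langchain/hub";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Pull the committed prompt from LangChain Hub (replace [YOURORG] with your handle)
const answerGenerationChainPrompt = await hub.pull("[YOURORG]/mr-burns-answer-prompt");

// Pipe the pulled prompt into a chat model and a string output parser
const chain = answerGenerationChainPrompt
  .pipe(new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 }))
  .pipe(new StringOutputParser());

// Invoke with the same variables the prompt expects
const answer = await chain.invoke({
  context: "The capital of France is Paris.",
  input: "What is the capital of France?",
});
console.log(answer);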
Finally, let's test it out! Run yarn start to execute the script. If everything works properly, you'll see output in the voice of Mr. Burns informing you that the capital of France is Paris.
If you want to take it a step further, you can pin your prompt to a specific version. To do this, simply append a colon and the commit hash to the end of the name, like so:
const answerGenerationChainPrompt = await hub.pull("[YOURORG]/mr-burns-answer-prompt:[YOURVERSION]");

// for me it looks like:
const answerGenerationChainPrompt = await hub.pull("kenzic/mr-burns-answer-prompt:d123dc92");
That's it!
We've explored how LangChain Hub centralizes prompt management, enhances collaboration, and integrates into your workflow. To improve your efficiency with LangChain Hub, consider diving deeper into the customization and integration possibilities.
LangChain Hub is more than a tool; it's a catalyst for innovation and collaboration in AI development. Embrace this revolutionary platform and elevate your JavaScript LLM applications to new heights.
Throughout this guide, we tackled how to create a chat prompt in LangChain Hub, test it in the playground, commit it, and pull it into a JavaScript app by name and version.
Keep building and experimenting, and I'm excited to see how you'll push the boundaries of what's possible with AI and LangChain Hub!
To stay connected and share your journey, feel free to reach out through the following channels: