


Build an AI code review assistant with v0.dev, LiteLLM, and Agenta
This tutorial demonstrates how to build a production-ready AI pull request reviewer using LLMOps best practices. The final application accepts a public PR URL and returns an AI-generated review.
Application Overview
This tutorial covers:
- Code Development: Retrieving PR diffs from GitHub and leveraging LiteLLM for LLM interaction.
- Observability: Implementing Agenta for application monitoring and debugging.
- Prompt Engineering: Iterating on prompts and model selection using Agenta's playground.
- LLM Evaluation: Employing LLM-as-a-judge for prompt and model assessment.
- Deployment: Deploying the application as an API and creating a simple UI with v0.dev.
Core Logic
The AI assistant's workflow is simple: given a PR URL, it retrieves the diff from GitHub and submits it to an LLM for review.
GitHub diffs are accessed via:
```
https://patch-diff.githubusercontent.com/raw/{owner}/{repo}/pull/{pr_number}.diff
```
This Python function fetches the diff:
```python
def get_pr_diff(pr_url):
    # ... (construct the diff URL and fetch it; see the sketch below)
    return response.text
```
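The body above is elided, so here is a minimal sketch of such a fetch function, assuming the `requests` library and a PR URL in the standard `https://github.com/{owner}/{repo}/pull/{number}` form:

```python
import requests

def get_pr_diff(pr_url: str) -> str:
    """Fetch the unified diff for a public GitHub pull request.

    Assumes pr_url looks like https://github.com/{owner}/{repo}/pull/{number}.
    """
    parts = pr_url.rstrip("/").split("/")
    owner, repo, pr_number = parts[-4], parts[-3], parts[-1]
    diff_url = (
        f"https://patch-diff.githubusercontent.com/raw/{owner}/{repo}/pull/{pr_number}.diff"
    )
    response = requests.get(diff_url, timeout=30)
    response.raise_for_status()
    return response.text
```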
LiteLLM facilitates LLM interactions, offering a consistent interface across various providers.
prompt_system = """ You are an expert Python developer performing a file-by-file review of a pull request. You have access to the full diff of the file to understand the overall context and structure. However, focus on reviewing only the specific hunk provided. """ prompt_user = """ Here is the diff for the file: {diff} Please provide a critique of the changes made in this file. """ def generate_critique(pr_url: str): diff = get_pr_diff(pr_url) response = litellm.completion( model=config.model, messages=[ {"content": config.system_prompt, "role": "system"}, {"content": config.user_prompt.format(diff=diff), "role": "user"}, ], ) return response.choices[0].message.content
Implementing Observability with Agenta
Agenta enhances observability, tracking inputs, outputs, and data flow for easier debugging.
Initialize Agenta and configure LiteLLM callbacks:
```python
import agenta as ag

ag.init()
litellm.callbacks = [ag.callbacks.litellm_handler()]
```
Instrument functions with Agenta decorators:
```python
@ag.instrument()
def generate_critique(pr_url: str):
    # ... (function body unchanged)
    return response.choices[0].message.content
```
Set the `AGENTA_API_KEY` environment variable (obtained from Agenta) and, optionally, `AGENTA_HOST` if you are self-hosting.
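If you prefer to keep configuration in code during local experimentation, the same values can be set programmatically before calling `ag.init()`. A minimal sketch with placeholder values (not real credentials):

```python
import os

# Placeholder values for illustration only; use your own key, and set
# AGENTA_HOST only when pointing at a self-hosted Agenta instance.
os.environ["AGENTA_API_KEY"] = "your-agenta-api-key"
# os.environ["AGENTA_HOST"] = "https://agenta.your-company.com"

import agenta as ag

ag.init()  # picks up the environment variables set above
```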
Creating an LLM Playground
Agenta's custom workflow feature provides an IDE-like playground for iterative development. The following code snippet demonstrates the configuration and integration with Agenta:
```python
from pydantic import BaseModel, Field
from typing import Annotated

import agenta as ag
import litellm
from agenta.sdk.assets import supported_llm_models

# ... (previous code: prompts and get_pr_diff)

class Config(BaseModel):
    system_prompt: str = prompt_system
    user_prompt: str = prompt_user
    model: Annotated[str, ag.MultipleChoice(choices=supported_llm_models)] = Field(
        default="gpt-3.5-turbo"
    )

@ag.route("/", config_schema=Config)
@ag.instrument()
def generate_critique(pr_url: str):
    diff = get_pr_diff(pr_url)
    config = ag.ConfigManager.get_from_route(schema=Config)
    response = litellm.completion(
        model=config.model,
        messages=[
            {"content": config.system_prompt, "role": "system"},
            {"content": config.user_prompt.format(diff=diff), "role": "user"},
        ],
    )
    return response.choices[0].message.content
```
Serving and Evaluating with Agenta
- Run `agenta init`, specifying the app name and API key.
- Run `agenta variant serve app.py`.
This makes the application accessible through Agenta's playground for end-to-end testing. LLM-as-a-judge is used for evaluation. The evaluator prompt is:
```
You are an evaluator grading the quality of a PR review.

CRITERIA: ... (criteria elided)

ANSWER ONLY THE SCORE. DO NOT USE MARKDOWN. DO NOT PROVIDE ANYTHING OTHER THAN THE NUMBER.
```
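To make the judging step concrete, here is a hedged sketch of how an LLM-as-a-judge call could be wired up directly with LiteLLM. The prompt text, the 1-10 scale, and the `judge_review` helper are assumptions for illustration; in the tutorial the judge is configured inside Agenta's evaluation workflow rather than in code:

```python
JUDGE_SYSTEM_PROMPT = """You are an evaluator grading the quality of a PR review.
Score the review from 1 to 10. ANSWER ONLY THE SCORE."""

def judge_review(diff: str, review: str) -> int:
    """Ask a judge model to score a generated PR review (illustrative only)."""
    response = litellm.completion(
        model="gpt-4o",  # judge model; any LiteLLM-supported model could be used
        messages=[
            {"role": "system", "content": JUDGE_SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"PR diff:\n{diff}\n\nGenerated review:\n{review}",
            },
        ],
    )
    return int(response.choices[0].message.content.strip())
```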
Deployment and Frontend
Deployment is done through Agenta's UI:
- Navigate to the overview page.
- Click the three dots next to the chosen variant.
- Select "Deploy to Production".
A v0.dev frontend was used for rapid UI creation.
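Once a variant is deployed to production, Agenta exposes it behind an HTTP endpoint that the v0.dev frontend can call. The exact URL, route, payload shape, and authentication header depend on your deployment, so treat the following as a hedged sketch with placeholder values rather than the actual API contract:

```python
import requests

# Placeholder endpoint and payload; check your Agenta deployment for the real
# URL, route name, and authentication header format.
ENDPOINT = "https://cloud.agenta.ai/services/.../generate"  # hypothetical
API_KEY = "your-agenta-api-key"

payload = {"pr_url": "https://github.com/<owner>/<repo>/pull/<number>"}
response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=120,
)
response.raise_for_status()
print(response.json())
```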
Next Steps and Conclusion
Future improvements include refining the prompts, incorporating full code context, and handling large diffs (one approach is sketched below). This tutorial demonstrated how to build, evaluate, and deploy a production-ready AI pull request reviewer with Agenta and LiteLLM.
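Since the system prompt already asks for a file-by-file review, one natural way to handle large diffs is to split the unified diff into per-file chunks and review each chunk separately. A minimal sketch, not part of the original tutorial (the `generate_critique_for_chunk` helper is hypothetical):

```python
def split_diff_by_file(diff: str) -> list[str]:
    """Split a unified diff into one chunk per file using the 'diff --git' markers."""
    chunks, current = [], []
    for line in diff.splitlines():
        if line.startswith("diff --git") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

# Review each file separately to stay within the model's context window,
# e.g. with a hypothetical per-chunk variant of generate_critique:
# reviews = [generate_critique_for_chunk(chunk) for chunk in split_diff_by_file(diff)]
```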