In Part 1, we built the core analysis tools for our code reviewer. Now we'll create an AI assistant that can use these tools effectively. We'll go through each component step by step, explaining how everything works together.
For ClientAI's docs, see here; for the GitHub repo, see here.
First, we need to make our tools available to the AI system. Here's how we register them:
```python
def create_review_tools() -> List[ToolConfig]:
    """Create the tool configurations for code review."""
    return [
        ToolConfig(
            tool=analyze_python_code,
            name="code_analyzer",
            description=(
                "Analyze Python code structure and complexity. "
                "Expects a 'code' parameter with the Python code as a string."
            ),
            scopes=["observe"],
        ),
        ToolConfig(
            tool=check_style_issues,
            name="style_checker",
            description=(
                "Check Python code style issues. "
                "Expects a 'code' parameter with the Python code as a string."
            ),
            scopes=["observe"],
        ),
        ToolConfig(
            tool=generate_docstring,
            name="docstring_generator",
            description=(
                "Generate docstring suggestions for Python code. "
                "Expects a 'code' parameter with the Python code as a string."
            ),
            scopes=["act"],
        ),
    ]
```
Let's break down what's happening here:
Each tool is wrapped in a ToolConfig object that tells ClientAI which function to call (tool), what to call it (name), when it is useful (description), and in which workflow scopes it may run (scopes).
We classify our tools into two categories: "observe" tools (code_analyzer and style_checker), which inspect the code without changing anything, and "act" tools (docstring_generator), which produce new content.
Now let's create our AI assistant. We'll design it to work in steps, mimicking how a human code reviewer would think:
```python
class CodeReviewAssistant(Agent):
    """An agent that performs comprehensive Python code review."""

    @observe(
        name="analyze_structure",
        description="Analyze code structure and style",
        stream=True,
    )
    def analyze_structure(self, code: str) -> str:
        """Analyze the code structure, complexity, and style issues."""
        self.context.state["code_to_analyze"] = code
        return """
        Please analyze this Python code structure and style:

        The code to analyze has been provided in the context as 'code_to_analyze'.
        Use the code_analyzer and style_checker tools to evaluate:
        1. Code complexity and structure metrics
        2. Style compliance issues
        3. Function and class organization
        4. Import usage patterns
        """
```
This first method is crucial: it stores the incoming code in self.context.state so later steps and tools can reach it, and the string it returns becomes the prompt for this step. Because it is decorated with @observe, the agent may call the observe-scoped tools (code_analyzer and style_checker) here, and stream=True streams the model's output as it is generated.
Next, we add the improvement suggestion step:
````python
    @think(
        name="suggest_improvements",
        description="Suggest code improvements based on analysis",
        stream=True,
    )
    def suggest_improvements(self, analysis_result: str) -> str:
        """Generate improvement suggestions based on the analysis results."""
        # Retrieve the code stored by analyze_structure in the previous step
        current_code = self.context.state.get("code_to_analyze", "")
        return f"""
        Based on the code analysis of:

        ```python
        {current_code}
        ```

        And the analysis results:
        {analysis_result}

        Please suggest specific improvements for:
        1. Reducing complexity where identified
        2. Fixing style issues
        3. Improving code organization
        4. Optimizing import usage
        5. Enhancing readability
        6. Enhancing explicitness
        """
````
This method retrieves the stored code from the agent's context state, combines it with the analysis results from the previous step, and returns a prompt asking for concrete, targeted improvements. The @think decorator marks it as a reasoning step, and stream=True streams the suggestions as they are produced.
Now let's create a user-friendly interface. We'll break this down into parts:
```python
def main():
    # 1. Set up logging
    logger = logging.getLogger(__name__)

    # 2. Configure Ollama server
    config = OllamaServerConfig(
        host="127.0.0.1",  # Local machine
        port=11434,        # Default Ollama port
        gpu_layers=35,     # Adjust based on your GPU
        cpu_threads=8,     # Adjust based on your CPU
    )
```
This first part sets up error logging and configures the Ollama server with sensible defaults, letting you tune GPU and CPU usage for your hardware.
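Note that logging.getLogger() on its own won't print anything until logging is configured somewhere. A one-line basicConfig call near the top of main() takes care of that; the level and format below are arbitrary choices, not something the article prescribes:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
```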
Next, we create the AI client and assistant:
```python
    # Use context manager for Ollama server
    with OllamaManager(config) as manager:
        # Initialize ClientAI with Ollama
        client = ClientAI(
            "ollama",
            host=f"http://{config.host}:{config.port}"
        )

        # Create code review assistant with tools
        assistant = CodeReviewAssistant(
            client=client,
            default_model="llama3",
            tools=create_review_tools(),
            tool_confidence=0.8,   # How confident the AI should be before using tools
            max_tools_per_step=2,  # Maximum tools to use per step
        )
```
Key points about this setup: the OllamaManager context manager starts the local Ollama server and shuts it down when we're done, the ClientAI instance points at that server, and the assistant is wired up with our tools, a default model (llama3), a tool_confidence of 0.8 (how sure the AI must be before it invokes a tool), and a limit of two tool calls per step.
Finally, we create the interactive loop:
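What follows is only a minimal sketch of such a loop, not the article's exact listing: the prompt text, the "quit" command, and the assumption that the ClientAI agent is invoked via assistant.run() are all illustrative, so check the ClientAI docs for the precise call.

```python
def run_interactive_loop(assistant) -> None:
    """Sketch: keep reading code from the user and printing the review."""
    print("Paste Python code to review, or type 'quit' to exit.")
    while True:
        user_input = input("\n>>> ")
        if user_input.strip().lower() == "quit":
            break
        # Assumption: the agent exposes run(); because the steps above set
        # stream=True, the result may be an iterator of text chunks.
        result = assistant.run(user_input)
        if isinstance(result, str):
            print(result)
        else:
            for chunk in result:
                print(chunk, end="", flush=True)
            print()
```

Inside main(), this would be called with the assistant created above, e.g. run_interactive_loop(assistant), while the OllamaManager context is still open.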
This interface keeps accepting code from the user, hands each submission to the assistant, and prints the review as it streams back, until the user chooses to quit.
And let's make it a script we can run directly:
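A standard entry-point guard lets the file be executed directly:

```python
if __name__ == "__main__":
    main()
```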
Let's see how the assistant handles real code by running it on a sample.
Here's an example with issues to find:
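For instance, a small function like the one below (an arbitrary sample, not taken from the article) gives the tools plenty to flag: unused imports, vague single-letter names, a mutable default argument, deep nesting, and no docstrings.

```python
import os, sys  # unused imports, several on one line


def proc(d, l=[]):  # vague names and a mutable default argument
    r = 0
    for k in d:
        if d[k] > 0:
            if d[k] % 2 == 0:
                l.append(k)
                r = r + d[k]
            else:
                r = r + 1
    return r, l
```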
The assistant will analyze multiple aspects of this snippet: complexity and structure metrics, style violations, function and class organization, and import usage, and it will then suggest concrete improvements.
There are plenty of ways to enhance the assistant with additional checks. Each new capability follows the same pattern: create a new tool function, return its results in an appropriate JSON format, register it in create_review_tools(), and update the assistant's prompts to make use of it, as sketched below.
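For example, a hypothetical type-hint checker (the function name, its JSON payload, and the chosen scope are assumptions made for illustration) could be written and registered like this:

```python
import ast
import json


def check_type_hints(code: str) -> str:
    """Hypothetical tool: list functions missing a return type annotation."""
    tree = ast.parse(code)
    missing = [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and node.returns is None
    ]
    return json.dumps({"functions_missing_return_hints": missing})


# Then add a matching entry inside create_review_tools():
# ToolConfig(
#     tool=check_type_hints,
#     name="type_hint_checker",
#     description=(
#         "Check Python functions for missing return type hints. "
#         "Expects a 'code' parameter with the Python code as a string."
#     ),
#     scopes=["observe"],
# ),
```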
To see more about ClientAI, go to the docs.
If you have any questions, want to discuss tech-related topics, or share your feedback, feel free to reach out to me on social media: