


My HNG Journey. Stage Six: Leveraging Python to Expose DORA Metrics
Introduction
For stage 6, we were tasked with exposing DORA (DevOps Research and Assessment) metrics using Python. This experience taught me valuable lessons about DevOps practices and the intricacies of working with APIs. In this article, I'll walk you through the process, explain what each metric means, and highlight some common pitfalls to watch out for.
What are DORA Metrics?
Before we dive into the code, let's briefly discuss what DORA metrics are:
- Deployment Frequency: How often an organization successfully releases to production.
- Lead Time for Changes: The time it takes a commit to get into production.
- Change Failure Rate: The percentage of deployments causing a failure in production.
- Time to Restore Service: How long it takes to recover from a failure in production.
These metrics help teams measure their software delivery performance and identify areas for improvement.
Getting Started
To begin exposing these metrics, you'll need:
- Python 3.7 or higher
- A GitHub account and personal access token
- Basic knowledge of GitHub's API
First, install the necessary libraries:
```bash
pip install requests prometheus_client
```
The Code Structure
I structured my Python script as a class called DORAMetrics. Here's a simplified version of its initialization:
```python
from prometheus_client import Gauge

class DORAMetrics:
    def __init__(self, github_token, repo_owner, repo_name):
        self.github_token = github_token
        self.repo_owner = repo_owner
        self.repo_name = repo_name
        self.base_url = f"https://api.github.com/repos/{repo_owner}/{repo_name}"
        self.headers = {
            'Authorization': f'token {github_token}',
            'Accept': 'application/vnd.github.v3+json'
        }
        # Define Prometheus metrics
        self.deployment_frequency = Gauge('dora_deployment_frequency', 'Deployment Frequency (per day)')
        self.lead_time_for_changes = Gauge('dora_lead_time_for_changes', 'Lead Time for Changes (hours)')
        self.change_failure_rate = Gauge('dora_change_failure_rate', 'Change Failure Rate')
        self.time_to_restore_service = Gauge('dora_time_to_restore_service', 'Time to Restore Service (hours)')
```
This setup allows us to interact with the GitHub API and create Prometheus metrics for each DORA metric.
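To use the class, you'd instantiate it with your repository details. Here's a minimal sketch (the environment variable name and repository values are placeholders I've chosen for illustration):

```python
import os

# Hypothetical values for illustration; substitute your own details
token = os.environ["GITHUB_TOKEN"]
dora = DORAMetrics(token, repo_owner="my-org", repo_name="my-repo")
```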
Fetching Data from GitHub
One of the most challenging aspects was retrieving the necessary data from GitHub. Here's how I fetched deployments:
```python
import requests
from datetime import datetime, timedelta

def get_deployments(self, days=30):
    end_date = datetime.now()
    start_date = end_date - timedelta(days=days)
    url = f"{self.base_url}/deployments"
    params = {'since': start_date.isoformat()}
    deployments = []
    while url:
        response = requests.get(url, headers=self.headers, params=params)
        response.raise_for_status()
        deployments.extend(response.json())
        # Follow GitHub's pagination via the Link header
        url = response.links.get('next', {}).get('url')
        params = {}
    return deployments
```
This method handles pagination, ensuring we get all deployments within the specified time frame.
Calculating DORA Metrics
Let's look at how I calculated the Deployment Frequency:
```python
def get_deployment_frequency(self, days=30):
    deployments = self.get_deployments(days)
    return len(deployments) / days
```
This simple calculation gives us the average number of deployments per day over the specified period; for example, 15 deployments over 30 days yields a frequency of 0.5 per day.
Lead Time for Changes
Calculating the Lead Time for Changes was more complex. It required correlating commits with their corresponding deployments:
```python
def get_lead_time_for_changes(self, days=30):
    commits = self.get_commits(days)
    deployments = self.get_deployments(days)
    lead_times = []
    for commit in commits:
        commit_date = datetime.strptime(commit['commit']['author']['date'], '%Y-%m-%dT%H:%M:%SZ')
        for deployment in deployments:
            if deployment['sha'] == commit['sha']:
                deployment_date = datetime.strptime(deployment['created_at'], '%Y-%m-%dT%H:%M:%SZ')
                lead_time = (deployment_date - commit_date).total_seconds() / 3600  # in hours
                lead_times.append(lead_time)
                break
    return sum(lead_times) / len(lead_times) if lead_times else 0
```
This method calculates the time difference between each commit and its corresponding deployment. It's important to note that not all commits may result in a deployment, so we only consider those that do. The final result is the average lead time in hours.
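The get_commits helper referenced above isn't shown in the article's snippets. A minimal version, mirroring the pagination approach of get_deployments, might look like this (GitHub's commits endpoint does accept a 'since' parameter):

```python
def get_commits(self, days=30):
    # A sketch: page through commits from the last `days` days
    start_date = datetime.now() - timedelta(days=days)
    url = f"{self.base_url}/commits"
    params = {'since': start_date.isoformat()}
    commits = []
    while url:
        response = requests.get(url, headers=self.headers, params=params)
        response.raise_for_status()
        commits.extend(response.json())
        url = response.links.get('next', {}).get('url')
        params = {}
    return commits
```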
One challenge I faced here was matching commits to deployments. In some cases, a deployment might include multiple commits, or a commit might not be deployed immediately. I had to make assumptions based on the available data, which might need adjustment for different development workflows.
Change Failure Rate
Determining the Change Failure Rate required analyzing the status of each deployment:
```python
def get_change_failure_rate(self, days=30):
    deployments = self.get_deployments(days)
    if not deployments:
        return 0
    total_deployments = len(deployments)
    failed_deployments = 0
    for deployment in deployments:
        status_url = deployment['statuses_url']
        status_response = requests.get(status_url, headers=self.headers)
        status_response.raise_for_status()
        statuses = status_response.json()
        # Statuses are returned newest first; treat anything other
        # than 'success' as a failure
        if statuses and statuses[0]['state'] != 'success':
            failed_deployments += 1
    return failed_deployments / total_deployments if total_deployments > 0 else 0
```
This method counts the number of failed deployments and divides it by the total number of deployments. The challenge here was defining what constitutes a "failed" deployment. I considered a deployment failed if its most recent status was not "success".
It's worth noting that this approach might not capture all types of failures, especially those that occur after a successful deployment. In a production environment, you might want to integrate with your monitoring or incident management system for more accurate failure detection.
Exposing Metrics with Prometheus
To make these metrics available for Prometheus to scrape, I used the prometheus_client library:
```python
import time
from prometheus_client import start_http_server, Gauge

# In the main execution block
start_http_server(8000)

# Update metrics every 5 minutes
while True:
    dora.update_metrics()
    time.sleep(300)
```
This starts a server on port 8000 and updates the metrics every 5 minutes.
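The update_metrics method called in that loop isn't shown in the snippets above. A minimal version (an assumption on my part, based on the Gauges defined in __init__) would simply set each Gauge from its calculation:

```python
def update_metrics(self):
    # Set each Prometheus Gauge from the corresponding calculation
    # over the default 30-day window
    self.deployment_frequency.set(self.get_deployment_frequency())
    self.lead_time_for_changes.set(self.get_lead_time_for_changes())
    self.change_failure_rate.set(self.get_change_failure_rate())
    # time_to_restore_service is left unset here; as noted below,
    # it needs incident data that GitHub's API doesn't provide
```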
Common Pitfalls
During this project, I encountered several challenges:
- API Rate Limiting: GitHub limits the number of API requests you can make. I had to implement pagination and be mindful of how often I updated metrics (see the sketch after this list).
- Token Permissions: Ensure your GitHub token has the necessary permissions to read deployments and commits.
- Data Interpretation: Determining what constitutes a "deployment" or "failure" can be subjective. I had to make assumptions based on the available data.
- Time to Restore Service: This metric was particularly challenging as it typically requires data from an incident management system, which isn't available through GitHub's API alone.
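For the rate-limiting pitfall above, one common mitigation is to watch GitHub's documented X-RateLimit-* response headers and pause until the window resets when the remaining quota runs low. This is a sketch rather than what my script did; the threshold of 10 is an arbitrary safety margin:

```python
import time
import requests

def get_with_rate_limit(url, headers, params=None):
    # Sleep until the rate-limit window resets when few requests remain
    response = requests.get(url, headers=headers, params=params)
    remaining = int(response.headers.get("X-RateLimit-Remaining", 1))
    if remaining < 10:
        reset_at = int(response.headers.get("X-RateLimit-Reset", time.time()))
        time.sleep(max(0, reset_at - time.time()) + 1)
    response.raise_for_status()
    return response
```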
Conclusion
Exposing DORA metrics using Python was an enlightening experience. It deepened my understanding of DevOps practices and improved my skills in working with APIs and data processing.
Remember, these metrics are meant to guide improvement, not as a stick to beat teams with. Use them wisely to foster a culture of continuous improvement in your development process.
Thank you for reading ❤