
Detailed Tutorial: Crawling GitHub Repository Folders Without API


This ultra-detailed tutorial, authored by Shpetim Haxhiu, walks you through crawling GitHub repository folders programmatically without relying on the GitHub API. It covers everything from understanding the page structure to building a robust, recursive implementation with enhancements.


1. Setup and Installation

Before you start, ensure you have:

  1. Python: Version 3.7 or higher installed.
  2. Libraries: Install requests and BeautifulSoup:
   pip install requests beautifulsoup4
  3. Editor: Any Python-supported IDE, such as VS Code or PyCharm.

2. Analyzing GitHub HTML Structure

To scrape GitHub folders, you need to understand the HTML structure of a repository page. On a GitHub repository page:

  • Folders are linked with paths like /tree/<branch>/<folder-name>.
  • Files are linked with paths like /blob/<branch>/<file-name>.

Each item (folder or file) is inside a <div> with the attribute role="rowheader" and contains an <a> tag. For example:

<div role="rowheader">
  <a href="/owner/repo/tree/main/folder-name">folder-name</a>
</div>
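To see the selector in action on the snippet above, here is a minimal, self-contained check:

from bs4 import BeautifulSoup

html = '<div role="rowheader"><a href="/owner/repo/tree/main/folder-name">folder-name</a></div>'
soup = BeautifulSoup(html, 'html.parser')
for link in soup.select('div[role="rowheader"] a'):
    print(link['href'])  # prints: /owner/repo/tree/main/folder-name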

3. Implementing the Scraper

3.1. Recursive Crawling Function

The script will recursively scrape folders and print their structure. To limit recursion and avoid unnecessary load on GitHub, we’ll track the current depth and stop once it exceeds max_depth.

import requests
from bs4 import BeautifulSoup

def crawl_github_folder(url, depth=0, max_depth=3):
    """
    Recursively crawls a GitHub repository folder structure.

    Parameters:
    - url (str): URL of the GitHub folder to scrape.
    - depth (int): Current recursion depth.
    - max_depth (int): Maximum depth to recurse.
    """
    if depth > max_depth:
        return

    headers = {"User-Agent": "Mozilla/5.0"}
    response = requests.get(url, headers=headers)

    if response.status_code != 200:
        print(f"Failed to access {url} (Status code: {response.status_code})")
        return

    soup = BeautifulSoup(response.text, 'html.parser')

    # Extract folder and file links
    items = soup.select('div[role="rowheader"] a')

    for item in items:
        item_name = item.text.strip()
        item_url = f"https://github.com{item['href']}"

        if '/tree/' in item_url:
            print(f"{'  ' * depth}Folder: {item_name}")
            crawl_github_folder(item_url, depth + 1, max_depth)
        elif '/blob/' in item_url:
            print(f"{'  ' * depth}File: {item_name}")

# Example usage
if __name__ == "__main__":
    repo_url = "https://github.com/<owner>/<repo>/tree/<branch>/<folder>"
    crawl_github_folder(repo_url)

4. Features Explained

  1. Request Headers: A User-Agent string mimics a browser so GitHub is less likely to block the request.
  2. Recursive Crawling:
    • Detects folders (/tree/) and recursively enters them.
    • Lists files (/blob/) without entering further.
  3. Indentation: Reflects the folder hierarchy in the output (see the sample below).
  4. Depth Limitation: Prevents excessive recursion by setting a maximum depth (max_depth).
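
For a small repository, the printed tree might look like this (hypothetical contents):

Folder: src
  File: main.py
  Folder: utils
    File: helpers.py
File: README.md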

5. Enhancements

These enhancements are designed to improve the functionality and reliability of the crawler. They address common challenges like exporting results, handling errors, and avoiding rate limits, ensuring the tool is efficient and user-friendly.

5.1. Exporting Results

Save the output to a structured JSON file for easier usage.

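A minimal sketch, assuming the crawler collects results into a nested dictionary (as the crawl_to_json function in Section 7 does) instead of printing them; save_structure is an illustrative helper name:

import json

def save_structure(structure, path="output.json"):
    """Write the crawled repository structure to a JSON file."""
    with open(path, "w") as file:
        json.dump(structure, file, indent=2)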

5.2. Error Handling

Add robust error handling for network errors and unexpected HTML changes:

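A minimal sketch, wrapping the request in a try/except; fetch_page is an illustrative helper the crawler could call in place of requests.get:

import requests

def fetch_page(url, timeout=10):
    """Fetch a page, returning None on network errors or bad status codes."""
    headers = {"User-Agent": "Mozilla/5.0"}
    try:
        response = requests.get(url, headers=headers, timeout=timeout)
        response.raise_for_status()  # raise for 4xx/5xx responses
    except requests.exceptions.RequestException as exc:
        print(f"Failed to access {url}: {exc}")
        return None
    return response.text

For unexpected HTML changes, check the result of soup.select('div[role="rowheader"] a'): if it comes back empty, GitHub's markup has likely changed and the selector needs updating.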

5.3. Rate Limiting

To avoid being rate-limited by GitHub, introduce delays:

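A minimal sketch, adding a fixed pause before every request; REQUEST_DELAY is an illustrative value, and polite_get is a hypothetical wrapper the crawler could use instead of calling requests.get directly:

import time

import requests

REQUEST_DELAY = 1.0  # seconds to wait between requests

def polite_get(url, headers=None):
    """requests.get with a fixed pause beforehand to spread out traffic."""
    time.sleep(REQUEST_DELAY)
    return requests.get(url, headers=headers or {"User-Agent": "Mozilla/5.0"}, timeout=10)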

6. Ethical Considerations

Authored by Shpetim Haxhiu, an expert in software automation and ethical programming, this section outlines best practices for using the GitHub crawler responsibly.

  • Compliance: Adhere to GitHub’s Terms of Service.
  • Minimize Load: Respect GitHub’s servers by limiting requests and adding delays.
  • Permission: Obtain permission for extensive crawling of private repositories.

7. Complete Code

Here’s the consolidated script with all features included:

import json
import time

import requests
from bs4 import BeautifulSoup

REQUEST_DELAY = 1.0  # seconds between requests; adjust as needed (Section 5.3)

def crawl_to_json(url, depth=0, max_depth=3):
    """Recursively crawls a GitHub folder and returns its structure as a dict."""
    result = {}

    if depth > max_depth:
        return result

    headers = {"User-Agent": "Mozilla/5.0"}
    time.sleep(REQUEST_DELAY)  # pause before each request to minimize load

    # Error handling (Section 5.2): catch network errors and bad status codes
    try:
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()
    except requests.exceptions.RequestException as exc:
        print(f"Failed to access {url}: {exc}")
        return result

    soup = BeautifulSoup(response.text, 'html.parser')
    items = soup.select('div[role="rowheader"] a')

    for item in items:
        item_name = item.text.strip()
        item_url = f"https://github.com{item['href']}"

        if '/tree/' in item_url:
            result[item_name] = crawl_to_json(item_url, depth + 1, max_depth)
        elif '/blob/' in item_url:
            result[item_name] = "file"

    return result

if __name__ == "__main__":
    repo_url = "https://github.com/<owner>/<repo>/tree/<branch>/<folder>"
    structure = crawl_to_json(repo_url)

    # Exporting results (Section 5.1)
    with open("output.json", "w") as file:
        json.dump(structure, file, indent=2)

    print("Repository structure saved to output.json")

By following this detailed guide, you can build a robust GitHub folder crawler. This tool can be adapted for various needs while ensuring ethical compliance.


Feel free to leave questions in the comments section! Also, don’t forget to connect with me:

  • Email: shpetim.h@gmail.com
  • LinkedIn: linkedin.com/in/shpetimhaxhiu
  • GitHub: github.com/shpetimhaxhiu
