
Refactoring based on design principles: example of a data collection crawler system

Jul 20, 2024, 07:38 AM


Introduction

Improving code quality is always an important issue in software development. In this article, we take a data collection crawler system as an example and walk through, step by step, how to apply design principles and best practices while refactoring.

Code before improvement

First, we start with a very simple web scraper with all functionality integrated into one class.


project_root/
├── web_scraper.py
├── main.py
└── requirements.txt

web_scraper.py

import requests
import json
import sqlite3

class WebScraper:
    def __init__(self, url):
        self.url = url

    def fetch_data(self):
        response = requests.get(self.url)
        data = response.text
        parsed_data = self.parse_data(data)
        enriched_data = self.enrich_data(parsed_data)
        self.save_data(enriched_data)
        return enriched_data

    def parse_data(self, data):
        return json.loads(data)

    def enrich_data(self, data):
        # Apply business logic here
        # Example: extract only data containing specific keywords
        return {k: v for k, v in data.items() if 'important' in v.lower()}

    def save_data(self, data):
        conn = sqlite3.connect('test.db')
        cursor = conn.cursor()
        cursor.execute('INSERT INTO data (json_data) VALUES (?)', (json.dumps(data),))
        conn.commit()
        conn.close()

main.py

from web_scraper import WebScraper

def main():
    scraper = WebScraper('https://example.com/api/data')
    data = scraper.fetch_data()
    print(data)

if __name__ == "__main__":
    main()

Points to be improved

  1. Violation of the single responsibility principle: one class is responsible for data acquisition, parsing, enrichment, and storage
  2. Unclear business logic: the business logic is buried inside the enrich_data method and mixed with other processing
  3. Lack of reusability: the functions are tightly coupled, making individual reuse difficult
  4. Difficult to test: individual functions are hard to test independently
  5. Configuration rigidity: the database path and other settings are hard-coded

Refactoring steps

1. Separation of responsibilities: separating data acquisition, parsing, and storage

  • Major change: separation of data acquisition, parsing, enrichment, and storage into separate classes
  • Objective: apply the single responsibility principle and introduce environment variables

Directory structure

project_root/
├── data_fetcher.py
├── data_parser.py
├── data_saver.py
├── data_enricher.py
├── web_scraper.py
├── main.py
└── requirements.txt
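
data_fetcher.py, data_parser.py, and data_saver.py are not shown above. A minimal sketch of these classes might look like the following (the raise_for_status() call, the CREATE TABLE statement, and the DB_PATH environment variable are illustrative assumptions added here, not part of the original code):

data_fetcher.py

import requests

class DataFetcher:
    def fetch(self, url):
        # Retrieve the raw response body from the given URL
        response = requests.get(url)
        response.raise_for_status()
        return response.text

data_parser.py

import json

class DataParser:
    def parse(self, raw_data):
        # Convert the raw JSON string into a dictionary
        return json.loads(raw_data)

data_saver.py

import json
import os
import sqlite3

class DataSaver:
    def __init__(self):
        # Assumption: read the database path from an environment variable
        # instead of hard-coding it, falling back to 'test.db'
        self.db_path = os.getenv('DB_PATH', 'test.db')

    def save(self, data):
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()
        cursor.execute('CREATE TABLE IF NOT EXISTS data (json_data TEXT)')
        cursor.execute('INSERT INTO data (json_data) VALUES (?)', (json.dumps(data),))
        conn.commit()
        conn.close()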

data_enricher.py

class DataEnricher:
    def enrich(self, data):
        return {k: v for k, v in data.items() if 'important' in v.lower()}

web_scraper.py

from data_fetcher import DataFetcher
from data_parser import DataParser
from data_enricher import DataEnricher
from data_saver import DataSaver

class WebScraper:
    def __init__(self, url):
        self.url = url
        self.fetcher = DataFetcher()
        self.parser = DataParser()
        self.enricher = DataEnricher()
        self.saver = DataSaver()

    def fetch_data(self):
        raw_data = self.fetcher.fetch(self.url)
        parsed_data = self.parser.parse(raw_data)
        enriched_data = self.enricher.enrich(parsed_data)
        self.saver.save(enriched_data)
        return enriched_data

This change clarifies the responsibilities of each class and improves reusability and testability. However, the business logic is still embedded in the DataEnricher class.
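
For example, the separated enrichment logic can now be tested on its own, without touching the network or the database. A minimal sketch (the test file name and the sample data are assumptions):

test_data_enricher.py

from data_enricher import DataEnricher

def test_enrich_keeps_only_important_entries():
    enricher = DataEnricher()
    data = {"a": "This is IMPORTANT news", "b": "nothing special"}
    # Only values containing the keyword 'important' should survive
    assert enricher.enrich(data) == {"a": "This is IMPORTANT news"}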

2. Introduction of interfaces and dependency injection

  • Major change: introduce interfaces and implement dependency injection
  • Objective: increase flexibility and extensibility, extend the use of environment variables, and abstract the business logic

Directory structure

project_root/
├── interfaces/
│   ├── __init__.py
│   ├── data_fetcher_interface.py
│   ├── data_parser_interface.py
│   ├── data_enricher_interface.py
│   └── data_saver_interface.py
├── implementations/
│   ├── __init__.py
│   ├── http_data_fetcher.py
│   ├── json_data_parser.py
│   ├── keyword_data_enricher.py
│   └── sqlite_data_saver.py
├── web_scraper.py
├── main.py
└── requirements.txt

interfaces/data_fetcher_interface.py

from abc import ABC, abstractmethod

class DataFetcherInterface(ABC):
    @abstractmethod
    def fetch(self, url: str) -> str:
        pass

interfaces/data_parser_interface.py

from abc import ABC, abstractmethod
from typing import Dict, Any

class DataParserInterface(ABC):
    @abstractmethod
    def parse(self, raw_data: str) -> Dict[str, Any]:
        pass

interfaces/data_enricher_interface.py

from abc import ABC, abstractmethod
from typing import Dict, Any

class DataEnricherInterface(ABC):
    @abstractmethod
    def enrich(self, data: Dict[str, Any]) -> Dict[str, Any]:
        pass

interfaces/data_saver_interface.py

from abc import ABC, abstractmethod
from typing import Dict, Any

class DataSaverInterface(ABC):
    @abstractmethod
    def save(self, data: Dict[str, Any]) -> None:
        pass
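
The concrete HTTP fetcher and JSON parser are not shown in the original; they might look like the following sketches, which simply move the step 1 logic behind the new interfaces (the raise_for_status() call is an added assumption):

implementations/http_data_fetcher.py

import requests
from interfaces.data_fetcher_interface import DataFetcherInterface

class HttpDataFetcher(DataFetcherInterface):
    def fetch(self, url: str) -> str:
        # Fetch the raw response body over HTTP
        response = requests.get(url)
        response.raise_for_status()
        return response.text

implementations/json_data_parser.py

import json
from typing import Dict, Any
from interfaces.data_parser_interface import DataParserInterface

class JsonDataParser(DataParserInterface):
    def parse(self, raw_data: str) -> Dict[str, Any]:
        # Parse the raw JSON string into a dictionary
        return json.loads(raw_data)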

implementations/keyword_data_enricher.py

import os
from interfaces.data_enricher_interface import DataEnricherInterface

class KeywordDataEnricher(DataEnricherInterface):
    def __init__(self):
        self.keyword = os.getenv('IMPORTANT_KEYWORD', 'important')

    def enrich(self, data):
        return {k: v for k, v in data.items() if self.keyword in str(v).lower()}
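
The SQLite saver implementation is likewise not shown; a possible sketch (the DB_PATH environment variable and the CREATE TABLE statement are assumptions):

implementations/sqlite_data_saver.py

import json
import os
import sqlite3
from typing import Dict, Any
from interfaces.data_saver_interface import DataSaverInterface

class SqliteDataSaver(DataSaverInterface):
    def __init__(self):
        # Assumption: database path comes from an environment variable
        self.db_path = os.getenv('DB_PATH', 'test.db')

    def save(self, data: Dict[str, Any]) -> None:
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()
        cursor.execute('CREATE TABLE IF NOT EXISTS data (json_data TEXT)')
        cursor.execute('INSERT INTO data (json_data) VALUES (?)', (json.dumps(data),))
        conn.commit()
        conn.close()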

web_scraper.py

from interfaces.data_fetcher_interface import DataFetcherInterface
from interfaces.data_parser_interface import DataParserInterface
from interfaces.data_enricher_interface import DataEnricherInterface
from interfaces.data_saver_interface import DataSaverInterface

class WebScraper:
    def __init__(self, fetcher: DataFetcherInterface, parser: DataParserInterface, 
                 enricher: DataEnricherInterface, saver: DataSaverInterface):
        self.fetcher = fetcher
        self.parser = parser
        self.enricher = enricher
        self.saver = saver

    def fetch_data(self, url):
        raw_data = self.fetcher.fetch(url)
        parsed_data = self.parser.parse(raw_data)
        enriched_data = self.enricher.enrich(parsed_data)
        self.saver.save(enriched_data)
        return enriched_data
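
The updated main.py is not shown in the original. Wiring the dependencies together might look like this sketch (the concrete class names follow the file names in the implementations package and are assumptions, except for KeywordDataEnricher):

main.py

from implementations.http_data_fetcher import HttpDataFetcher
from implementations.json_data_parser import JsonDataParser
from implementations.keyword_data_enricher import KeywordDataEnricher
from implementations.sqlite_data_saver import SqliteDataSaver
from web_scraper import WebScraper

def main():
    # Inject concrete implementations into WebScraper
    scraper = WebScraper(
        fetcher=HttpDataFetcher(),
        parser=JsonDataParser(),
        enricher=KeywordDataEnricher(),
        saver=SqliteDataSaver(),
    )
    data = scraper.fetch_data('https://example.com/api/data')
    print(data)

if __name__ == "__main__":
    main()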

The main changes at this stage are:

  1. Introduction of interfaces, making it easy to switch to different implementations
  2. Dependency injection, making the WebScraper class more flexible
  3. The fetch_data method now takes the URL as an argument, making URL specification more flexible
  4. The business logic has been abstracted as DataEnricherInterface and implemented as KeywordDataEnricher
  5. The business logic is more flexible because the keyword can now be set via an environment variable

These changes have greatly improved the flexibility and extensibility of the system. However, the business logic remains embedded in the DataEnricherInterface and its implementation. The next step is to separate this business logic further and define it clearly as a domain layer.
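
Dependency injection also makes the whole pipeline easy to test with stub implementations, for example (a sketch; the test file name, the stub classes, and the sample data are illustrative):

test_web_scraper.py

import json
import os

from web_scraper import WebScraper
from implementations.keyword_data_enricher import KeywordDataEnricher

class StubFetcher:
    def fetch(self, url: str) -> str:
        # Return canned JSON instead of hitting the network
        return '{"title": "important update", "note": "minor detail"}'

class StubParser:
    def parse(self, raw_data: str):
        return json.loads(raw_data)

class StubSaver:
    def __init__(self):
        self.saved = None

    def save(self, data) -> None:
        # Record what would have been written to the database
        self.saved = data

def test_fetch_data_filters_and_saves():
    os.environ['IMPORTANT_KEYWORD'] = 'important'
    saver = StubSaver()
    scraper = WebScraper(StubFetcher(), StubParser(), KeywordDataEnricher(), saver)
    result = scraper.fetch_data('https://example.com/api/data')
    assert result == {"title": "important update"}
    assert saver.saved == result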

3. Introduction of a domain layer and separation of business logic

In the previous step, the introduction of interfaces increased the flexibility of the system. However, the business logic (in this case, determining the importance of data and filtering it) is still treated as part of the data layer. Based on the concepts of domain-driven design, treating this business logic as a central concept of the system and implementing it as an independent domain layer provides the following benefits:

  1. Centralized management of business logic
  2. More expressive code through the domain model
  3. Greater flexibility when changing business rules
  4. Easier testing

Updated directory structure:

project_root/
├── domain/
│   ├── __init__.py
│   ├── scraped_data.py
│   └── data_enrichment_service.py
├── data/
│   ├── __init__.py
│   ├── interfaces/
│   │   ├── __init__.py
│   │   ├── data_fetcher_interface.py
│   │   ├── data_parser_interface.py
│   │   └── data_saver_interface.py
│   ├── implementations/
│   │   ├── __init__.py
│   │   ├── http_data_fetcher.py
│   │   ├── json_data_parser.py
│   │   └── sqlite_data_saver.py
├── application/
│   ├── __init__.py
│   └── web_scraper.py
├── main.py
└── requirements.txt

At this stage, the roles of DataEnricherInterface and KeywordDataEnricher are moved to the ScrapedData model and the DataEnrichmentService in the domain layer. The details of this change are shown below.

Before change (Section 2)

class DataEnricherInterface(ABC):
    @abstractmethod
    def enrich(self, data: Dict[str, Any]) -> Dict[str, Any]:
        pass
class KeywordDataEnricher(DataEnricherInterface):
    def __init__(self):
        self.keyword = os.getenv('IMPORTANT_KEYWORD', 'important')

    def enrich(self, data):
        return {k: v for k, v in data.items() if self.keyword in str(v).lower()}

After modification (Section 3)

# domain/scraped_data.py
import os
from dataclasses import dataclass
from typing import Dict, Any

@dataclass
class ScrapedData:
    content: Dict[str, Any]
    source_url: str

    def is_important(self) -> bool:
        important_keyword = os.getenv('IMPORTANT_KEYWORD', 'important')
        return any(important_keyword in str(v).lower() for v in self.content.values())
# domain/data_enrichment_service.py
import os

from domain.scraped_data import ScrapedData

class DataEnrichmentService:
    def __init__(self):
        self.important_keyword = os.getenv('IMPORTANT_KEYWORD', 'important')

    def enrich(self, data: ScrapedData) -> ScrapedData:
        if data.is_important():
            enriched_content = {k: v for k, v in data.content.items() if self.important_keyword in str(v).lower()}
            return ScrapedData(content=enriched_content, source_url=data.source_url)
        return data

This change results in the following improvements:

  1. The business logic has been moved to the domain layer, eliminating the need for a DataEnricherInterface.
  2. The KeywordDataEnricher functionality has been merged into the DataEnrichmentService, centralizing the business logic in one place.
  3. The is_important method has been added to the ScrapedData model. The domain model itself is now responsible for determining the importance of data, which makes the domain concept clearer.
  4. DataEnrichmentService now handles ScrapedData objects directly, improving type safety.

The WebScraper class will also be updated to reflect this change.

from data.interfaces.data_fetcher_interface import DataFetcherInterface
from data.interfaces.data_parser_interface import DataParserInterface
from data.interfaces.data_saver_interface import DataSaverInterface
from domain.scraped_data import ScrapedData
from domain.data_enrichment_service import DataEnrichmentService

class WebScraper:
    def __init__(self, fetcher: DataFetcherInterface, parser: DataParserInterface, 
                 saver: DataSaverInterface, enrichment_service: DataEnrichmentService):
        self.fetcher = fetcher
        self.parser = parser
        self.saver = saver
        self.enrichment_service = enrichment_service

    def fetch_data(self, url: str) -> ScrapedData:
        raw_data = self.fetcher.fetch(url)
        parsed_data = self.parser.parse(raw_data)
        scraped_data = ScrapedData(content=parsed_data, source_url=url)
        enriched_data = self.enrichment_service.enrich(scraped_data)
        self.saver.save(enriched_data)
        return enriched_data


This change completely shifts the business logic from the data layer to the domain layer, giving the system a clearer structure. The removal of the DataEnricherInterface and the introduction of the DataEnrichmentService are not just interface replacements, but fundamental changes in the way business logic is handled.
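
For completeness, a composition root for this final structure might look like the sketch below. Note that the saver now receives a ScrapedData object rather than a plain dictionary, so a real SqliteDataSaver would need to serialize data.content; that adjustment, and the concrete class names, are assumptions since the original does not show them.

main.py

from data.implementations.http_data_fetcher import HttpDataFetcher
from data.implementations.json_data_parser import JsonDataParser
from data.implementations.sqlite_data_saver import SqliteDataSaver
from domain.data_enrichment_service import DataEnrichmentService
from application.web_scraper import WebScraper

def main():
    scraper = WebScraper(
        fetcher=HttpDataFetcher(),
        parser=JsonDataParser(),
        saver=SqliteDataSaver(),
        enrichment_service=DataEnrichmentService(),
    )
    data = scraper.fetch_data('https://example.com/api/data')
    # data is a ScrapedData domain object
    print(data.content)

if __name__ == "__main__":
    main()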

Summary

This article has demonstrated how to improve code quality and apply design principles specifically through a step-by-step refactoring process for the data collection crawler system. The main areas of improvement are as follows.

  1. Separation of responsibilities: applying the single responsibility principle, data acquisition, parsing, enrichment, and storage were separated into their own classes.
  2. Introduction of interfaces and dependency injection: greatly increased the flexibility and extensibility of the system, making it easier to switch between implementations.
  3. Introduction of a domain model and services: clearly separated the business logic and defined the core concepts of the system.
  4. Adoption of a layered architecture: clearly separated the domain, data, and application layers and defined the responsibilities of each layer.
  5. Maintenance of interfaces: kept abstraction at the data layer to ensure flexibility of implementation.

These improvements have greatly enhanced the system's modularity, reusability, testability, maintainability, and scalability. In particular, by applying some concepts of domain-driven design, the business logic has become clearer and the structure more adaptable to future changes in requirements. At the same time, by keeping the interfaces in place, we ensured the flexibility to easily change and extend the data layer implementation.

Note that this refactoring process is not a one-time event but part of continuous improvement. Depending on the size and complexity of the project, it is worth adopting design principles and DDD concepts at an appropriate level and making incremental improvements.

Finally, the approach presented in this article can be applied to a wide variety of software projects, not just data collection crawlers. We encourage you to use it as a reference as you work to improve code quality and design.

