Before we get into the technical details, it is worth understanding why proper logging matters.
If you are new to logging in Python, here is a simple example using logging.basicConfig:
```python
import logging

# Basic configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Create a logger
logger = logging.getLogger(__name__)

# Use the logger
logger.info("This is an information message")
logger.warning("This is a warning message")
```
This example demonstrates the basics of Python's logging module and shows how to use a logger in your own application.
Let's start with a simple logging configuration:
```python
import logging

# Basic configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Your first logger
logger = logging.getLogger(__name__)

# Using the logger
logger.info("Application started")
logger.warning("Watch out!")
logger.error("Something went wrong")
```
Python's logging module defines five standard levels:
| Level | Numeric Value | When to Use |
|---|---|---|
| DEBUG | 10 | Detailed information for diagnosing problems |
| INFO | 20 | General operational events |
| WARNING | 30 | Something unexpected happened |
| ERROR | 40 | A more serious problem |
| CRITICAL | 50 | The program may not be able to continue |
Why should you choose logging over print statements? A few reasons stand out:

- Severity levels (DEBUG through CRITICAL) let you filter messages by importance.
- Log records automatically carry timestamps, module names, and other context.
- Output can be routed to the console, files, or external systems via handlers.
- Verbosity can be changed through configuration, without touching your code.
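As a minimal comparison, the sketch below (standard library only) shows what print cannot give you — severity-based filtering and configurable formatting:

```python
import logging

# Only WARNING and above will be emitted with this configuration
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)

print("print() always writes to stdout, with no level, timestamp, or routing")

logger.debug("Suppressed: below the configured level")
logger.warning("Emitted: carries a severity and can be reformatted or redirected")
```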
You can also send log output to a file instead of the console by passing a filename to basicConfig:

```python
import logging

logging.basicConfig(
    filename='app.log',
    filemode='w',
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    level=logging.DEBUG,
    datefmt='%Y-%m-%d %H:%M:%S'
)
```

For more complex applications, a dictionary-based configuration with logging.config.dictConfig gives you fine-grained control over formatters, handlers, and loggers:

```python
import logging.config

config = {
    'version': 1,
    'formatters': {
        'detailed': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'detailed'
        },
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'app.log',
            'level': 'DEBUG',
            'formatter': 'detailed'
        }
    },
    'loggers': {
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
            'propagate': True
        }
    }
}

logging.config.dictConfig(config)
```

Structured logging provides a consistent, machine-readable format that is essential for log analysis and monitoring. For a comprehensive overview of structured logging patterns and best practices, see the structured logging guide. Let's implement structured logging in Python:

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    def format(self, record):
        # Create the base log record
        log_obj = {
            "timestamp": self.formatTime(record, self.datefmt),
            "name": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno
        }

        # Add exception info if present
        if record.exc_info:
            log_obj["exception"] = self.formatException(record.exc_info)

        # Add custom fields passed via extra
        if hasattr(record, "extra_fields"):
            log_obj.update(record.extra_fields)

        return json.dumps(log_obj)

# Usage example
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)

# Log with extra fields
logger.info(
    "User logged in",
    extra={"extra_fields": {"user_id": "123", "ip": "192.168.1.1"}}
)
```

Proper error logging is critical for troubleshooting production issues. Here is a comprehensive approach:

```python
import logging
import sys
import traceback
from contextlib import contextmanager

class ErrorLogger:
    def __init__(self, logger):
        self.logger = logger

    @contextmanager
    def error_context(self, operation_name, **context):
        """Context manager for error logging with additional context"""
        try:
            yield
        except Exception:
            # Capture the current exception and stack trace
            exc_type, exc_value, exc_traceback = sys.exc_info()

            # Format error details
            error_details = {
                "operation": operation_name,
                "error_type": exc_type.__name__,
                "error_message": str(exc_value),
                "context": context,
                "stack_trace": traceback.format_exception(exc_type, exc_value, exc_traceback)
            }

            # Log the error with full context
            self.logger.error(
                f"Error in {operation_name}: {str(exc_value)}",
                extra={"error_details": error_details}
            )

            # Re-raise the exception
            raise

# Usage example
logger = logging.getLogger(__name__)
error_logger = ErrorLogger(logger)

with error_logger.error_context("user_authentication", user_id="123", attempt=2):
    # Your code that might raise an exception
    authenticate_user(user_id)
```

When logging from multi-threaded applications, you need to ensure thread safety:

```python
import logging
import threading
from queue import Queue
from logging.handlers import QueueHandler, QueueListener

def setup_thread_safe_logging():
    """Set up thread-safe logging with a queue"""
    # Create the queue
    log_queue = Queue()

    # Create handlers
    console_handler = logging.StreamHandler()
    file_handler = logging.FileHandler('app.log')

    # Create queue handler and listener
    queue_handler = QueueHandler(log_queue)
    listener = QueueListener(
        log_queue,
        console_handler,
        file_handler,
        respect_handler_level=True
    )

    # Configure the root logger
    root_logger = logging.getLogger()
    root_logger.setLevel(logging.INFO)
    root_logger.addHandler(queue_handler)

    # Start the listener in a separate thread
    listener.start()
    return listener

# Usage
listener = setup_thread_safe_logging()

def worker_function():
    logger = logging.getLogger(__name__)
    logger.info(f"Worker thread {threading.current_thread().name} starting")
    # Do work...
    logger.info(f"Worker thread {threading.current_thread().name} finished")

# Create and start threads
threads = [threading.Thread(target=worker_function) for _ in range(3)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

# Stop the listener once all work is done
listener.stop()
```

Different application environments call for specific logging approaches. Whether you are working with web applications, microservices, or background tasks, each environment has unique logging requirements and best practices. Let's look at how to implement effective logging in different deployment scenarios.

Here is a comprehensive Django logging setup:

```python
# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
            'style': '{',
        },
        'simple': {
            'format': '{levelname} {message}',
            'style': '{',
        },
    },
    'filters': {
        'require_debug_true': {
            '()': 'django.utils.log.RequireDebugTrue',
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'filters': ['require_debug_true'],
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'level': 'ERROR',
            'class': 'logging.FileHandler',
            'filename': 'django-errors.log',
            'formatter': 'verbose'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
        }
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'propagate': True,
        },
        'django.request': {
            'handlers': ['file', 'mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'INFO',
        }
    }
}
```

Flask ships with its own logging system, which can be customized:

```python
import logging
from logging.handlers import RotatingFileHandler
from flask import Flask, jsonify, request

app = Flask(__name__)

# Optional formatter that adds request context to each record;
# use it on a handler whose format string includes %(url)s or %(remote_addr)s
class RequestFormatter(logging.Formatter):
    def format(self, record):
        record.url = request.url
        record.remote_addr = request.remote_addr
        return super().format(record)

def setup_logger():
    # Create formatter
    formatter = logging.Formatter(
        '[%(asctime)s] %(levelname)s in %(module)s: %(message)s'
    )

    # Rotating file handler
    file_handler = RotatingFileHandler(
        'flask_app.log',
        maxBytes=10485760,  # 10 MB
        backupCount=10
    )
    file_handler.setLevel(logging.INFO)
    file_handler.setFormatter(formatter)

    # Configure the app logger
    app.logger.addHandler(file_handler)
    app.logger.setLevel(logging.INFO)
    return app.logger

setup_logger()

# Usage in routes
@app.route('/api/endpoint')
def api_endpoint():
    app.logger.info(f'Request received from {request.remote_addr}')
    # Your code here
    return jsonify({'status': 'success'})
```

FastAPI can build on Python's standard logging with a few middleware additions:

```python
import logging
import time
from typing import Callable

from fastapi import FastAPI, Request

app = FastAPI()

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Middleware for request logging
@app.middleware("http")
async def log_requests(request: Request, call_next: Callable):
    start_time = time.time()
    response = await call_next(request)
    duration = time.time() - start_time

    log_dict = {
        "url": str(request.url),
        "method": request.method,
        "client_ip": request.client.host,
        "duration": f"{duration:.2f}s",
        "status_code": response.status_code
    }
    logger.info(f"Request processed: {log_dict}")
    return response

# Example endpoint with logging
@app.get("/items/{item_id}")
async def read_item(item_id: int):
    logger.info(f"Retrieving item {item_id}")
    # Your code here
    return {"item_id": item_id}
```

For microservices, distributed tracing and correlation IDs are essential:

```python
import contextvars
import logging
from uuid import uuid4

# Context variable holding the current trace ID
trace_id_var = contextvars.ContextVar('trace_id', default=None)

class TraceIDFilter(logging.Filter):
    def filter(self, record):
        trace_id = trace_id_var.get()
        record.trace_id = trace_id if trace_id else 'no_trace'
        return True

def setup_microservice_logging(service_name):
    logger = logging.getLogger(service_name)

    # Formatter that includes the trace ID
    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - [%(trace_id)s] - %(levelname)s - %(message)s'
    )

    # Handler with the trace ID filter
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    handler.addFilter(TraceIDFilter())

    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

# Usage in a microservice
logger = setup_microservice_logging('order_service')

def process_order(order_data):
    # Generate or propagate the trace ID for this request
    trace_id_var.set(str(uuid4()))

    logger.info("Starting order processing", extra={
        'order_id': order_data['id'],
        'customer_id': order_data['customer_id']
    })
    # Process the order...
    logger.info("Order processed successfully")
```
For background tasks, we need to ensure proper log handling and rotation:

```python
import logging
import os
import threading
from datetime import datetime
from logging.handlers import RotatingFileHandler

class BackgroundTaskLogger:
    def __init__(self, task_name):
        self.logger = logging.getLogger(f'background_task.{task_name}')
        self.setup_logging()

    def setup_logging(self):
        # Create the logs directory if it doesn't exist
        os.makedirs('logs', exist_ok=True)

        # Set up a rotating file handler
        handler = RotatingFileHandler(
            filename=f'logs/task_{datetime.now():%Y%m%d}.log',
            maxBytes=5 * 1024 * 1024,  # 5 MB
            backupCount=5
        )

        # Create formatter
        formatter = logging.Formatter(
            '%(asctime)s - [%(threadName)s] - %(levelname)s - %(message)s'
        )
        handler.setFormatter(formatter)

        self.logger.addHandler(handler)
        self.logger.setLevel(logging.INFO)

    def log_task_status(self, status, **kwargs):
        """Log task status with additional context"""
        extra = {
            'thread_id': threading.get_ident(),
            'timestamp': datetime.now().isoformat(),
            **kwargs
        }
        self.logger.info(f"Task status: {status}", extra=extra)

# Usage example
def background_job():
    logger = BackgroundTaskLogger('data_processing')
    try:
        logger.log_task_status('started', job_id=123)
        # Do some work...
        logger.log_task_status('completed', records_processed=1000)
    except Exception as e:
        logger.logger.error(f"Task failed: {str(e)}", exc_info=True)
```

To implement request tracking across your entire application:

```python
import logging
import threading
import uuid
from contextlib import contextmanager

# Store the request ID in thread-local storage
_request_id = threading.local()

class RequestIDFilter(logging.Filter):
    def filter(self, record):
        record.request_id = getattr(_request_id, 'id', 'no_request_id')
        return True

@contextmanager
def request_context(request_id=None):
    """Context manager for request tracking"""
    if request_id is None:
        request_id = str(uuid.uuid4())

    old_id = getattr(_request_id, 'id', None)
    _request_id.id = request_id
    try:
        yield request_id
    finally:
        if old_id is None:
            del _request_id.id
        else:
            _request_id.id = old_id

# Set up logging with a request ID in every record
def setup_request_logging():
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - [%(request_id)s] - %(levelname)s - %(message)s'
    )
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    handler.addFilter(RequestIDFilter())
    logger.addHandler(handler)
    return logger

# Usage example
logger = setup_request_logging()

def process_request(data):
    with request_context() as request_id:
        logger.info("Processing request", extra={
            'data': data,
            'operation': 'process_request'
        })
        # Process the request...
        logger.info("Request processed successfully")
```

User actions should also be tracked safely, without writing sensitive data such as passwords or tokens to the logs.
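As a sketch of what this can look like with the standard library (the logger name and field names below are illustrative, not a fixed convention), a dedicated audit logger can mask sensitive values before they are written:

```python
import logging

# Illustrative audit logger for user actions
audit_logger = logging.getLogger("myapp.audit")
audit_logger.setLevel(logging.INFO)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s - AUDIT - %(levelname)s - %(message)s"
))
audit_logger.addHandler(handler)

SENSITIVE_KEYS = {"password", "token", "credit_card"}

def log_user_action(user_id, action, **details):
    """Log a user action, masking sensitive values before they reach the log."""
    safe_details = {
        key: ("***" if key in SENSITIVE_KEYS else value)
        for key, value in details.items()
    }
    audit_logger.info("user_id=%s action=%s details=%s", user_id, action, safe_details)

# Usage
log_user_action("123", "login", ip="192.168.1.1", password="hunter2")
```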
Effective troubleshooting of logging problems requires understanding common issues and their solutions. This section covers the most frequent challenges developers face when implementing logging and offers practical approaches to debugging logging configurations.
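A common starting point, sketched here with standard-library calls only, is to inspect a logger's effective level, its handlers, and its propagation settings when messages go missing or appear twice:

```python
import logging

logger = logging.getLogger("myapp.orders")

# Inspect why records are (or are not) being emitted
print(logger.getEffectiveLevel())    # level inherited from ancestors if none is set here
print(logger.hasHandlers())          # True if this logger or an ancestor has handlers
print(logging.getLogger().handlers)  # handlers attached to the root logger

# Two frequent fixes:
logger.setLevel(logging.DEBUG)  # records below the effective level are silently dropped
logger.propagate = False        # prevents duplicate lines when both this logger and the
                                # root logger have handlers attached
```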
Loguru offers a simpler logging interface with powerful features available out of the box:
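A minimal sketch of that style (assuming the loguru package is installed):

```python
# pip install loguru
from loguru import logger

# One call configures a rotating file sink with a retention policy
logger.add("app.log", rotation="10 MB", retention="10 days", level="INFO")

logger.info("Application started")
logger.warning("Watch out!")

# Bind contextual data to a logger instance
request_logger = logger.bind(request_id="abc123")
request_logger.info("Handling request")

# Automatically log uncaught exceptions from a function
@logger.catch
def risky_operation():
    return 1 / 0

risky_operation()
```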
Structlog excels at structured logging with bound context:
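A minimal sketch (assuming the structlog package is installed) of configuring JSON output and binding context:

```python
# pip install structlog
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger()

# Context is carried as key-value pairs and rendered as JSON
log = log.bind(user_id="123", request_id="abc")
log.info("user_logged_in", ip="192.168.1.1")
log.warning("quota_almost_reached", used=95, limit=100)
```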
For JSON-formatted log output:
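One common option is the python-json-logger package; the following is a minimal sketch (the package and the format fields shown are assumptions, not requirements):

```python
# pip install python-json-logger
import logging
from pythonjsonlogger import jsonlogger

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

handler = logging.StreamHandler()
handler.setFormatter(jsonlogger.JsonFormatter(
    "%(asctime)s %(name)s %(levelname)s %(message)s"
))
logger.addHandler(handler)

# Extra fields are merged into the JSON output
logger.info("User logged in", extra={"user_id": "123", "ip": "192.168.1.1"})
```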
This guide has covered the essential aspects of Python logging, from basic setup to advanced implementations. Keep in mind that logging is a core part of application observability and maintenance; implement it thoughtfully and maintain it regularly for the best results.
Remember to review your logging setup regularly and update it as your application evolves and new requirements emerge.