Before diving into the technical details, it's worth understanding why proper logging matters.
For those new to Python logging, here is a basic example using logging.basicConfig:
```python
# Simple Python logging example
import logging

# Basic logger configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Create a logger
logger = logging.getLogger(__name__)

# Use the logger
logger.info("This is an information message")
logger.warning("This is a warning message")
```
This example demonstrates the basics of the logging module: configure it once with basicConfig, then write messages through a named logger anywhere in your application.
Let's start with a simple logging configuration:
```python
import logging

# Basic configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Your first logger
logger = logging.getLogger(__name__)

# Using the logger
logger.info("Application started")
logger.warning("Watch out!")
logger.error("Something went wrong")
```
Python logging defines five standard levels:
| Level | Numeric Value | When to Use |
|---|---|---|
| DEBUG | 10 | Detailed information for diagnosing problems |
| INFO | 20 | General operational events |
| WARNING | 30 | Something unexpected happened |
| ERROR | 40 | A more serious problem occurred |
| CRITICAL | 50 | The program may not be able to continue |
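To see these thresholds in action, here is a small, self-contained sketch (the messages are illustrative): with the level set to WARNING, only records with a numeric value of 30 or higher are emitted.

```python
import logging

logging.basicConfig(level=logging.WARNING)  # threshold: WARNING (30)
logger = logging.getLogger("levels_demo")

logger.debug("Connection pool size is 10")            # 10 < 30: dropped
logger.info("Request handled in 12 ms")               # 20 < 30: dropped
logger.warning("Retrying flaky upstream call")        # 30 >= 30: emitted
logger.error("Payment provider returned 500")         # 40 >= 30: emitted
logger.critical("Out of disk space, shutting down")   # 50 >= 30: emitted
```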
Why use logging instead of print statements? Unlike print, the logging module gives you severity levels, timestamps, and source information for free; it can route output to multiple destinations such as the console, files, or external services; and it can be reconfigured or silenced without changing application code.
Beyond console output, you will usually want to persist logs to a file. basicConfig can write to a file directly:
```python
import logging

logging.basicConfig(
    filename='app.log',
    filemode='w',  # overwrite the file on each run; use 'a' to append
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    level=logging.DEBUG,
    datefmt='%Y-%m-%d %H:%M:%S'
)
```
For more complex applications, logging.config.dictConfig lets you declare formatters, handlers, and loggers in a single configuration:
```python
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {
        'detailed': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'detailed'
        },
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'app.log',
            'level': 'DEBUG',
            'formatter': 'detailed'
        }
    },
    'loggers': {
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
            'propagate': True
        }
    }
}

logging.config.dictConfig(config)
```
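Once dictConfig has run, code anywhere in the application can pick up the configured logger by name. A brief usage sketch:

```python
import logging

logger = logging.getLogger('myapp')  # the logger configured above

logger.debug("Cache miss for key %s", "user:42")   # file only (console handler filters at INFO)
logger.info("Cache warmed with %d entries", 128)   # console and file
```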
Structured logging produces a consistent, machine-readable format, which is essential for log analysis and monitoring. For a broader overview of structured logging patterns and best practices, see the structured logging guide. Here is one way to implement structured logging in Python with a custom JSON formatter:
```python
import json
import logging

class JSONFormatter(logging.Formatter):
    def format(self, record):
        # Create the base log record
        log_obj = {
            "timestamp": self.formatTime(record, self.datefmt),
            "name": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno
        }

        # Add exception info if present
        if record.exc_info:
            log_obj["exception"] = self.formatException(record.exc_info)

        # Add custom fields passed via `extra`
        if hasattr(record, "extra_fields"):
            log_obj.update(record.extra_fields)

        return json.dumps(log_obj)

# Usage example
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)

# Log with extra fields
logger.info(
    "User logged in",
    extra={"extra_fields": {"user_id": "123", "ip": "192.168.1.1"}}
)
```
Proper error logging is critical for debugging production issues. Here is a comprehensive approach: a context manager that captures the exception, its stack trace, and the surrounding context:
```python
import logging
import sys
import traceback
from contextlib import contextmanager

class ErrorLogger:
    def __init__(self, logger):
        self.logger = logger

    @contextmanager
    def error_context(self, operation_name, **context):
        """Context manager for error logging with additional context."""
        try:
            yield
        except Exception:
            # Capture the current exception and stack trace
            exc_type, exc_value, exc_traceback = sys.exc_info()

            # Format error details
            error_details = {
                "operation": operation_name,
                "error_type": exc_type.__name__,
                "error_message": str(exc_value),
                "context": context,
                "stack_trace": traceback.format_exception(exc_type, exc_value, exc_traceback)
            }

            # Log the error with full context
            self.logger.error(
                f"Error in {operation_name}: {exc_value}",
                extra={"error_details": error_details}
            )

            # Re-raise the exception
            raise

# Usage example
logger = logging.getLogger(__name__)
error_logger = ErrorLogger(logger)

with error_logger.error_context("user_authentication", user_id="123", attempt=2):
    # Your code that might raise an exception
    authenticate_user(user_id)
```
When logging from a multi-threaded application, you need to ensure thread safety:
```python
import logging
import threading
from queue import Queue
from logging.handlers import QueueHandler, QueueListener

def setup_thread_safe_logging():
    """Set up thread-safe logging with a queue."""
    # Create the queue
    log_queue = Queue()

    # Create the real handlers
    console_handler = logging.StreamHandler()
    file_handler = logging.FileHandler('app.log')

    # Create queue handler and listener
    queue_handler = QueueHandler(log_queue)
    listener = QueueListener(
        log_queue,
        console_handler,
        file_handler,
        respect_handler_level=True
    )

    # Configure the root logger
    root_logger = logging.getLogger()
    root_logger.setLevel(logging.INFO)
    root_logger.addHandler(queue_handler)

    # Start the listener in a separate thread
    listener.start()
    return listener

# Usage
listener = setup_thread_safe_logging()

def worker_function():
    logger = logging.getLogger(__name__)
    logger.info(f"Worker thread {threading.current_thread().name} starting")
    # Do work...
    logger.info(f"Worker thread {threading.current_thread().name} finished")

# Create and start threads
threads = [
    threading.Thread(target=worker_function)
    for _ in range(3)
]

for thread in threads:
    thread.start()

# Wait for workers, then stop the listener so queued records are flushed
for thread in threads:
    thread.join()

listener.stop()
```
Different application environments call for specific logging approaches. Whether you are building web applications, microservices, or background tasks, each environment has its own logging requirements and best practices. Let's look at how to implement effective logging across common deployment scenarios, starting with a comprehensive Django logging setup:
```python
# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
            'style': '{',
        },
        'simple': {
            'format': '{levelname} {message}',
            'style': '{',
        },
    },
    'filters': {
        'require_debug_true': {
            '()': 'django.utils.log.RequireDebugTrue',
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'filters': ['require_debug_true'],
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'level': 'ERROR',
            'class': 'logging.FileHandler',
            'filename': 'django-errors.log',
            'formatter': 'verbose'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
        }
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'propagate': True,
        },
        'django.request': {
            'handlers': ['file', 'mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'INFO',
        }
    }
}
```
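With this LOGGING dict in settings.py, application code retrieves a configured logger by name. The view below is an illustrative sketch, not part of the original configuration:

```python
# views.py (illustrative)
import logging

from django.http import JsonResponse

logger = logging.getLogger('myapp')  # matches the 'myapp' logger configured above

def checkout(request):
    logger.info("Checkout started for user %s", request.user.pk)
    try:
        # ... process the order ...
        return JsonResponse({'status': 'ok'})
    except Exception:
        logger.exception("Checkout failed")  # logs at ERROR with traceback via the 'myapp' handlers
        return JsonResponse({'status': 'error'}, status=500)
```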
Flask provides its own logging system, which can be customized:
```python
import logging
from logging.handlers import RotatingFileHandler
from flask import Flask, request, jsonify

app = Flask(__name__)

def setup_logger():
    # Create formatter
    formatter = logging.Formatter(
        '[%(asctime)s] %(levelname)s in %(module)s: %(message)s'
    )

    # Rotating file handler
    file_handler = RotatingFileHandler(
        'flask_app.log',
        maxBytes=10485760,  # 10MB
        backupCount=10
    )
    file_handler.setLevel(logging.INFO)
    file_handler.setFormatter(formatter)

    # Formatter that adds request context
    class RequestFormatter(logging.Formatter):
        def format(self, record):
            record.url = request.url
            record.remote_addr = request.remote_addr
            return super().format(record)

    # Configure the app logger
    app.logger.addHandler(file_handler)
    app.logger.setLevel(logging.INFO)

    return app.logger

setup_logger()

# Usage in routes
@app.route('/api/endpoint')
def api_endpoint():
    app.logger.info(f'Request received from {request.remote_addr}')
    # Your code here
    return jsonify({'status': 'success'})
```
FastAPI can build on Python's logging with some middleware enhancements:
```python
from fastapi import FastAPI, Request
from typing import Callable
import logging
import time

app = FastAPI()

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Middleware for request logging
@app.middleware("http")
async def log_requests(request: Request, call_next: Callable):
    start_time = time.time()
    response = await call_next(request)
    duration = time.time() - start_time

    log_dict = {
        "url": str(request.url),
        "method": request.method,
        "client_ip": request.client.host,
        "duration": f"{duration:.2f}s",
        "status_code": response.status_code
    }
    logger.info(f"Request processed: {log_dict}")

    return response

# Example endpoint with logging
@app.get("/items/{item_id}")
async def read_item(item_id: int):
    logger.info(f"Retrieving item {item_id}")
    # Your code here
    return {"item_id": item_id}
```
For microservices, distributed tracing and correlation IDs are essential:
```python
import logging
import contextvars
from uuid import uuid4

# Context variable holding the current trace ID
trace_id_var = contextvars.ContextVar('trace_id', default=None)

class TraceIDFilter(logging.Filter):
    def filter(self, record):
        trace_id = trace_id_var.get()
        record.trace_id = trace_id if trace_id else 'no_trace'
        return True

def setup_microservice_logging(service_name):
    logger = logging.getLogger(service_name)

    # Formatter that includes the trace ID
    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - [%(trace_id)s] - %(levelname)s - %(message)s'
    )

    # Handler with the trace ID filter
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    handler.addFilter(TraceIDFilter())

    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    return logger

# Usage in a microservice
logger = setup_microservice_logging('order_service')

def process_order(order_data):
    # Generate or reuse a trace ID for this request
    trace_id_var.set(str(uuid4()))

    logger.info("Starting order processing", extra={
        'order_id': order_data['id'],
        'customer_id': order_data['customer_id']
    })
    # Process the order...
    logger.info("Order processed successfully")
```
Track user actions securely:
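The original example for this section is not included above, so here is a minimal sketch built on the standard logging module. The audit logger setup, field names, and masking rules are illustrative assumptions:

```python
import logging

# Dedicated audit logger (names and masking rules are illustrative)
audit_logger = logging.getLogger("audit")
audit_logger.setLevel(logging.INFO)

handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter('%(asctime)s - AUDIT - %(message)s'))
audit_logger.addHandler(handler)

SENSITIVE_KEYS = {"password", "token", "ssn"}

def mask_sensitive(details: dict) -> dict:
    """Replace sensitive values so they never reach the log file."""
    return {key: "***" if key in SENSITIVE_KEYS else value
            for key, value in details.items()}

def log_user_action(user_id: str, action: str, details: dict) -> None:
    """Record who did what, with sensitive fields masked."""
    audit_logger.info("user_id=%s action=%s details=%s",
                      user_id, action, mask_sensitive(details))

# Usage
log_user_action("123", "password_change", {"password": "hunter2", "ip": "192.168.1.1"})
```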
Troubleshooting logging problems effectively requires understanding common issues and their solutions. This section covers the challenges developers run into most often when implementing logging and offers practical ways to debug a logging configuration.
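As a quick illustration of the most common problem, messages that silently never appear, here is a small diagnostic sketch. The helper name diagnose_logger is my own, not from the original guide:

```python
import logging

def diagnose_logger(name: str) -> None:
    """Print the settings that most often explain missing log output."""
    logger = logging.getLogger(name)
    print(f"logger: {name or 'root'}")
    print(f"  effective level: {logging.getLevelName(logger.getEffectiveLevel())}")
    print(f"  handlers: {logger.handlers or 'none (records propagate to ancestors)'}")
    print(f"  propagate: {logger.propagate}")
    for handler in logger.handlers:
        print(f"  handler {handler}: level {logging.getLevelName(handler.level)}")

# Typical pitfall: DEBUG messages dropped because the effective level is WARNING
logger = logging.getLogger("myapp")
logger.debug("You will not see this yet")
diagnose_logger("myapp")

# Fix: configure a handler and lower the levels
logging.basicConfig(level=logging.DEBUG)  # adds a root handler
logger.setLevel(logging.DEBUG)
logger.debug("Now this appears")
```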
For background tasks, we need to ensure proper log handling and rotation:
```python
import logging
import os
import threading
from datetime import datetime
from logging.handlers import RotatingFileHandler

class BackgroundTaskLogger:
    def __init__(self, task_name):
        self.logger = logging.getLogger(f'background_task.{task_name}')
        self.setup_logging()

    def setup_logging(self):
        # Create the logs directory if it doesn't exist
        os.makedirs('logs', exist_ok=True)

        # Set up a rotating file handler
        handler = RotatingFileHandler(
            filename=f'logs/task_{datetime.now():%Y%m%d}.log',
            maxBytes=5*1024*1024,  # 5MB
            backupCount=5
        )

        # Create formatter
        formatter = logging.Formatter(
            '%(asctime)s - [%(threadName)s] - %(levelname)s - %(message)s'
        )
        handler.setFormatter(formatter)

        self.logger.addHandler(handler)
        self.logger.setLevel(logging.INFO)

    def log_task_status(self, status, **kwargs):
        """Log task status with additional context."""
        extra = {
            'thread_id': threading.get_ident(),
            'timestamp': datetime.now().isoformat(),
            **kwargs
        }
        self.logger.info(f"Task status: {status}", extra=extra)

# Usage example
def background_job():
    logger = BackgroundTaskLogger('data_processing')
    try:
        logger.log_task_status('started', job_id=123)
        # Do some work...
        logger.log_task_status('completed', records_processed=1000)
    except Exception as e:
        logger.logger.error(f"Task failed: {str(e)}", exc_info=True)
```
To implement request tracking in your application:
```python
import logging
import threading
import uuid
from contextlib import contextmanager

# Store the request ID in thread-local storage
_request_id = threading.local()

class RequestIDFilter(logging.Filter):
    def filter(self, record):
        record.request_id = getattr(_request_id, 'id', 'no_request_id')
        return True

@contextmanager
def request_context(request_id=None):
    """Context manager for request tracking."""
    if request_id is None:
        request_id = str(uuid.uuid4())

    old_id = getattr(_request_id, 'id', None)
    _request_id.id = request_id
    try:
        yield request_id
    finally:
        if old_id is None:
            del _request_id.id
        else:
            _request_id.id = old_id

# Set up logging with a request ID
def setup_request_logging():
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - [%(request_id)s] - %(levelname)s - %(message)s'
    )

    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    handler.addFilter(RequestIDFilter())

    logger.addHandler(handler)
    return logger

# Usage example
logger = setup_request_logging()

def process_request(data):
    with request_context() as request_id:
        logger.info("Processing request", extra={
            'data': data,
            'operation': 'process_request'
        })
        # Process the request...
        logger.info("Request processed successfully")
```
Beyond the standard library, third-party packages can simplify logging further: Loguru offers a simpler interface with powerful features out of the box, Structlog excels at structured logging with bound context, and python-json-logger makes JSON-formatted output straightforward.
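The original code samples for these libraries are not included above, so the following is a minimal sketch of typical usage, assuming loguru, structlog, and python-json-logger are installed:

```python
# Loguru: one import, sensible defaults, built-in rotation
from loguru import logger as loguru_logger

loguru_logger.add("app.log", rotation="10 MB", retention="10 days", level="INFO")
loguru_logger.info("Application started")

# Structlog: structured events with key-value context
import structlog

structlog.configure(processors=[
    structlog.processors.TimeStamper(fmt="iso"),
    structlog.processors.JSONRenderer(),
])
log = structlog.get_logger()
log.info("user_logged_in", user_id="123", ip="192.168.1.1")

# python-json-logger: JSON output through the standard logging module
import logging
from pythonjsonlogger import jsonlogger

json_handler = logging.StreamHandler()
json_handler.setFormatter(jsonlogger.JsonFormatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
json_logger = logging.getLogger("json_app")
json_logger.addHandler(json_handler)
json_logger.setLevel(logging.INFO)
json_logger.info("Order created", extra={"order_id": 42})
```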
This guide has covered the essential aspects of Python logging, from basic setup to advanced implementations. Remember that logging is an integral part of application observability and maintenance; implement it thoughtfully and maintain it regularly for the best results.
Also remember to review and update your logging implementation as your application evolves and new requirements emerge.