From Python Scripts to Serverless AWS: My Investment Portfolio Journey
I started with simple Python scripts for AWS automation that gradually evolved into a more complex project. Three months ago I barely understood metaclasses; now I've built a full-fledged investment portfolio manager.
Years of using Python for AWS automation (including that infamous "does-everything" script) led me to build a proper application. Leveraging my past scripts, Stack Overflow, and Claude's AI assistance, I finally grasped software development principles.
App screenshot (seed data, not actual investments).
Tired of manual Excel spreadsheet updates for my investment portfolios, I automated the process. This Python application manages portfolios, tracks transactions, processes dividends, and even updates prices automatically. Initially, it ran beautifully in Docker on my home server (Flask backend, React frontend, SQLite database).
Running this on my home server felt inefficient. As an AWS professional, managing containers on my own hardware seemed counterintuitive. The solution seemed obvious: ECS. I already had the docker-compose file:
<code>
services:
  backend:
    build: ./backend
    container_name: investment-portfolio-backend
    environment:
      - DB_DIR=/data/db
      - LOG_DIR=/data/logs
      - DOMAIN=${DOMAIN:-localhost}
    volumes:
      - /path/to/your/data:/data
    networks:
      - app-network
  frontend:
    build:
      context: ./frontend
      args:
        - DOMAIN=${DOMAIN:-localhost}
        - USE_HTTPS=${USE_HTTPS:-false}
    container_name: investment-portfolio-frontend
    environment:
      - DOMAIN=${DOMAIN:-localhost}
      - USE_HTTPS=${USE_HTTPS:-false}
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - app-network
</code>
However, an AWS architect's perspective (and the pricing calculator) suggested a serverless approach instead.
This led me down the serverless rabbit hole. I had some prior serverless experience: a temperature-tracking project my wife and I built, which pulled KNMI weather data and generated a color-coded table for a crafting project.
<code>
| Date       | Min.Temp | Min.Kleur | Max.Temp | Max.Kleur |
----------------------------------------------------------------
| 2023-03-01 | -4.1°C   | darkblue  | 7.1°C    | lightblue |
| 2023-03-02 | 1.3°C    | blue      | 6.8°C    | lightblue |
...
</code>
This project ran locally or via Lambda/API Gateway, taking date parameters. Scaling this to a full Flask application with SQLAlchemy, background jobs, and complex relationships proved challenging.
My containerized application worked well, but the allure of serverless services was strong. The potential for automatic scaling and the elimination of container management were tempting.
So, I re-architected my application for a serverless environment. The original project took two months; this would be a breeze... or so I thought.
SQLite's limitations with Lambda led me to consider PostgreSQL Aurora Serverless, maintaining compatibility with my SQLAlchemy knowledge. I created a dual-handler:
<code>
@contextmanager
def db_session():
    # ... (code for environment-aware database session management) ...
</code>
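A minimal sketch of the idea, assuming the usual SQLAlchemy setup (the env-var name and file paths here are illustrative, not my exact configuration):
<code>
import os
from contextlib import contextmanager

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def _database_url():
    # AWS_LAMBDA_FUNCTION_NAME is set automatically inside Lambda.
    if os.environ.get("AWS_LAMBDA_FUNCTION_NAME"):
        return os.environ["AURORA_DATABASE_URL"]  # assumed env var, e.g. postgresql://...
    return "sqlite:////data/db/portfolio.db"       # local Docker path

_engine = create_engine(_database_url())
_Session = sessionmaker(bind=_engine)

@contextmanager
def db_session():
    """Yield a session, committing on success and rolling back on error."""
    session = _Session()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()
</code>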
Converting my Flask application to Lambda functions was more complex than anticipated. My initial attempt was clumsy:
<code>
# ... (initial, inefficient Lambda handler code) ...
</code>
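A reconstruction of the pattern, not the original code: every handler repeated session plumbing, serialization, and response formatting by hand.
<code>
import json

def get_portfolios_handler(event, context):
    # Session handling, serialization, and response formatting all inline,
    # and the same boilerplate copy-pasted into every handler.
    session = _Session()  # SQLAlchemy sessionmaker instance (assumed)
    try:
        portfolios = session.query(Portfolio).all()  # Portfolio: assumed model
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps([{"id": p.id, "name": p.name} for p in portfolios]),
        }
    except Exception as exc:
        return {"statusCode": 500, "body": json.dumps({"error": str(exc)})}
    finally:
        session.close()
</code>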
To improve maintainability, I created a decorator:
<code>
def lambda_response(func):
    # ... (decorator for cleaner Lambda responses) ...
</code>
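A sketch of how such a decorator can work: it wraps a handler so a plain Python return value is turned into the response shape API Gateway expects.
<code>
import functools
import json

def lambda_response(func):
    """Turn a handler's plain return value into an API Gateway-style response."""
    @functools.wraps(func)
    def wrapper(event, context):
        try:
            result = func(event, context)
            return {
                "statusCode": 200,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps(result),
            }
        except Exception as exc:
            return {
                "statusCode": 500,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps({"error": str(exc)}),
            }
    return wrapper
</code>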
This improved the structure of the Lambda functions:
<code>
@lambda_response
def get_portfolios(event, context):
    # ... (simplified Lambda function) ...
</code>
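Filled in, a handler reduces to its business logic (a sketch; Portfolio and its fields are illustrative):
<code>
@lambda_response
def get_portfolios(event, context):
    # Formatting and error handling now live in the decorator.
    with db_session() as session:
        portfolios = session.query(Portfolio).all()
        return [{"id": p.id, "name": p.name} for p in portfolios]
</code>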
However, this broke the original Flask routes. A new decorator enabled dual functionality:
<code>
def dual_handler(route_path, methods=None):
    # ... (decorator for both Flask routes and Lambda handlers) ...
</code>
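A sketch of the idea, assuming a module-level Flask `app` and the converter helpers shown next: the decorator registers the function as a normal Flask view and also attaches a Lambda-compatible wrapper.
<code>
import functools

def dual_handler(route_path, methods=None):
    """Register one function as both a Flask route and a Lambda handler."""
    methods = methods or ["GET"]

    def decorator(func):
        # Flask side: register the view as usual.
        app.route(route_path, methods=methods)(func)

        # Lambda side: rebuild a Flask request context from the API Gateway
        # event, call the same view, and convert the response back.
        @functools.wraps(func)
        def lambda_handler(event, context):
            with create_flask_request(event):
                return create_lambda_response(func())

        func.lambda_handler = lambda_handler
        return func

    return decorator
</code>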
Supporting functions ensured consistent responses:
<code>
def create_lambda_response(flask_response):
    # ... (function to convert Flask response to Lambda response format) ...

def create_flask_request(event):
    # ... (function to convert Lambda event to Flask request) ...
</code>
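Something along these lines, built on Flask's test_request_context (a sketch assuming a module-level `app`; error handling omitted):
<code>
from flask import make_response

def create_flask_request(event):
    """Build a Flask request context from an API Gateway proxy event."""
    return app.test_request_context(
        path=event.get("path", "/"),
        method=event.get("httpMethod", "GET"),
        query_string=event.get("queryStringParameters") or {},
        headers=event.get("headers") or {},
        data=event.get("body"),
    )

def create_lambda_response(view_result):
    """Convert a Flask view's return value into the dict API Gateway expects."""
    resp = make_response(view_result)
    return {
        "statusCode": resp.status_code,
        "headers": dict(resp.headers),
        "body": resp.get_data(as_text=True),
    }
</code>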
This allowed the same route definitions to serve both Flask and Lambda.
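With all the pieces in place, usage looks roughly like this (a sketch assuming the helpers above; route and fields are illustrative):
<code>
from flask import jsonify

@dual_handler("/api/portfolios", methods=["GET"])
def get_portfolios():
    with db_session() as session:
        portfolios = session.query(Portfolio).all()
        return jsonify([{"id": p.id, "name": p.name} for p in portfolios])

# Flask serves GET /api/portfolios as before; the Lambda entry point is
# get_portfolios.lambda_handler.
</code>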
The frontend was straightforward: S3 static website hosting and CloudFront made deployment easy. A simple script uploads the frontend build to S3 and invalidates the CloudFront cache.
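In essence, something like this boto3 script (bucket name, distribution ID, and build path are placeholders):
<code>
import mimetypes
import time
from pathlib import Path

import boto3

BUILD_DIR = Path("frontend/build")      # React build output (assumed location)
BUCKET = "my-portfolio-frontend"        # placeholder bucket name
DISTRIBUTION_ID = "E1234567890ABC"      # placeholder CloudFront distribution ID

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Upload every file in the build directory with a sensible Content-Type.
for path in BUILD_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(BUILD_DIR).as_posix()
        content_type = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
        s3.upload_file(str(path), BUCKET, key, ExtraArgs={"ContentType": content_type})

# Invalidate the whole distribution so the new build is served immediately.
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)
</code>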
After weeks of work, my application was serverless. I won't keep it online due to security concerns, but I learned valuable lessons along the way.
Would I repeat this? Probably not. But the journey was rewarding, teaching me about Python and dual-stack development. My investment portfolio manager now runs securely on my private network.