Build an API to Keep Your Marketing Emails Out of Spam
When running email marketing campaigns, one of the biggest challenges is ensuring that your messages reach the inbox rather than the spam folder.
Apache SpamAssassin is widely used by email providers and filtering tools to classify messages as spam. In this post, we'll explore how to leverage SpamAssassin to check whether your email is likely to be marked as spam, and why.
The logic will be packaged as an API and deployed online, so that it can be integrated into your workflow.
Why Apache SpamAssassin?
Apache SpamAssassin is an open-source spam detection platform maintained by the Apache Software Foundation. It uses a multitude of rules, Bayesian filtering, and network tests to assign a spam “score” to a given email. Generally, an email scoring 5 or above is at high risk of being flagged as spam.
Because SpamAssassin’s scoring is transparent and well documented, you can also use it to identify exactly which aspects of your email are driving up its spam score and improve your writing accordingly.
Getting Started with SpamAssassin
SpamAssassin is designed to run on Linux systems. You'll need a Linux machine, or you can run it inside a Docker container.
On Debian or Ubuntu systems, install SpamAssassin with:
apt-get update && apt-get install -y spamassassin
sa-update
The sa-update command downloads the latest rule set, ensuring that SpamAssassin’s rules are up to date.
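If you don't have a Linux box handy, SpamAssassin can run inside a container. A minimal Dockerfile sketch (the base image is an assumption; this is not a hardened production setup):

```dockerfile
FROM debian:bookworm-slim
# Install SpamAssassin, then clean the apt cache to keep the image small
RUN apt-get update && apt-get install -y spamassassin && rm -rf /var/lib/apt/lists/*
# Fetch the latest rule set at build time
RUN sa-update
```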
Once installed, you can pipe an email message into SpamAssassin’s command-line tool. The output is an annotated version of the email with the spam score and an explanation of which rules were triggered.
A typical usage might look like this:
spamassassin -t < input_email.txt > results.txt
results.txt will then contain the processed email with SpamAssassin’s headers and scores.
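Among the headers SpamAssassin adds is X-Spam-Status, which carries the verdict and the total score. A small sketch of pulling those out (the sample header below is illustrative, not real output):

```python
import re

# An illustrative X-Spam-Status header, shaped like SpamAssassin's output
sample_header = "X-Spam-Status: Yes, score=6.2 required=5.0 tests=HTML_MESSAGE,MISSING_MID"

def parse_spam_status(header):
    """Extract the verdict and numeric score from an X-Spam-Status header."""
    match = re.search(r"X-Spam-Status:\s*(Yes|No),\s*score=([-\d.]+)", header)
    if not match:
        return None
    verdict, score = match.groups()
    return {"is_spam": verdict == "Yes", "score": float(score)}

print(parse_spam_status(sample_header))  # {'is_spam': True, 'score': 6.2}
```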
Use FastAPI to Wrap SpamAssassin as an API
Next, let’s create a simple API that accepts two email fields, subject and html_body, passes them to SpamAssassin, and returns the analysis result.
Example FastAPI Code
from fastapi import FastAPI
from datetime import datetime, timezone
from email.utils import format_datetime
from pydantic import BaseModel
import subprocess
import re


def extract_analysis_details(text):
    # Grab the rules table between "Content analysis details:" and the next blank line
    rules_section = re.search(
        r"Content analysis details:.*?(pts rule name.*?description.*?)\n\n",
        text,
        re.DOTALL,
    )
    if not rules_section:
        return []
    rules_text = rules_section.group(1)
    pattern = r"^\s*([-\d.]+)\s+(\S+)\s+(.+)$"
    rules = []
    # Skip the column-header and dashed-separator lines of the table
    for line in rules_text.splitlines()[2:]:
        match = re.match(pattern, line)
        if match:
            score, rule, description = match.groups()
            rules.append({
                "rule": rule,
                "score": float(score),
                "description": description.strip(),
            })
    return rules


app = FastAPI()


class Email(BaseModel):
    subject: str
    html_body: str


@app.post("/spam_check")
def spam_check(email: Email):
    # Assemble a full email message; note the blank line separating headers from the body
    message = f"""From: example@example.com
To: recipient@example.com
Subject: {email.subject}
Date: {format_datetime(datetime.now(timezone.utc))}
Content-Type: text/html; charset="UTF-8"

{email.html_body}"""

    # Run SpamAssassin and capture the annotated output directly
    output = subprocess.run(
        ["spamassassin", "-t"],
        input=message.encode("utf-8"),
        capture_output=True,
    )
    output_str = output.stdout.decode("utf-8", errors="replace")
    details = extract_analysis_details(output_str)
    return {"result": details}
The response contains the parsed analysis details from SpamAssassin’s report.
Let's take this input as an example:
subject:
Test Email

html_body:
<html>
<body>
<p>This is an <b>HTML</b> test email.</p>
</body>
</html>
The response would look like this:
[
  {
    "rule": "MISSING_MID",
    "score": 0.1,
    "description": "Missing Message-Id: header"
  },
  {
    "rule": "NO_RECEIVED",
    "score": -0.0,
    "description": "Informational: message has no Received headers"
  },
  {
    "rule": "NO_RELAYS",
    "score": -0.0,
    "description": "Informational: message was not relayed via SMTP"
  },
  {
    "rule": "HTML_MESSAGE",
    "score": 0.0,
    "description": "BODY: HTML included in message"
  },
  {
    "rule": "MIME_HTML_ONLY",
    "score": 0.1,
    "description": "BODY: Message only has text/html MIME parts"
  },
  {
    "rule": "MIME_HEADER_CTYPE_ONLY",
    "score": 0.1,
    "description": "'Content-Type' found without required MIME headers"
  }
]
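The parsing step can be exercised on its own, without SpamAssassin installed. Here's a standalone sketch of the extraction logic run against a shortened, made-up report (the report text below is illustrative, not real output):

```python
import re

# A shortened, illustrative SpamAssassin report (made up for this demo)
sample_report = """Content analysis details:   (0.2 points, 5.0 required)

 pts rule name              description
---- ---------------------- --------------------------------------------------
 0.1 MISSING_MID            Missing Message-Id: header
 0.1 MIME_HTML_ONLY         BODY: Message only has text/html MIME parts

"""

def extract_analysis_details(text):
    # Grab the rules table between "Content analysis details:" and the next blank line
    rules_section = re.search(
        r"Content analysis details:.*?(pts rule name.*?description.*?)\n\n",
        text,
        re.DOTALL,
    )
    if not rules_section:
        return []
    rules = []
    # Skip the column-header and dashed-separator lines of the table
    for line in rules_section.group(1).splitlines()[2:]:
        match = re.match(r"^\s*([-\d.]+)\s+(\S+)\s+(.+)$", line)
        if match:
            score, rule, description = match.groups()
            rules.append({
                "rule": rule,
                "score": float(score),
                "description": description.strip(),
            })
    return rules

print(extract_analysis_details(sample_report))
```

Running this prints a list with one dict per triggered rule, mirroring the JSON response above.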
Deploying the API Online
Running SpamAssassin requires a Linux environment with the software installed. Traditionally, you might spin up an EC2 instance or a DigitalOcean droplet, which can be costly and tedious to maintain, especially if your usage is low-volume.
As for serverless platforms, they often do not provide a straightforward way to run system packages like SpamAssassin.
With Leapcell, you can install system packages like SpamAssassin while keeping the service serverless: you only pay per invocation, which is usually cheaper.
Deploying the API on Leapcell is straightforward. You don't have to set up a Linux environment or write a Dockerfile yourself. Just select the Python image when deploying and fill in the "Build Command" field appropriately.
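For example, the build command could install SpamAssassin alongside the Python dependencies. This is a sketch; the exact field syntax and available packages depend on the platform, so check Leapcell's documentation:

```shell
apt-get update && apt-get install -y spamassassin && sa-update; pip install fastapi uvicorn
```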
Once deployed, you’ll have an endpoint you can call on-demand. Whenever your API is invoked, it will run SpamAssassin, score the email, and return the response.