Rate limiting protects and improves the availability of API-based services. If you are talking to an API and receive an HTTP 429 Too Many Requests response status code, you have been rate limited. This means you have exceeded the number of requests allowed in a given time. All you need to do is slow down, wait a moment, and try again.
When you consider limiting your own API-based services, you need to weigh the trade-offs between user experience, security, and performance.
The most common reason to control data flow is to maintain the availability of API-based services. But there are also security benefits: an accidental or intentional surge in inbound traffic can consume valuable resources and impact availability for other users. By controlling the rate of incoming requests, you can protect your service against both of these scenarios.
Rate limiting can be implemented at the client level, application level, infrastructure level, or anywhere in between. There are several ways to control inbound traffic to an API service, such as limiting by user, by IP address, or by overall request concurrency, and you can use any one of these rate limits (or even a combination).
No matter how you choose to implement it, the goal of rate limiting is to establish a checkpoint that denies or passes requests to access your resources. Many programming languages and frameworks have built-in functionality or middleware to achieve this, as well as options for various rate limiting algorithms.
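To make the "checkpoint" idea concrete before bringing in Redis, here is a minimal in-memory fixed-window counter in plain JavaScript. The names and limits here are illustrative, and an in-process map like this won't work across multiple server processes, which is one reason the tutorial reaches for Redis later.

```javascript
const WINDOW_MS = 10_000   // 10-second window
const MAX_REQUESTS = 10    // allowed requests per window

const hits = new Map()     // key (e.g. an IP address) -> { count, windowStart }

function isOverLimit(key, now = Date.now()) {
  const entry = hits.get(key)
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // no record yet, or the old window has elapsed: start a fresh window
    hits.set(key, { count: 1, windowStart: now })
    return false
  }
  entry.count += 1
  return entry.count > MAX_REQUESTS
}
```

An Express handler would call `isOverLimit(req.ip)` and respond with a 429 whenever it returns true, which is exactly the shape of the Redis-backed version built below.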
Here's one way to make your own rate limiter using Node and Redis:
1. Create a Node app
2. Add a rate limiter using Redis
3. Test in Postman

View the code examples on GitHub.
Before you begin, make sure you have Node and Redis installed on your computer.
Set up a new Node application from the command line. Accept the default options via the CLI prompts, or add the --yes flag.
$ npm init --yes
If you accepted the default options during project setup, create a file named index.js for the entry point.
$ touch index.js
Install the Express web framework, and then initialize the server in index.js.
const express = require('express')
const app = express()
const port = process.env.PORT || 3000

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening at http://localhost:${port}`))
Start the server from the command line.
$ node index.js
Go back to index.js and create a route that first checks the rate limit, then allows the user to access the resource if the limit has not been exceeded.
app.post('/', async (req, res) => {
  async function isOverLimit(ip) {
    // to define
  }
  // check the rate limit
  let overLimit = await isOverLimit(req.ip)
  if (overLimit) {
    res.status(429).send('Too many requests - try again later')
    return
  }
  // allow access to the resource
  res.send("Accessed the precious resources!")
})
In the next step, we will define the rate limiter function isOverLimit.
Redis is an in-memory key-value database, so it can retrieve data very quickly. Implementing rate limiting with Redis is also very simple.
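Under the hood, the whole pattern boils down to two Redis commands: INCR, which bumps a per-client counter (creating it at 1 if the key doesn't exist), and EXPIRE, which ages the counter out. A hypothetical redis-cli session, using 203.0.113.7 as a placeholder client IP:

```
127.0.0.1:6379> INCR 203.0.113.7
(integer) 1
127.0.0.1:6379> EXPIRE 203.0.113.7 10
(integer) 1
127.0.0.1:6379> TTL 203.0.113.7
(integer) 10
```

The isOverLimit function below does exactly this from Node.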
The rate limiting algorithm shown in the figure below is an example of a sliding window counter. A user who submits a moderate number of calls, or spaces them out over time, will never hit the rate limit. A user who exceeds the maximum number of requests within the 10-second window must wait for enough of the window to elapse before their requests succeed again.
Install a Redis client named ioredis for Node from the command line.
$ npm install ioredis
Start the Redis server locally.
$ redis-server
Then require and initialize the Redis client in index.js.
const Redis = require('ioredis')
const client = new Redis({
  port: process.env.REDIS_PORT || 6379,
  host: process.env.REDIS_HOST || 'localhost',
})

client.on('connect', function () {
  console.log('connected')
})
Define the isOverLimit function we started writing in the previous step. Following a common Redis pattern, it keeps a counter keyed by IP address.
async function isOverLimit(ip) {
  let res
  try {
    res = await client.incr(ip)
  } catch (err) {
    console.error('isOverLimit: could not increment key')
    throw err
  }
  console.log(`${ip} has value: ${res}`)
  if (res > 10) {
    return true
  }
  client.expire(ip, 10)
  return false
}
This is the rate limiter.
When a user calls the API, we check Redis to see if the user has exceeded the limit. If so, the API immediately returns an HTTP 429 status code with the message Too many requests - try again later. If the user is within the limit, we continue to the next block of code, where we can allow access to a protected resource (such as a database).
During the rate limit check, we find the user's record in Redis and increment its request count; if there is no record for that user, we create a new one. Either way, each record expires 10 seconds after the most recent activity.
In the next step, we'll make sure our rate limiter is working properly.
Save the changes and restart the server. We will use Postman to send POST requests to our API server, which is running locally at http://localhost:3000.
Continue sending requests in rapid succession to reach your rate limit.
This is a simple example of a rate limiter for Node and Redis, and it's just the beginning. There are a bunch of strategies and tools you can use to structure and implement your rate limits, and there are other enhancements that can be explored with this example, such as adding a Retry-After header to the 429 response so the user knows how long to wait before retrying.

Remember that when you look at API rate limits, you are making trade-offs between performance, security, and user experience. Your ideal rate limiting solution will change over time as those factors change.
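One such enhancement, the Retry-After header, could be sketched as follows. buildRetryAfter is a hypothetical helper (not part of the original article), and client.ttl is the ioredis call for the Redis TTL command, which reports how many seconds remain before the counter key expires.

```javascript
// Hypothetical helper: turn the TTL Redis reports for the counter key into a
// Retry-After header value. Redis returns a negative TTL when the key is
// missing or has no expiry set, so fall back to the full 10-second window.
function buildRetryAfter(ttlSeconds, windowSeconds = 10) {
  return String(ttlSeconds > 0 ? ttlSeconds : windowSeconds)
}

// Inside the route handler, when the limit is exceeded (sketch):
//   const ttl = await client.ttl(req.ip)
//   res.set('Retry-After', buildRetryAfter(ttl))
//   res.status(429).send('Too many requests - try again later')
```

Well-behaved HTTP clients read this header to decide how long to back off before retrying.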
Original English article: https://codeburst.io/api-rate-limiting-with-node-and-redis-95354259c768