


Use nginx proxy_cache to configure a website cache fallback
As we all know, nginx proxy_cache can cache request responses, playing a role similar to a CDN and even offering more features. It can also be used as a fallback cache for the underlying data: when the backend Tomcat goes down, nginx returns the cached data directly to the user.
The detailed configuration is shown below.
upstream tomcat_localhost {
    # Be careful not to set the timeouts too long here (see the note after this block)
    server 127.0.0.1:8080 weight=10 max_fails=1 fail_timeout=1s;
}
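With max_fails=1 and fail_timeout=1s, a single failed request marks the backend as unavailable for one second, which lets nginx fall back to the cache quickly. For the same reason it can help to keep the proxy timeouts short in the location block below; these directives and values are not part of the original configuration, just an assumed sketch:

# Assumed additions (not in the original config): detect a dead backend quickly
proxy_connect_timeout 1s;   # max time to establish a connection to Tomcat
proxy_read_timeout 5s;      # max time to wait for a response from Tomcat
proxy_send_timeout 5s;      # max time to transmit the request to Tomcat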
1. nginx cache space configuration. This is defined in the nginx configuration file above the server block (i.e. at the http level).
# proxy_cache_path: the cache file path of the proxy cache (/export/Data/huishou.jd.local in the author's deployment).
# The directory must be created in advance, otherwise the configuration check fails; do not delete it afterwards, or the cache is lost.
# levels=1:2: nginx creates two levels of subdirectories under the cache path; first-level directory names are one
# character long, second-level names are two characters long (see the layout example after the directive).
# keys_zone=my_cache:100m defines the cache zone name and the size of its shared memory zone; the name is referenced by
# proxy_cache in the location block below. max_size limits the total size of cached data on disk and can be sized
# according to your server and the amount of content the system needs to cache.
# inactive=1d: cache files that are not accessed within this time are removed from the cache path regardless of their
# freshness; the next request then goes back to the origin and a new cache file is generated.
proxy_cache_path /export/Data/cache levels=1:2 keys_zone=my_cache:100m max_size=300m inactive=1d;
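As an illustration of levels=1:2 (not shown in the original article): nginx names each cache file after the MD5 hash of the proxy_cache_key; the last character of the hash becomes the first-level directory and the two characters before it the second-level directory. For a hypothetical key hash the on-disk layout looks like this:

# hypothetical key hash: b7f54b2df7773722d382f4809d65029c
# resulting cache file:  /export/Data/cache/c/29/b7f54b2df7773722d382f4809d65029c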
2. nginx cache fallback configuration, placed inside the specific location block (a consolidated sketch of the full setup follows the block).
location / {
    # The cache zone name defined above; required.
    proxy_cache my_cache;
    # The cache key; it can also be customized from the URL parameters: $args stands for all parameters, or a single
    # one can be used, e.g. $arg_name takes only the "name" parameter from the parameter list. Configure this
    # flexibly; required.
    proxy_cache_key $host$uri$is_args$args;
    # Requests that bypass the cache: if this value is non-empty, the request is not served from the cache. Required.
    proxy_cache_bypass $arg_noCache;
    # Cache time per response code. After this time nginx goes back to the origin for fresh data even if a cache file
    # still exists. The cache time can also be controlled by the Cache-Control header of the response, which has the
    # highest priority. If neither this directive nor the header sets a cache time, nginx does not generate a cache
    # file, so at least one of the two must be configured.
    proxy_cache_valid 200 1m;
    # The fallback directive: stale data beats an error. When the request to the backend fails with one of the listed
    # conditions, nginx serves the old data from the cache file to the user instead. Required.
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    # Cache concurrency lock: on a cache miss, only one request goes back to the origin Tomcat for the data while the
    # other requests wait. Optional.
    proxy_cache_lock on;
    proxy_cache_lock_timeout 1s;
    proxy_pass http://tomcat_localhost;
}
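Putting the pieces together, a minimal consolidated sketch of the whole setup could look like the following. The listen port, server_name and the X-Cache-Status debug header are assumptions added for illustration and were not in the original article; $upstream_cache_status is a standard nginx variable that reports MISS, HIT, STALE and so on.

# All of the following lives inside the http block.
upstream tomcat_localhost {
    server 127.0.0.1:8080 weight=10 max_fails=1 fail_timeout=1s;
}

proxy_cache_path /export/Data/cache levels=1:2 keys_zone=my_cache:100m max_size=300m inactive=1d;

server {
    listen 80;                  # assumed
    server_name example.com;    # assumed placeholder

    location / {
        proxy_cache my_cache;
        proxy_cache_key $host$uri$is_args$args;
        proxy_cache_bypass $arg_noCache;
        proxy_cache_valid 200 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        proxy_cache_lock_timeout 1s;
        add_header X-Cache-Status $upstream_cache_status;   # assumed, for verification only
        proxy_pass http://tomcat_localhost;
    }
}

To verify the fallback, request the same URL twice and check the X-Cache-Status response header (MISS, then HIT); then stop Tomcat, and once the entry has passed its proxy_cache_valid time the header shows STALE while the old data is still served.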
3. The nginx cache fallback configuration is now complete. During the validity period of the cache files, cached pages and interfaces continue to serve users normally even if the backend Tomcat goes down.
This article has introduced how to use nginx proxy_cache for website cache fallback configuration. I hope it is helpful to readers interested in the topic.
