The problem is finally solved. The root cause was that the database access code had no transaction annotation, so the database connections were never managed: the number of connections in the pool kept growing until the limit was exceeded, and then the server stopped responding.

Normally this should have produced an error. The connection pool I originally used was org.apache.commons.dbcp.BasicDataSource, and for some reason it reported nothing, which kept me from locating the problem. I then noticed "connection reset" messages in the Tomcat log and guessed it was related to the number of database connections. After switching the pool to org.logicalcobwebs.proxool.ProxoolDataSource, the error became obvious: once my database accesses passed a certain count, it reported directly that the connection limit was exceeded. That told me the connections were not being released, and I then discovered the code had no transaction annotation (after adding it, Spring should be responsible for managing the database connection, right?). One annotation solved my problem. Learned something.
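For reference, a minimal sketch of what that fix looks like, assuming a Spring-managed service with annotation-driven transactions already configured (a DataSourceTransactionManager or equivalent plus <tx:annotation-driven/> or @EnableTransactionManagement). The class, method, and table names below are made up for illustration, not taken from the original code:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    @Autowired
    private JdbcTemplate jdbcTemplate; // backed by the pooled DataSource (DBCP, Proxool, ...)

    // @Transactional makes Spring obtain the connection when the transaction
    // begins, bind it to the current thread, and return it to the pool on
    // commit or rollback. In the situation described above, the data-access
    // code ran with no transaction demarcation at all, and (per the post)
    // connections taken from the pool were never returned, so the pool filled up.
    @Transactional
    public void rename(long id, String newName) {
        jdbcTemplate.update("UPDATE account SET name = ? WHERE id = ?", newName, id);
    }
}
```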
A server's failure to respond generally falls into the following situations:
1: The server itself is down. The whole machine is unreachable. Detection is the simplest: just ping it.
2: The service is down. The service that should handle the response has stopped, usually because of an internal exception; the service may be a single process. Detection is also simple: check whether other services on the same machine are still reachable.
3: Network abnormality. Use the same check as in the first case (ping).
Back to your question: why do frequent operations on the browser side make the server stop responding?
This is probably because 1: the service's logic is too complex and processing takes too long, so subsequent requests are blocked and left hanging; or 2: an internal exception crashed the process the service runs in, so subsequent requests get no response at all. PS: you can narrow down the likely failure point from the status code returned for the request.
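If it helps, here is a small sketch of such a check; the URL is a placeholder, and the interpretation in the comments is only a rough guide:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class StatusCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: point this at the endpoint that stops responding.
        URL url = new URL("http://localhost:8080/your-app/your-endpoint");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(3_000); // fail fast if the machine/port is unreachable
        conn.setReadTimeout(5_000);    // give up if the request hangs instead of erroring

        int code = conn.getResponseCode();
        // Rough reading: 5xx points at an exception inside the service,
        // 4xx at the request itself, while a read timeout with no status at all
        // is more consistent with requests being blocked by slow processing.
        System.out.println("HTTP status: " + code);
    }
}
```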
That's all.