First of all, recall that in the previous article on the worker process's event loop there was one function we did not cover: ngx_process_events_and_timers. Today we will study this function.
This article comes from: http://blog.csdn.net/lengzijian/article/details/7601730
First, let's look again at the screenshots from Section 19. Today we mainly explain the event-driven function, the part marked in red in the picture:
src/event/ngx_event.c

void
ngx_process_events_and_timers(ngx_cycle_t *cycle)
{
    ngx_uint_t  flags;
    ngx_msec_t  timer, delta;

    if (ngx_timer_resolution) {
        timer = NGX_TIMER_INFINITE;
        flags = 0;

    } else {
        timer = ngx_event_find_timer();
        flags = NGX_UPDATE_TIME;
    }

    /*
     * ngx_use_accept_mutex indicates whether the accept mutex is used.
     * It is on by default and can be turned off with the
     * "accept_mutex off;" directive.
     */
    if (ngx_use_accept_mutex) {

        /*
         * ngx_accept_disabled is computed in the ngx_event_accept function.
         * When it is greater than 0, this process has already accepted too
         * many connections, so it gives up one chance to compete for the
         * accept mutex, decrements the counter by one, and goes on to
         * process the events on its existing connections. nginx uses this
         * to implement basic load balancing of accepted connections.
         */
        if (ngx_accept_disabled > 0) {
            ngx_accept_disabled--;

        } else {
            /*
             * Try to acquire the accept mutex. Only the process that gets
             * the lock puts the listening sockets into epoll. This ensures
             * that just one process watches the listening sockets, so the
             * workers blocked in epoll_wait do not all wake up at once
             * (the "thundering herd" problem).
             *
             * If the process acquires the lock, the NGX_POST_EVENTS flag is
             * set. This flag makes all generated events go into queues, to
             * be processed only after the lock is released. Handling the
             * events may be time-consuming, so running the handlers while
             * still holding the lock would keep the lock occupied for a
             * long time and hurt efficiency.
             */
            if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
                return;
            }

            if (ngx_accept_mutex_held) {
                flags |= NGX_POST_EVENTS;

            } else {
                /*
                 * A process that did not get the lock naturally does not
                 * need the NGX_POST_EVENTS flag, but it must cap its wait
                 * timeout so it can compete for the lock again soon.
                 */
                if (timer == NGX_TIMER_INFINITE
                    || timer > ngx_accept_mutex_delay)
                {
                    timer = ngx_accept_mutex_delay;
                }
            }
        }
    }

    delta = ngx_current_msec;

    /*
     * Here epoll starts waiting for events. The concrete implementation of
     * ngx_process_events is the ngx_epoll_process_events function in the
     * epoll module, which will be explained in detail later.
     */
    (void) ngx_process_events(cycle, timer, flags);

    // measure how long this wait took
    delta = ngx_current_msec - delta;

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "timer delta: %M", delta);

    /*
     * ngx_posted_accept_events is an event queue that temporarily stores
     * the accept events epoll collected from the listening sockets. With
     * the NGX_POST_EVENTS flag mentioned above, all accept events are
     * stored in this queue.
     */
    if (ngx_posted_accept_events) {
        ngx_event_process_posted(cycle, &ngx_posted_accept_events);
    }

    // after all accept events are processed, release the lock if it is held
    if (ngx_accept_mutex_held) {
        ngx_shmtx_unlock(&ngx_accept_mutex);
    }

    /*
     * delta is the wait time measured above. If at least one millisecond
     * has passed, check all the timers: delete every expired timer from the
     * timer rbtree and call the handler of the corresponding event.
     */
    if (delta) {
        ngx_event_expire_timers();
    }

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "posted events %p", ngx_posted_events);

    /*
     * Process the ordinary events (read/write events on the connections);
     * each event has its own handler method.
     */
    if (ngx_posted_events) {
        if (ngx_threaded) {
            ngx_wakeup_worker_thread(cycle);

        } else {
            ngx_event_process_posted(cycle, &ngx_posted_events);
        }
    }
}
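A minimal standalone sketch (not nginx source) of the locking pattern described above may help: each worker tries the shared accept mutex once per event-loop round; the holder sets a post-events flag so handlers run only after unlock, while a loser caps its wait timeout to compete again soon. The names worker_wait_round, TIMER_INFINITE, and ACCEPT_MUTEX_DELAY are invented for illustration, and a plain pthread mutex stands in for nginx's shared-memory ngx_shmtx_t.

#include <pthread.h>
#include <stdio.h>

#define TIMER_INFINITE      (-1)
#define ACCEPT_MUTEX_DELAY  500   /* ms, like the accept_mutex_delay directive */

static pthread_mutex_t accept_mutex = PTHREAD_MUTEX_INITIALIZER;

/* one round of a worker's event loop (illustrative name) */
static void
worker_wait_round(int id, int timer)
{
    int post_events = 0;
    int held = (pthread_mutex_trylock(&accept_mutex) == 0);

    if (held) {
        /* lock holder: the listening socket would be added to epoll here;
         * defer handlers so the lock is not held while they run */
        post_events = 1;

    } else if (timer == TIMER_INFINITE || timer > ACCEPT_MUTEX_DELAY) {
        /* loser: wake up soon to compete for the lock again */
        timer = ACCEPT_MUTEX_DELAY;
    }

    printf("worker %d: held=%d post_events=%d timer=%d\n",
           id, held, post_events, timer);

    /* epoll_wait(..., timer) would run here; with post_events set, the
     * queued accept events would be handled before unlocking */

    if (held) {
        pthread_mutex_unlock(&accept_mutex);
    }

    /* deferred read/write events would be processed here, after unlock */
}

int
main(void)
{
    worker_wait_round(1, TIMER_INFINITE);   /* lock is free: this round holds it */

    pthread_mutex_lock(&accept_mutex);      /* pretend another worker holds it */
    worker_wait_round(2, TIMER_INFINITE);   /* this round loses, caps its timer */
    pthread_mutex_unlock(&accept_mutex);

    return 0;
}

Running it prints one line per round; the second line shows the capped 500 ms timeout a losing worker would pass to epoll_wait.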
I have talked about the accept event before: it watches the listening socket for new connections. Here is the accept event's handler method, ngx_event_accept:
src/event/ngx_event_accept.c

void
ngx_event_accept(ngx_event_t *ev)
{
    socklen_t          socklen;
    ngx_err_t          err;
    ngx_log_t         *log;
    ngx_socket_t       s;
    ngx_event_t       *rev, *wev;
    ngx_listening_t   *ls;
    ngx_connection_t  *c, *lc;
    ngx_event_conf_t  *ecf;
    u_char             sa[NGX_SOCKADDRLEN];

    // some code omitted here

    lc = ev->data;
    ls = lc->listening;
    ev->ready = 0;

    ngx_log_debug2(NGX_LOG_DEBUG_EVENT, ev->log, 0,
                   "accept on %V, ready: %d", &ls->addr_text, ev->available);

    do {
        socklen = NGX_SOCKADDRLEN;

        // accept a new connection
        s = accept(lc->fd, (struct sockaddr *) sa, &socklen);

        // some code omitted here

        /*
         * After a new connection is accepted, ngx_accept_disabled is
         * recomputed. As mentioned before, it is used for load balancing.
         * Here we can see how it is calculated:
         *
         *     one eighth of the total connections - the free connections
         *
         * The total is the maximum connection number set for each process,
         * which can be specified in the configuration file. Once more than
         * 7/8 of the total connections are in use, ngx_accept_disabled
         * becomes greater than zero, meaning this process is overloaded.
         */
        ngx_accept_disabled = ngx_cycle->connection_n / 8
                              - ngx_cycle->free_connection_n;

        c = ngx_get_connection(s, ev->log);

        // the pool is released only when the connection is closed
        c->pool = ngx_create_pool(ls->pool_size, ev->log);
        if (c->pool == NULL) {
            ngx_close_accepted_connection(c);
            return;
        }

        ngx_memcpy(c->sockaddr, sa, socklen);

        log = ngx_palloc(c->pool, sizeof(ngx_log_t));
        if (log == NULL) {
            ngx_close_accepted_connection(c);
            return;
        }
        /* set a blocking mode for aio and non-blocking mode for others */

        if (ngx_inherited_nonblocking) {
            if (ngx_event_flags & NGX_USE_AIO_EVENT) {
                if (ngx_blocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_blocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }

        } else {
            // with the epoll model, the new connection is set to non-blocking here
            if (!(ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT))) {
                if (ngx_nonblocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_nonblocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }
        }

        *log = ls->log;

        // initialize the new connection
        c->recv = ngx_recv;
        c->send = ngx_send;
        c->recv_chain = ngx_recv_chain;
        c->send_chain = ngx_send_chain;

        c->log = log;
        c->pool->log = log;

        c->socklen = socklen;
        c->listening = ls;
        c->local_sockaddr = ls->sockaddr;

        c->unexpected_eof = 1;

#if (NGX_HAVE_UNIX_DOMAIN)
        if (c->sockaddr->sa_family == AF_UNIX) {
            c->tcp_nopush = NGX_TCP_NOPUSH_DISABLED;
            c->tcp_nodelay = NGX_TCP_NODELAY_DISABLED;
#if (NGX_SOLARIS)
            /* Solaris's sendfilev() supports AF_NCA, AF_INET, and AF_INET6 */
            c->sendfile = 0;
#endif
        }
#endif

        rev = c->read;
        wev = c->write;

        wev->ready = 1;

        if (ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT)) {
            /* rtsig, aio, iocp */
            rev->ready = 1;
        }

        if (ev->deferred_accept) {
            rev->ready = 1;
#if (NGX_HAVE_KQUEUE)
            rev->available = 1;
#endif
        }

        rev->log = log;
        wev->log = log;

        /*
         * TODO: MT: - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         *
         * TODO: MP: - allocated in a shared memory
         *           - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         */

        c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1);

        if (ngx_add_conn && (ngx_event_flags & NGX_USE_EPOLL_EVENT) == 0) {
            if (ngx_add_conn(c) == NGX_ERROR) {
                ngx_close_accepted_connection(c);
                return;
            }
        }

        log->data = NULL;
        log->handler = NULL;

        /*
         * The listening socket's handler is very important here: it
         * finishes the final initialization of the new connection and puts
         * the accepted connection into epoll. The function attached to this
         * handler is ngx_http_init_connection, which will be introduced in
         * detail in the http module later.
         */
        ls->handler(c);

        if (ngx_event_flags & NGX_USE_KQUEUE_EVENT) {
            ev->available--;
        }

    } while (ev->available);
}
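To make the load-balancing comment above concrete, here is a small standalone sketch (not nginx code) of the ngx_accept_disabled arithmetic, assuming a per-worker limit of 1024 connections (a hypothetical worker_connections value). The threshold is 1024 / 8 = 128: the counter turns positive exactly when fewer than 128 free connections remain.

#include <stdio.h>

int
main(void)
{
    long connection_n = 1024;   /* assumed per-worker connection limit */
    long free_n;

    for (free_n = 512; free_n >= 0; free_n -= 128) {
        /* the same formula as in ngx_event_accept() above */
        long accept_disabled = connection_n / 8 - free_n;

        printf("free=%4ld  accept_disabled=%4ld  %s\n",
               free_n, accept_disabled,
               accept_disabled > 0 ? "-> skips accept rounds"
                                   : "-> keeps accepting");
    }

    return 0;
}

Note how the penalty grows with the overload: a fully loaded worker (free = 0) would skip 128 chances to take the accept mutex, giving less busy workers time to pick up new connections first.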
So much for the accept event's handler method. Next come the read/write event handlers of each connection, which will take us straight into the http module; but we are in no hurry, as we still need to study nginx's classic epoll module first.
The above introduced nginx source code study notes (21), event module 2: the event-driven core ngx_process_events_and_timers, including the posted event queues. I hope it is helpful to readers interested in the nginx source.