How do you obtain Node.js performance monitoring metrics? This article walks through how to collect them. I hope you find it helpful!
I have recently been learning about Node.js monitoring. Although I don't have the energy to write even a simple monitoring system, I still couldn't help digging into how these metrics are obtained. (After consulting a lot of material, I found very little written about this topic online, so while organizing my notes on server-side Node.js I summarized it in this article to share with you.)
Some of the metrics in this article may have issues, and I welcome corrections. In fact, you could collect this data yourself and package it into a monitoring library for your own small and medium-sized projects. On the front end, React has charting tools such as BizCharts and G2, so you can build your own data dashboard. I even think the data dimensions collected by Easy-Monitor are not as complete as what we cover here.
The performance bottlenecks of the server are usually the following:
CPU usage and CPU load both reflect, to some extent, how busy a machine is.
CPU usage is the share of CPU resources occupied by running programs: it indicates how heavily the machine is running programs at a given point in time. A higher usage means more programs are being executed at that moment, and vice versa; how high it gets also depends on the CPU's raw power. Let's first look at the relevant API and some terminology to help us understand the code for obtaining CPU usage.
os.cpus()
Returns an array of objects containing information about each logical CPU core.
model: A string specifying the model of the CPU core.
speed: A number specifying the speed of the CPU core in MHz.
times: An object containing the properties user, nice, sys, idle, and irq (the number of milliseconds the CPU has spent in each mode).
Note: The nice value is POSIX-only. On Windows operating systems, the nice value of all processors is always 0.
When you see the user and nice fields, some readers may be confused about what they mean, and so was I, so I looked into their meanings carefully. Please read on.
user indicates the proportion of time the CPU is running in user mode.
Application processes execute in either user mode or kernel mode: in user mode the CPU runs the application's own code logic, typically business logic or numerical computation; in kernel mode the CPU executes system calls initiated by the process, usually in response to the process's requests for resources.
A user space program is any process that is not part of the kernel. Shells, compilers, databases, web servers, and desktop-related programs are all user-space processes. If the processor is not idle, it is normal that most of the CPU time should be spent running user-space processes.
nice represents the proportion of time the CPU spends running low-priority user-mode processes. Low priority means a process nice value greater than 0.
sys indicates the proportion of time the CPU is running in kernel mode.
Generally speaking, kernel-mode CPU usage should not be too high unless the application process initiates a large number of system calls. If it is too high, system calls are taking a long time, for example because of frequent I/O operations.
idle represents the proportion of time the CPU is in the idle state. In this state, the CPU has no tasks to execute.
irq represents the proportion of time the CPU spends handling hardware interrupts.
Network card interrupts are a typical example: after the NIC receives a data packet, it notifies the CPU via a hardware interrupt so the packet can be processed. If system network traffic is very heavy, you can observe a significant increase in irq usage.
As a rule of thumb, user-mode usage below 70%, kernel-mode usage below 35%, and overall usage below 70% can be considered a healthy state.
The following example illustrates the use of the os.cpus() method in Node.js:
Example 1:
// Node.js program to demonstrate the os.cpus() method

// Allocating os module
const os = require('os');

// Printing os.cpus() values
console.log(os.cpus());
Output:
[
  {
    model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',
    speed: 2712,
    times: { user: 900000, nice: 0, sys: 940265, idle: 11928546, irq: 147046 }
  },
  {
    model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',
    speed: 2712,
    times: { user: 860875, nice: 0, sys: 507093, idle: 12400500, irq: 27062 }
  },
  {
    model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',
    speed: 2712,
    times: { user: 1273421, nice: 0, sys: 618765, idle: 11876281, irq: 13125 }
  },
  {
    model: 'Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz',
    speed: 2712,
    times: { user: 943921, nice: 0, sys: 460109, idle: 12364453, irq: 12437 }
  }
]
The following is code for obtaining the CPU utilization:
const os = require('os');
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

class OSUtils {
  constructor() {
    this.cpuUsageMSDefault = 1000; // Default sampling period for CPU usage
  }

  /**
   * Get the CPU usage over a period of time
   * @param { Number } options.cpuUsageMS [sampling period, defaults to 1000ms, i.e. 1 second]
   * @param { Boolean } options.percentage [true (return the result as a percentage) | false]
   * @returns { Promise }
   */
  async getCPUUsage(options = {}) {
    const that = this;
    let { cpuUsageMS, percentage } = options;
    cpuUsageMS = cpuUsageMS || that.cpuUsageMSDefault;
    const t1 = that._getCPUInfo(); // CPU info at time t1
    await sleep(cpuUsageMS);
    const t2 = that._getCPUInfo(); // CPU info at time t2
    const idle = t2.idle - t1.idle;
    const total = t2.total - t1.total;
    let usage = 1 - idle / total;
    if (percentage) usage = (usage * 100.0).toFixed(2) + "%";
    return usage;
  }

  /**
   * Get an instantaneous CPU snapshot
   * @returns { Object } CPU info
   * user <number> Milliseconds the CPU spent in user mode.
   * nice <number> Milliseconds the CPU spent in nice mode.
   * sys  <number> Milliseconds the CPU spent in system mode.
   * idle <number> Milliseconds the CPU spent in idle mode.
   * irq  <number> Milliseconds the CPU spent in interrupt-request mode.
   */
  _getCPUInfo() {
    const cpus = os.cpus();
    let user = 0, nice = 0, sys = 0, idle = 0, irq = 0, total = 0;
    for (const cpu of cpus) {
      const times = cpu.times;
      user += times.user;
      nice += times.nice;
      sys += times.sys;
      idle += times.idle;
      irq += times.irq;
    }
    total = user + nice + sys + idle + irq;
    return { user, sys, idle, total };
  }
}

const cpuUsage = new OSUtils().getCPUUsage({ percentage: true });
cpuUsage.then(data => console.log('cpuUsage: ', data)); // 6.15% on my machine
CPU load (loadavg) is easy to understand: it is the average number of processes occupying CPU time or waiting for CPU time over a given period. Processes waiting for CPU time here means processes waiting to be woken up, excluding processes in the wait state.
Before that, we need to learn another Node API:
os.loadavg()
Returns an array containing the 1-, 5-, and 15-minute load averages.
Load average is a measure of system activity calculated by the operating system and expressed as a decimal.
Load average is a concept unique to Unix. On Windows, the return value is always [0, 0, 0]
It describes how busy the operating system currently is, and can be loosely understood as the average number of tasks using or waiting to use the CPU per unit of time. An excessively high CPU load indicates too many processes; in Node this may show up as repeatedly spawning new processes with the child_process module.
const os = require('os');

// Number of logical CPU cores
const length = os.cpus().length;

// Per-core load average: an array of the 1-, 5- and 15-minute load averages, each divided by the core count
os.loadavg().map(load => load / length);
Let's first explain an API, or you won't be able to understand our code for obtaining memory metrics.
process.memoryUsage() returns an object with four properties (rss, heapTotal, heapUsed, external); their meanings and differences are as follows:
Use the following code to print the memory usage of a child process. You can see that rss is roughly equal to the RES column of the top command. Also, the main process uses only 33 MB of memory, less than the child process, which shows that their memory usage is measured independently.
var showMem = function () {
  var mem = process.memoryUsage();
  var format = function (bytes) {
    return (bytes / 1024 / 1024).toFixed(2) + ' MB';
  };
  console.log('Process: heapTotal ' + format(mem.heapTotal) +
    ' heapUsed ' + format(mem.heapUsed) +
    ' rss ' + format(mem.rss) +
    ' external:' + format(mem.external));
  console.log('-----------------------------------------------------------');
};
For Node, once a memory leak occurs, it is not easy to track down. If monitoring shows memory that only rises and never falls, there is almost certainly a memory leak. Healthy memory usage should rise and fall: rising under heavy traffic and dropping back when traffic subsides.
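As a minimal sketch of leak detection based on this "only rises, never falls" signal (the `MemoryWatcher` class and the window size are illustrative assumptions, not a library API):

```javascript
// Keep a rolling window of heapUsed samples; if the samples grow
// monotonically across the whole window, flag a possible leak.
class MemoryWatcher {
  constructor(windowSize = 10) {
    this.windowSize = windowSize;
    this.samples = [];
  }
  record(heapUsed = process.memoryUsage().heapUsed) {
    this.samples.push(heapUsed);
    if (this.samples.length > this.windowSize) this.samples.shift();
  }
  possibleLeak() {
    if (this.samples.length < this.windowSize) return false;
    return this.samples.every((v, i) => i === 0 || v > this.samples[i - 1]);
  }
}
```

In practice you would call `record()` from a timer, e.g. `setInterval(() => { watcher.record(); if (watcher.possibleLeak()) console.warn('heap keeps growing'); }, 60000)`.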
const os = require('os');

module.exports = {
  memory: () => {
    // Current Node process memory usage
    const { rss, heapUsed, heapTotal } = process.memoryUsage();
    // Free system memory
    const systemFree = os.freemem();
    // Total system memory
    const systemTotal = os.totalmem();

    return {
      system: 1 - systemFree / systemTotal, // System memory usage ratio
      heap: heapUsed / heapTotal,           // Heap usage ratio of the current Node process
      node: rss / systemTotal,              // Share of system memory used by the current Node process
    };
  },
};
Disk monitoring mainly means monitoring disk usage. Because of frequent log writes, disk space gets used up gradually, and once the disk runs out, all sorts of system problems follow. Set an upper limit for disk usage; once it exceeds the warning threshold, the server's maintainers should archive logs or clean up the disk.
The following code is adapted from Easy-Monitor 3.0:
const { execSync } = require('child_process');

const result = execSync('df -P', { encoding: 'utf8' });
const lines = result.split('\n');
const metric = {};
lines.forEach(line => {
  if (line.startsWith('/')) {
    const match = line.match(/(\d+)%\s+(\/.*$)/);
    if (match) {
      const rate = parseInt(match[1] || 0);
      const mounted = match[2];
      if (!mounted.startsWith('/Volumes/') && !mounted.startsWith('/private/')) {
        metric[mounted] = rate;
      }
    }
  }
});
console.log(metric);
I/O load mainly refers to disk I/O and reflects read and write activity on disk. For applications written in Node, which mostly serve network traffic, it is unlikely that the I/O load itself will be too high; much of the read-related I/O pressure comes from the database.
To obtain I/O metrics, we need to know a Linux command called iostat. If it is not installed, you need to install it. Let's see why this command reflects I/O metrics:
iostat -dx
Attribute descriptions:
rrqm/s: number of merged read requests per second, i.e. rmerge/s (read requests to the device merged per second; the filesystem merges requests that read the same block)
wrqm/s: number of merged write requests per second, i.e. wmerge/s (write requests to the device merged per second)
r/s: number of read I/O operations completed per second, i.e. rio/s
w/s: number of write I/O operations completed per second, i.e. wio/s
rsec/s: sectors read per second, i.e. rsect/s
wsec/s: sectors written per second, i.e. wsect/s
rkB/s: kilobytes read per second; half of rsect/s, since each sector is 512 bytes
wkB/s: kilobytes written per second; half of wsect/s
avgrq-sz: average size (in sectors) of each device I/O operation
avgqu-sz: average I/O queue length
await: average wait time (in milliseconds) for each device I/O operation
svctm: average service time (in milliseconds) for each device I/O operation
%util: percentage of each second spent on I/O operations, i.e. the share of CPU time consumed by I/O
We only need to monitor %util.
If %util is close to 100%, too many I/O requests are being generated and the I/O system is saturated; there may be a bottleneck on that disk.
If await is much larger than svctm, the I/O queue is too long and the application's response time is slowing down. If response times exceed what users can tolerate, consider replacing the disk with a faster one, tuning the kernel's elevator (I/O scheduler) algorithm, optimizing the application, or upgrading the CPU.
Monitoring the page response time of a Node.js service; this approach is taken from a blog post by Liao Xuefeng.
Recently I wanted to monitor Node.js performance. Recording and analyzing logs is too cumbersome; the simplest way is to record the processing time of each HTTP request and return it directly in an HTTP response header.
Recording the time of an HTTP request is simple: note a timestamp when the request arrives, note another when the response is sent, and the difference between the two is the processing time.
However, res.send() calls are scattered across many js files, and you can't go and modify every URL handler.
The right approach is to implement this with middleware. But Node.js has no built-in way to intercept res.send() — how do we get around that?
Actually, with a slight change of perspective, abandoning the traditional OOP mindset and treating res.send() as a function object, we can first save the original handler res.send and then replace res.send with our own:
app.use(function (req, res, next) {
  // Record the start time:
  var exec_start_at = Date.now();
  // Save the original handler:
  var _send = res.send;
  // Bind our own handler:
  res.send = function () {
    // Send the header:
    res.set('X-Execution-Time', String(Date.now() - exec_start_at));
    // Call the original handler:
    return _send.apply(res, arguments);
  };
  next();
});
With just a few lines of code, the timing is done.
There is no need to handle res.render(), because res.render() calls res.send() internally.
When calling apply(), it is important to pass in the res object; otherwise this in the original handler would point to undefined and cause an error.
In an actual test, the home page responded in 9 milliseconds.
Glossary:
QPS: Queries Per Second — the number of queries a server can respond to per second; a measure of how much traffic a particular query server handles within a specified period of time.
On the Internet, the performance of machines serving as DNS servers is often measured in queries per second.
TPS: short for Transactions Per Second, i.e. transactions per second. It is a unit of measurement for software test results. A transaction is the process of a client sending a request to a server and the server responding. The client starts timing when it sends the request and stops when it receives the server's response, and from this the elapsed time and the number of completed transactions are calculated.
QPS vs. TPS: QPS is basically similar to TPS, with one difference: a single visit to a page counts as one TPS, but a single page visit may trigger multiple requests to the server, each of which counts toward QPS. For example, if visiting a page issues 2 requests to the server, one visit produces one "T" and two "Q"s.
Response time: the total time taken to execute a request, from start to receiving the response data — that is, the time from the client issuing the request to receiving the server's response.
Response time (RT) is one of a system's most important metrics; its value directly reflects how fast the system is.
Concurrency is the number of requests the system can handle simultaneously, which also reflects the system's load capacity.
A system's throughput (its capacity under load) is closely tied to each request's CPU consumption, external interfaces, I/O, and so on. The higher a single request's CPU consumption, and the slower the external interfaces and I/O, the lower the system's throughput, and vice versa.
The important parameters of system throughput are: QPS (TPS), concurrency, and response time.
QPS (TPS): (Queries Per Second) the number of requests/transactions per second
Concurrency: the number of requests/transactions the system handles at the same time
Response time: generally the average response time
Once you understand what these three factors mean, you can work out the relationship between them: QPS (TPS) = concurrency / average response time.
Let's tie the concepts above together with an example. By the 80/20 rule, if 80% of daily visits are concentrated in 20% of the day, that 20% is called the peak time.
1. With 3,000,000 PV per day on a single machine, how many QPS does this machine need?
( 3,000,000 × 0.8 ) / ( 86,400 × 0.2 ) = 139 (QPS)
2. If one machine supports 58 QPS, how many machines are needed?
139 / 58 ≈ 2.4, rounded up to 3 machines
So from now on, when you design the front-end architecture of a typical small or medium project and deploy your own Node service, you will know how many machines your cluster needs when presenting your plan — given the PV, you can estimate a rough figure.
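The two calculations above can be captured in a couple of helper functions (the names `peakQPS` and `machinesNeeded` are my own, and the 80/20 ratios are just default parameters):

```javascript
// Estimate peak QPS from daily PV using the 80/20 rule:
// 80% of traffic arrives in 20% of the 86,400 seconds in a day.
function peakQPS(dailyPV, peakTrafficRatio = 0.8, peakTimeRatio = 0.2) {
  return Math.round((dailyPV * peakTrafficRatio) / (86400 * peakTimeRatio));
}

// Number of machines needed, rounding up to whole machines.
function machinesNeeded(requiredQPS, perMachineQPS) {
  return Math.ceil(requiredQPS / perMachineQPS);
}

// peakQPS(3000000)        -> 139
// machinesNeeded(139, 58) -> 3
```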
We also need to understand load testing (we rely on load tests to obtain QPS). Take the ab command as an example:
Command format:
ab [options] [http://]hostname[:port]/path
Common parameters are as follows:
-n requests      total number of requests
-c concurrency   number of concurrent requests
-t timelimit     maximum number of seconds for the test; can serve as the request timeout
-p postfile      file containing the data to POST
-T content-type  Content-Type header to use for the POST data
For more parameters, see the official documentation:
http://httpd.apache.org/docs/2.2/programs/ab.html
For example, to test a GET endpoint:
ab -n 10000 -c 100 -t 10 "http://127.0.0.1:8080/api/v1/posts?size=10"
This produces the following data:
From it we extract several key metrics:
1. Throughput (Requests per second): a quantitative description of the server's concurrent processing capacity, in reqs/s — the number of requests handled per unit of time under a given number of concurrent users. The maximum number of requests that can be handled per unit of time under a given concurrency is called the maximum throughput.
Remember: throughput is defined relative to the number of concurrent users. This statement carries two implications:
Calculation formula:
total requests / time taken to complete those requests
It must be noted that this value represents the overall performance of the current machine; the larger, the better.
2. QPS (Queries Per Second)
QPS measures how much traffic a specific query server handles within a specified time. On the Internet, the performance of machines acting as DNS servers is often measured in queries per second — that is, the number of responses per second, which is also the maximum throughput capacity.
Calculation formula:
QPS (TPS) = concurrency / average response time (Time per request)
The ab output above includes the Time per request value, and we also have the concurrency, so we can calculate the QPS.
That QPS comes from the load test; the real QPS can be obtained through log monitoring.
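As a minimal sketch of measuring real QPS in-process rather than from logs, here is an Express-style middleware that counts requests in one-second buckets. `createQPSCounter` is an illustrative helper, not a library API, and the one-second bucketing is an assumption you can tune.

```javascript
// Count live requests per second instead of relying on a load test.
function createQPSCounter() {
  let count = 0;   // requests seen in the current second
  let lastQPS = 0; // requests seen in the last full second
  // Roll the bucket every second; unref() so the timer
  // does not keep the process alive on its own.
  setInterval(() => { lastQPS = count; count = 0; }, 1000).unref();
  return {
    middleware(req, res, next) { count++; next(); },
    qps: () => lastQPS,
  };
}

// Usage with Express (assumed):
// const counter = createQPSCounter();
// app.use(counter.middleware);
// setInterval(() => console.log('qps:', counter.qps()), 5000);
```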
Usually, as the system runs, our backend services generate all kinds of logs: application access logs, error logs, runtime logs, and network logs. We need a platform to display them.
The backend typically uses something like ELK for this. As front-end developers we are UI veterans and can build custom dashboards ourselves; the key point is that the logs must be printed in a consistent format, since well-structured data is far easier to analyze and display.
Business-logic monitoring is also mainly reflected in logs. By watching the error log files for changes, new exceptions can be reported by type and count. Some exceptions are tied to a specific subsystem, so the appearance of a particular exception can also reflect that subsystem's state.
Log monitoring can also reveal the real QPS of the business. Observing the QPS over time shows how traffic is distributed across the day.
In addition, PV and UV monitoring can be implemented from access logs, from which you can analyze user habits and predict traffic peaks.
Response time can also be obtained from access logs, though real response times require logging in each controller.
Process monitoring generally checks the number of application processes running in the operating system. For example, for a Node application with a multi-process architecture, you need to check the number of worker processes; if it falls below the expected value, an alarm should be raised.
Checking the number of processes is easy under Linux.
Suppose we use the child_process module provided by Node to make use of multi-core CPUs, with child_process.fork() to spawn worker processes.
worker.js is as follows:
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
}).listen(Math.round((1 + Math.random()) * 1000), '127.0.0.1');
Start it with node worker.js; it will listen on a random port between 1000 and 2000.
master.js is as follows:
var fork = require('child_process').fork;
var cpus = require('os').cpus();
for (var i = 0; i < cpus.length; i++) {
  fork('./worker.js');
}
The command to check the number of processes is as follows:
ps aux | grep worker.js
$ ps aux | grep worker.js
lizhen   1475   0.0  0.0  2432768    600 s003  S+   3:27AM   0:00.00 grep worker.js
lizhen   1440   0.0  0.2  3022452  12680 s003  S    3:25AM   0:00.14 /usr/local/bin/node ./worker.js
lizhen   1439   0.0  0.2  3023476  12716 s003  S    3:25AM   0:00.14 /usr/local/bin/node ./worker.js
lizhen   1438   0.0  0.2  3022452  12704 s003  S    3:25AM   0:00.14 /usr/local/bin/node ./worker.js
lizhen   1437   0.0  0.2  3031668  12696 s003  S    3:25AM   0:00.15 /usr/local/bin/node ./worker.js