
what is go pprof


pprof is Go's performance profiling tool. While a program is running, it can record runtime information such as CPU usage, memory usage, and the state of goroutines. This recorded data is invaluable when tuning performance or tracking down bugs. There are several ways to use pprof; Go ships a ready-made net/http/pprof package, and with a few lines of code you can enable pprof, record runtime information, and expose it over a web service.


Operating environment for this tutorial: Windows 7, Go 1.18, Dell G3 computer.

go pprof introduction

"Profile" is usually translated as performance analysis; in the dictionary it means an outline or summary. For a running program, its profile is the collection of general runtime information: CPU usage, memory usage, thread state, thread blocking, and so on. With this information in hand, it becomes much easier to locate problems in the program and the causes of failures.

pprof is Go's profiling tool. While the program runs it records runtime information such as CPU usage, memory usage, and goroutine state, and this data is exactly what you need when tuning performance or locating bugs.

Go has good built-in support for profiling. The standard library provides two packages, runtime/pprof and net/http/pprof, along with several useful visualization tools to help developers profile their programs.

For an online service, i.e. an HTTP server, performance data can be obtained by hitting the HTTP endpoints that pprof exposes. Under the hood these endpoints simply call into runtime/pprof and wrap its functions in handlers reachable over the network. This article focuses on net/http/pprof.
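
For completeness, a program that is not an HTTP server can use the underlying runtime/pprof package directly and write profiles to files. A minimal, hedged sketch (doWork and the file names are just placeholders):

package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	// CPU profile: samples are collected until StopCPUProfile is called.
	f, err := os.Create("cpu.pprof")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := pprof.StartCPUProfile(f); err != nil {
		panic(err)
	}
	defer pprof.StopCPUProfile()

	doWork() // the code being profiled

	// Heap profile: a one-off snapshot at this point in the program.
	hf, err := os.Create("heap.pprof")
	if err != nil {
		panic(err)
	}
	defer hf.Close()
	if err := pprof.WriteHeapProfile(hf); err != nil {
		panic(err)
	}
}

func doWork() {
	s := make([]int, 0)
	for i := 0; i < 1_000_000; i++ {
		s = append(s, i)
	}
	_ = s
}

The resulting files can be opened later with go tool pprof cpu.pprof.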

Basic use

There are several ways to use pprof. Go already packages one, net/http/pprof, that is easy to use: with a few lines of code you can start pprof, record runtime information, and serve it over HTTP so the data can be fetched from a browser or from the command line.

To see how to enable this monitoring in a web service, let's look at a simple example.

package main

import (
	"fmt"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Start pprof in its own goroutine and listen for requests.
	go func() {
		addr := "0.0.0.0:8080"
		if err := http.ListenAndServe(addr, nil); err != nil {
			fmt.Printf("start pprof failed on %s: %v\n", addr, err)
		}
	}()

	doSomething() // the application's real work
}

func doSomething() {
	select {} // placeholder: block forever so the process keeps running
}

Import the "net/http/pprof" package into the program and open the listening port. At this time, you can obtain the profile of the program. In actual production, we generally encapsulate this function into a goroutine. So how do you check it after opening it? There are three methods:

Browser method

Open a browser and go to http://ip:port/debug/pprof/.


pprof exposes several kinds of performance data. Their meanings are:

  • allocs: sampled information about all past memory allocations
  • block: sampled information about blocking operations
  • cmdline: the command line the program was started with, including its arguments
  • goroutine: stack traces of all current goroutines
  • heap: sampled information about heap memory usage
  • mutex: sampled information about lock contention
  • profile: sampled information about CPU usage
  • threadcreate: sampled information about OS thread creation
  • trace: a trace of the program's execution

allocs covers allocations for all objects ever allocated, while heap covers the live objects; the difference is described in more detail later.

1. When CPU profiling is enabled, the Go runtime interrupts the program roughly every 10 ms and records the call stacks of the currently running goroutines. Once the profile has been saved to disk, it can be analyzed to find hot spots in the code.
2. Memory profiling records the call stack at the time of heap allocations. It is sampling-based: by default the runtime records roughly one sample per 512 KB allocated (controlled by runtime.MemProfileRate), and this rate can be changed. Stack allocations are not recorded, because they are freed automatically. Since memory profiling samples allocations rather than tracking actual memory use, it is hard to use it to determine a program's exact memory consumption.
3. Block profiling is a fairly unique kind of analysis. It is somewhat similar to CPU profiling, but what it records is the time goroutines spend waiting for resources. It is very helpful for analyzing concurrency bottlenecks, since it can show when large numbers of goroutines are blocked. It is a specialized tool, though, and should not be the first thing you reach for before CPU and memory bottlenecks have been ruled out. (Block and mutex profiling must also be enabled explicitly; see the sketch after this list.)
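
One caveat: the block and mutex profiles only contain data if sampling is switched on inside the program; by default they are empty. A minimal sketch of enabling them in the earlier example (a rate of 1 records everything, which is fine for debugging but is a judgment call for production):

package main

import (
	"net/http"
	_ "net/http/pprof"
	"runtime"
)

func main() {
	// Without these calls, /debug/pprof/block and /debug/pprof/mutex stay empty.
	runtime.SetBlockProfileRate(1)     // rate is in nanoseconds spent blocked; 0 disables
	runtime.SetMutexProfileFraction(1) // report every contention event; 0 disables
	http.ListenAndServe("0.0.0.0:8080", nil)
}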

Clicking through to any of these links, however, you will find the raw output hard to read and almost impossible to analyze by eye.


Click heap and scroll to the bottom, and you will see some interesting numbers. They can occasionally help with troubleshooting, but they are not what you would normally work from.

heap profile: 3190: 77516056 [54762: 612664248] @ heap/1048576
1: 29081600 [1: 29081600] @ 0x89368e 0x894cd9 0x8a5a9d 0x8a9b7c 0x8af578 0x8b4441 0x8b4c6d 0x8b8504 0x8b2bc3 0x45b1c1
#    0x89368d    github.com/syndtr/goleveldb/leveldb/memdb.(*DB).Put+0x59d
#    0x894cd8    xxxxx/storage/internal/memtable.(*MemTable).Set+0x88
#    0x8a5a9c    xxxxx/storage.(*snapshotter).AppendCommitLog+0x1cc
#    0x8a9b7b    xxxxx/storage.(*store).Update+0x26b
#    0x8af577    xxxxx/config.(*config).Update+0xa7
#    0x8b4440    xxxxx/naming.(*naming).update+0x120
#    0x8b4c6c    xxxxx/naming.(*naming).instanceTimeout+0x27c
#    0x8b8503    xxxxx/naming.(*naming).(xxxxx/naming.instanceTimeout)-fm+0x63

......

# runtime.MemStats
# Alloc = 2463648064
# TotalAlloc = 31707239480
# Sys = 4831318840
# Lookups = 2690464
# Mallocs = 274619648
# Frees = 262711312
# HeapAlloc = 2463648064
# HeapSys = 3877830656
# HeapIdle = 854990848
# HeapInuse = 3022839808
# HeapReleased = 0
# HeapObjects = 11908336
# Stack = 655949824 / 655949824
# MSpan = 63329432 / 72040448
# MCache = 38400 / 49152
# BuckHashSys = 1706593
# GCSys = 170819584
# OtherSys = 52922583
# NextGC = 3570699312
# PauseNs = [1052815 217503 208124 233034 ......]
# NumGC = 31
# DebugGC = false
  • Sys: memory obtained by the process from the operating system (virtual address space).
  • HeapAlloc: heap memory currently allocated by the process, typically heap objects created by the program, including those not yet garbage-collected.
  • HeapSys: heap memory obtained from the operating system; because Go's allocator (based on TCMalloc) caches part of the heap, this is virtual address space.
  • PauseNs: the duration of each GC pause in nanoseconds; the most recent 256 pauses are kept.
  • NumGC: the number of GCs that have run. (A sketch of reading these fields directly from code follows.)
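
These are fields of runtime.MemStats. If you want the same numbers from inside the program rather than through pprof, a minimal sketch:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("Sys=%d HeapAlloc=%d HeapSys=%d NumGC=%d\n",
		m.Sys, m.HeapAlloc, m.HeapSys, m.NumGC)
	// PauseNs is a circular buffer; the most recent pause is at (NumGC+255)%256.
	fmt.Println("last GC pause (ns):", m.PauseNs[(m.NumGC+255)%256])
}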

Command-line method

Besides the browser, Go also provides a command-line way to get the same information, which is usually more convenient.

The command go tool pprof url fetches the specified profile: it issues an HTTP request, downloads the data locally, and then drops into an interactive mode where, much like in gdb, you can run commands to inspect the runtime information. Some examples:

# Download the CPU profile. By default it collects CPU usage for the next 30
# seconds, so the command waits 30s.
go tool pprof http://localhost:8080/debug/pprof/profile             # 30-second CPU profile
go tool pprof http://localhost:8080/debug/pprof/profile?seconds=120 # wait 120s

# Download the heap profile
go tool pprof http://localhost:8080/debug/pprof/heap # heap profile

# Download the goroutine profile
go tool pprof http://localhost:8080/debug/pprof/goroutine # goroutine profile

# Download the block profile
go tool pprof http://localhost:8080/debug/pprof/block # goroutine blocking profile

# Download the mutex profile
go tool pprof http://localhost:8080/debug/pprof/mutex # mutex contention profile

Next, let's use an example to walk through the four most commonly used commands: web, top, list, and traces.

The example uses memory analysis; CPU, goroutine, and the other profiles work the same way, so you can extrapolate from it.

First, fetch the desired profile (here the heap profile) with go tool pprof url; the tool then drops into its interactive prompt.


Step one: type the web command. A browser window pops up showing the call graph between functions and the memory relationships between them.


For how to read this graph in detail, refer to the pprof documentation (Chinese or English); it is not repeated here. The key rule of thumb is that the bigger and redder a box, the more likely it is to be the problem: in a memory graph it probably occupies more memory, in a CPU graph it consumes more CPU time, and so on.
After that, the three-step sequence of top, list, and traces reveals most of what you want to know.

top lists the top 10 functions ranked by the profiled metric: by memory used for a heap profile, by execution time for a CPU profile.
top prints five statistics for each function:

  • flat: the memory used by this function itself.
  • flat%: flat as a percentage of the total memory in use.
  • sum%: the running total of the flat% values up to and including this row.
  • cum: the cumulative amount; if main calls a function f, the memory used by f is also counted towards main's cum.
  • cum%: cum as a percentage of the total. (A small, hypothetical illustration of flat versus cum follows this list.)
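
To make flat and cum concrete, a small hypothetical example (these functions are illustrative, not from the profiled project): in a heap profile of this program, the 10 MB is charged to newBuffer's flat and cum, while handle sees it only in cum.

package main

// newBuffer allocates directly, so the bytes count towards its flat (and cum).
func newBuffer() []byte {
	return make([]byte, 10<<20)
}

// handle allocates nothing itself, so the 10 MB appears only in its cum,
// because it was allocated by a function it called.
func handle() []byte {
	return newBuffer()
}

func main() {
	_ = handle()
}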


This shows exactly which functions are using how much memory.

top also takes an argument: top n lists the first n functions.

list shows a function's source code together with per-line metrics. If the function name is ambiguous it is matched fuzzily; for example, list main lists main.main and runtime.main. Let's try list sendToASR.


You can see that appending elements to a slice accounts for a lot of the memory; the two data columns are flat and cum respectively.

traces prints all call stacks along with their metrics. Usage is traces followed by a function name (fuzzy-matched).


There is one more important command-line option: -base. Suppose two profiles, profile1 and profile2, have already been collected; running go tool pprof -base profile1 profile2 uses profile1 as the baseline and shows what changed in profile2 relative to it. Comparing two points in time makes it very clear what happened between them, which is extremely useful for pinpointing problems (very important!).

Visual interface

The visual interface is opened with go tool pprof -http=:1234 http://localhost:8080/debug/pprof/heap, where 1234 is the port we choose.


Top


This view has the same role and meaning as the top subcommand described earlier, so it is not repeated here.

Graph

The function call graph, also covered above.

Peek


Compared with the Top view, this view adds context: each function is shown together with its callers and callees.

Source


This view adds source-level tracking, so you can see exactly where within the code the overhead is incurred.

Flame Graph


The flame graph of the resource consumption. How to read a flame graph is not covered in detail here; it is not the focus of this article.

The second drop-down menu selects the sample type:


alloc_objects and alloc_space cover everything the application has allocated, whether or not it has since been released; inuse_objects and inuse_space cover only allocations that have not yet been released.

  • inuse_space: amount of memory allocated and not yet released
  • inuse_objects: number of objects allocated and not yet released
  • alloc_space: total amount of memory allocated (whether released or not)
  • alloc_objects: total number of objects allocated (whether released or not)

The first drop-down menu can be combined with the second, for example to view a flame graph of short-lived allocations or a call graph of the memory held by long-lived objects.
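
A small hypothetical sketch of the difference (the names are illustrative): churn inflates alloc_space/alloc_objects but barely moves inuse_space, while retain shows up in both.

package main

var retained [][]byte

// churn allocates buffers that immediately become garbage: lots of alloc_space,
// almost no inuse_space once the GC has run.
func churn() {
	for i := 0; i < 1000; i++ {
		_ = make([]byte, 64<<10)
	}
}

// retain keeps every buffer reachable, so these allocations stay in inuse_space.
func retain() {
	retained = append(retained, make([]byte, 1<<20))
}

func main() {
	churn()
	retain()
}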

tips

  • A program holding a lot of memory while it runs shows up as a high inuse_space.
  • Very frequent GC activity usually means alloc_space is very high while the program does not actually hold much memory (inuse_space stays low). It can also happen that the GC cannot keep up, in which case inuse_space rises as well. Either way a lot of CPU is spent on garbage collection.
  • A memory leak usually shows both a high alloc_space and a high inuse_space.

Recommended workflow

Having walked through all of the go pprof operations above, here is the concrete workflow for using go tool pprof:

  • A memory or CPU problem is detected through the monitoring platform.
  • Use the browser view to get a rough idea of which problems are likely.
  • Use the command line to capture profiles at several points in time.
  • Use the web command to inspect the function call graph.
  • Use top, traces, and list to pin down the problem.
  • For problems that keep growing over time, such as goroutine leaks or memory leaks, comparing two points in time with go tool pprof -base makes them much easier to locate.

Case studies

Case 1: goroutine leak

After starting the program, open the pprof index in the browser.


Memory keeps rising, and the number of goroutines keeps rising with it, so the initial judgment is that the memory growth is caused by a goroutine leak.

Next, capture the goroutine profile from the command line: go tool pprof localhost:8080/debug/pprof/goroutine.

Analysis workflow

Step 1: use the web command to view the call graph and get a rough picture of the current goroutine leak.


The most eye-catching node is runtime.gopark: it shows up in every goroutine leak, and it accounts for the bulk of the goroutines here. Let's look at what this function does:

func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceEv byte, traceskip int) {
	mp := acquirem() // pin to the current M and record the wait parameters
	gp := mp.curg
	status := readgstatus(gp)
	if status != _Grunning && status != _Gscanrunning {
		throw("gopark: bad g status")
	}
	mp.waitlock = lock
	mp.waitunlockf = unlockf
	gp.waitreason = reason
	mp.waittraceev = traceEv
	mp.waittraceskip = traceskip
	releasem(mp)

	mcall(park_m) // switch to g0 and park the current goroutine
}

This function does the following:

1. It calls acquirem to get the M bound to the current goroutine and sets the various wait parameters, then calls releasem to release the binding between the current goroutine and its M.

2. It calls park_m, which switches the current goroutine's status from _Grunning to _Gwaiting, i.e. the waiting state, and removes the association between the M and its current goroutine m→curg (gp for short).

3. park_m is invoked via mcall, which is only used when a goroutine switch is needed: it switches the current thread's stack from g's stack to g0's stack and calls fn(g), saving g's current PC/SP in g->sched so that a later call to goready can restore the execution context.

In short, the key effect of gopark is to put the current goroutine into the waiting state: the goroutine is shelved for the time being, paused by the runtime scheduler.
So a goroutine leak will always pass through this function; it is not the cause of the leak but its symptom.

In addition, two functions each show 67 goroutines, which is suspicious; we should pay close attention to them in the verification that follows.

Step 2: use the top command to get more concrete per-function information.


Consistent with the conclusion above, attention should go to the three functions that each started 67 goroutines.

Step 3: traces plus a function name shows the call stacks. This is useful when the call relationships are too complex to read clearly from the call graph, and it gives a clearer view of each function's call stack.


Step 4: use list plus a function name to examine the problem in the code itself.


The list output shows clearly where the problem code is blocked. The rest of this section analyzes the issue:

Background: goroutine leaks

What is a goroutine leak? If a goroutine is started but never exits the way it should, and only terminates when the whole program ends, that is a goroutine leak. While it persists, the goroutine's stack (starting at roughly 2 KB) stays allocated and cannot be freed, and heap memory referenced by the functions in that goroutine cannot be reclaimed by the garbage collector. Memory usage therefore keeps climbing while the program runs, available memory shrinks, and the system eventually falls over.
When do goroutine leaks happen? Usually either because a channel operation blocks and the whole goroutine waits forever, or because the goroutine contains an infinite loop.
Breaking that down:

  • Receiving from a channel that nothing writes to.
  • Sending to an unbuffered channel that nothing reads from.
  • Sending to a full buffered channel that nothing reads from.
  • A select in which every case blocks.
  • A goroutine stuck in an infinite loop, so its resources are never released.

select is itself implemented on top of channels: if the operations in every case block, the select's internal channels block and the goroutine cannot make progress. So channels must be handled with particular care. The first two patterns above are sketched below.
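
A minimal sketch (the names are illustrative, not from the project): the worker below blocks on its send once the caller has given up waiting, and never exits.

package main

import "time"

// leakySearch starts a worker that sends its result on an unbuffered channel.
// If the caller times out and stops receiving, the send blocks forever and the
// worker goroutine leaks.
func leakySearch() string {
	ch := make(chan string) // unbuffered
	go func() {
		time.Sleep(2 * time.Second) // slow backend
		ch <- "result"              // nobody is receiving any more: blocks forever
	}()

	select {
	case r := <-ch:
		return r
	case <-time.After(100 * time.Millisecond):
		return "timeout"
	}
}

func main() {
	for i := 0; i < 100; i++ {
		_ = leakySearch()
	}
	// /debug/pprof/goroutine would now show ~100 goroutines parked in
	// "chan send" inside leakySearch.func1.
	time.Sleep(time.Second)
}

Using a buffered channel (make(chan string, 1)) would let the worker deliver its result and exit even when nobody reads it, avoiding the leak.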

Putting the findings from the two views together, the goroutines are deadlocked because the select blocks on every case. Digging into the code, the speech model used by the project has weak concurrency: when audio packets start arriving quickly it cannot keep up, none of the select cases become ready, and goroutines leak. The fix was to add an asr <- nil send outside the for loop so that func2's select always has a ready case, and to raise the model's concurrency so that func1 does not block.
Suggestions for preventing goroutine leaks:

  • When you create a goroutine, decide up front how it will terminate (for example by plumbing a context through, as sketched below).
  • When using channels, think about how goroutines behave if the channel blocks, and whether large numbers of goroutines could pile up.
  • Never leave an infinite loop in a goroutine without an exit path.
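
One common way to guarantee an exit path is to pass a context into the goroutine; a minimal sketch (worker and jobs are illustrative names):

package main

import (
	"context"
	"time"
)

// worker exits when its jobs channel is closed or when the caller cancels the
// context, so it cannot outlive its caller.
func worker(ctx context.Context, jobs <-chan int) {
	for {
		select {
		case <-ctx.Done():
			return // caller gave up: clean exit instead of a leak
		case j, ok := <-jobs:
			if !ok {
				return // channel closed: normal exit
			}
			_ = j // ... handle the job ...
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	jobs := make(chan int)
	go worker(ctx, jobs)

	jobs <- 1
	cancel() // release the worker
	time.Sleep(100 * time.Millisecond)
}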

Case 2: memory leak

Grafana shows that memory is leaking.


This time, instead of the command line, let's use the graphical interface to locate the problem.
Run go tool pprof -http=:1234 localhost:8080/debug/pprof/heap.


The memory usage is very likely caused by bytes.makeSlice(); the flame graph makes this even clearer.


The caller of bytes.makeSlice() is the standard library's ioutil.ReadAll(), so the next step is to study how that function is implemented.

func readAll(r io.Reader, capacity int64) (b []byte, err error) {
	buf := bytes.NewBuffer(make([]byte, 0, capacity))
	defer func() {
		e := recover()
		if e == nil {
			return
		}
		if panicErr, ok := e.(error); ok && panicErr == bytes.ErrTooLarge {
			err = panicErr
		} else {
			panic(e)
		}
	}()
	_, err = buf.ReadFrom(r)
	return buf.Bytes(), err
}

// bytes.MinRead = 512
func ReadAll(r io.Reader) ([]byte, error) {
	return readAll(r, bytes.MinRead)
}

As the code shows, every call to ioutil.ReadAll allocates and initializes a buffer of size bytes.MinRead. bytes.MinRead is a Go constant with the value 512, so every call to ioutil.ReadAll starts by allocating a 512-byte buffer.
Next, look at the implementation of ReadFrom:

func (b *Buffer) ReadFrom(r io.Reader) (n int64, err error) {
	b.lastRead = opInvalid
	// If buffer is empty, reset to recover space.
	if b.off >= len(b.buf) {
		b.Truncate(0)
	}
	for {
		if free := cap(b.buf) - len(b.buf); free < MinRead {
			// not enough space at end
			newBuf := b.buf
			if b.off+free < MinRead {
				// not enough space using beginning of buffer;
				// double buffer capacity
				newBuf = makeSlice(2*cap(b.buf) + MinRead)
			}
			copy(newBuf, b.buf[b.off:])
			b.buf = newBuf[:len(b.buf)-b.off]
			b.off = 0
		}
		m, e := r.Read(b.buf[len(b.buf):cap(b.buf)])
		b.buf = b.buf[0 : len(b.buf)+m]
		n += int64(m)
		if e == io.EOF {
			break
		}
		if e != nil {
			return n, e
		}
	}
	return n, nil // err is EOF, so return nil explicitly
}

The job of ReadFrom is to read data from the io.Reader into the buffer. Whenever the buffer runs out of space, it grows it by 2x + MinRead, where MinRead is again 512 bytes.
The audio files this project reads are usually large, so the buffer keeps running out and makeSlice keeps being called to grow it, consuming a lot of memory. On its own that only means the program uses a lot of memory while it runs; it is not yet a leak. So how does the server actually run out of memory? That comes down to Go's GC behavior.

When the GC is triggered

Go's GC is a tri-color mark-and-sweep collector, so in principle these temporary buffers should be reclaimed, but the timing of GC is what causes the problem (see the sketch after this list):

  • A GC is triggered when the allocated heap reaches a threshold, which is determined by the HeapAlloc at the previous GC together with GCPercent.
  • A forced GC runs every 2 minutes, freeing unmarked objects but not returning the memory to the OS.
  • Every 5 minutes the heap is scanned, and heap memory that has not been accessed for a long time is returned to the OS.
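
The threshold in the first bullet is the knob behind the GOGC environment variable; a hedged sketch of adjusting it from code (whether lowering it is worth the extra GC CPU depends entirely on the workload):

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// The default is 100: run the next GC once the heap has grown by 100%
	// since the previous one. Lowering it makes GC run earlier and more often.
	old := debug.SetGCPercent(50)
	fmt.Println("previous GOGC value:", old)
}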

ioutil.ReadAll loads all of the data into memory, so a single large request calls makeSlice many times and allocates a lot of memory. Under concurrency, a huge amount of memory is claimed in a very short time, which pushes the GC threshold up to a very high value; at that point the only GC that will run is the forced one every 2 minutes. The data in memory therefore is not collected in time, the threshold keeps climbing, GC becomes less and less effective, and the result is a slow memory leak.

Solution

// req.AudioPack, err = ioutil.ReadAll(c.Request.Body)

// Read into a buffer with a fixed initial capacity instead of letting
// ioutil.ReadAll grow one from 512 bytes.
buffer := bytes.NewBuffer(make([]byte, 0, 6400))
_, err := io.Copy(buffer, c.Request.Body)
temp := buffer.Bytes()
req.AudioPack = temp

Instead of reading the whole file into memory in one go, we allocate a buffer of a fixed size up front.
