Analysis and processing of Java thread leaks
Nov 02, 2016 02:31 PM

1. Symptoms in production and initial analysis
Recently I noticed the system's memory consumption growing steadily. At first I didn't pay much attention and just tweaked a few JVM parameters. But a few days ago memory filled up completely and the JVM went into continuous Full GC — this was clearly a memory leak.
I assumed it was some fairly basic mistake, so my first instinct was to look at what code the program was actually running:

jstack -l
"I/O dispatcher 125" #739 prio=5 os_prio=0 tid=0x0000000002394800 nid=0x1e2a runnable [0x00007f5c2125b000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
	at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
	at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
	- locked <0x00000007273401d0> (a sun.nio.ch.Util$2)
	- locked <0x00000007273401c0> (a java.util.Collections$UnmodifiableSet)
	- locked <0x00000007273401e0> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
	at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:257)
	at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:590)
	at java.lang.Thread.run(Thread.java:745)

   Locked ownable synchronizers:
	- None

"pool-224-thread-1" #738 prio=5 os_prio=0 tid=0x00007f5c463f4000 nid=0x1e29 runnable [0x00007f5c2024b000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
	at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
	at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
	- locked <0x0000000727340478> (a sun.nio.ch.Util$2)
	- locked <0x0000000727340468> (a java.util.Collections$UnmodifiableSet)
	- locked <0x0000000727340488> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:342)
	at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:191)
	at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
	at java.lang.Thread.run(Thread.java:745)

   Locked ownable synchronizers:
	- None
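What jstack plus grep does by hand can also be sketched in-process with ThreadMXBean: enumerate the live threads and count those whose names share the leaking pool's prefix. A minimal sketch — the "I/O dispatcher" prefix is taken from the dump above, and the started threads here are stand-ins for the leaked reactor workers, not the real HttpClient threads:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCounter {

    // Count live threads whose names start with the given prefix,
    // e.g. "I/O dispatcher" for HttpAsyncClient reactor workers.
    public static long countByPrefix(String prefix) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long count = 0;
        for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
            if (info != null && info.getThreadName().startsWith(prefix)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        // Stand-ins for leaked reactor threads: parked and idle.
        for (int i = 0; i < 3; i++) {
            Thread t = new Thread(() -> {
                try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
            }, "I/O dispatcher " + i);
            t.setDaemon(true);
            t.start();
        }
        Thread.sleep(100); // give the threads a moment to register
        System.out.println(countByPrefix("I/O dispatcher"));
    }
}
```

Watching this number climb over time is the quickest confirmation that threads are leaking rather than merely idling.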
None of the following lines of attack solved it (recorded for my own benefit — feel free to skip):
Inspect the thread stacks and check the call sites for problems. This usually does solve the issue, but the stacks above carry almost no useful information, so nothing could be located.
Googled for large numbers of threads parked in epollWait and found the symptom matched a known epoll NIO bug — for a moment I thought I had hit an intractable low-level problem. My first reaction was to check the HttpClient project's bug log, and a recent release really had fixed something similar, but upgrading to the latest version changed nothing. On reflection that made sense: my usage scenario is perfectly ordinary.
Ran jmap -histo and checked the packages of the suspicious objects: all HttpClient. I audited every related call site in our code, and it all looked correct.
Brought out jvisualvm's profiler, which could confirm the leak but not pinpoint the cause.
At this point I was stuck. Other work came up, so I let it sit for a few days while mulling it over. Fortunately the leak was slow, so there was no urgency.
2. How to analyze a thread leak
The key to this kind of problem: you must find out exactly what is leaking the threads!
While Googling, it struck me that the JDK's own tools should be able to trace object references. Eventually I found that jhat (the Java Heap Analysis Tool) was exactly what I needed.
The approach that finally worked:
jmap -F -dump:format=b,file=tomcat.bin
jhat -J-Xmx4g tomcat.bin
Then browse the references to the relevant objects. OQL works too, but simply clicking through the links in jhat's web UI was enough.
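When attaching jmap externally is awkward (the -F force flag above suggests the process was already unresponsive), the same HPROF binary dump can be triggered from inside the JVM via the HotSpot diagnostic MXBean. A sketch, assuming a HotSpot JVM; the file name heap.hprof is arbitrary (recent JDKs require the .hprof suffix):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class HeapDumper {

    // Dump the heap in HPROF binary format, equivalent to jmap -dump:format=b.
    // live = true triggers a GC first and dumps only reachable objects.
    public static void dump(String path, boolean live) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, live);
    }

    public static void main(String[] args) throws Exception {
        File out = new File("heap.hprof");
        if (out.exists()) out.delete(); // dumpHeap refuses to overwrite a file
        dump(out.getPath(), true);
        System.out.println(out.exists() && out.length() > 0);
    }
}
```

The resulting file can be fed straight into jhat (or a modern analyzer such as Eclipse MAT) just like a jmap-produced dump.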
3. Pinpointing the cause and fixing it
The earlier heap histogram showed the following suspicious objects:
$ cat histo | grep org.apache.http. | grep 1944 | less
 197:  1944  217728  org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl
 232:  1944  171072  org.apache.http.impl.nio.conn.CPool
 233:  1944  171072  org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor
 248:  1944  155520  org.apache.http.impl.nio.reactor.BaseIOReactor
 249:  1944  155520  org.apache.http.impl.nio.reactor.IOSessionImpl
 276:  1944  139968  org.apache.http.impl.nio.client.InternalHttpAsyncClient
 277:  1944  139968  org.apache.http.impl.nio.conn.CPoolEntry
 323:  1944  108864  org.apache.http.impl.nio.client.MainClientExec
 363:  1944   93312  org.apache.http.impl.nio.codecs.DefaultHttpResponseParser
 401:  1944   77760  org.apache.http.impl.nio.reactor.SessionInputBufferImpl
 402:  1944   77760  org.apache.http.impl.nio.reactor.SessionOutputBufferImpl
 403:  1944   77760  org.apache.http.nio.protocol.HttpAsyncRequestExecutor$State
 442:  1944   62208  org.apache.http.impl.cookie.DefaultCookieSpecProvider
 443:  1944   62208  org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager
 444:  1944   62208  org.apache.http.nio.conn.ssl.SSLIOSessionStrategy
 445:  1944   62208  org.apache.http.nio.pool.AbstractNIOConnPool$2
 511:  1944   46656  [Lorg.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker;
 512:  1944   46656  [Lorg.apache.http.impl.nio.reactor.BaseIOReactor;
 513:  1944   46656  org.apache.http.conn.ssl.DefaultHostnameVerifier
 514:  1944   46656  org.apache.http.impl.cookie.DefaultCookieSpec
 515:  1944   46656  org.apache.http.impl.cookie.NetscapeDraftSpecProvider
 516:  1944   46656  org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1
 517:  1944   46656  org.apache.http.impl.nio.client.InternalIODispatch
 518:  1944   46656  org.apache.http.impl.nio.codecs.DefaultHttpRequestWriter
 519:  1944   46656  org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager$ConfigData
 520:  1944   46656  org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager$InternalAddressResolver
 521:  1944   46656  org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager$InternalConnectionFactory
 522:  1944   46656  org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker
 523:  1944   46656  org.apache.http.nio.protocol.HttpAsyncRequestExecutor
 603:  1944   31104  org.apache.http.client.protocol.RequestExpectContinue
 604:  1944   31104  org.apache.http.conn.routing.BasicRouteDirector
 605:  1944   31104  org.apache.http.impl.auth.HttpAuthenticator
 606:  1944   31104  org.apache.http.impl.conn.DefaultRoutePlanner
 607:  1944   31104  org.apache.http.impl.cookie.IgnoreSpecProvider
 608:  1944   31104  org.apache.http.impl.nio.SessionHttpContext
 609:  1944   31104  org.apache.http.impl.nio.reactor.AbstractIOReactor$1
 610:  1944   31104  org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor
The next step was to find out who was creating these objects. Many of the suspects are internal fields of other objects, so you start from the outermost one. This was part guesswork, part browsing, and it turned out to be InternalHttpAsyncClient: clicking into it revealed a pile of instances, and finally the leaked objects. OQL works as well:

select referrers(c) from org.apache.http.impl.nio.client.InternalHttpAsyncClient c
instance of org.apache.http.impl.nio.client.InternalHttpAsyncClient@0x932be638 (128 bytes)
Class: class org.apache.http.impl.nio.client.InternalHttpAsyncClient
Instance data members: ...
References to this object:
    org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1@0x932be6c8 (40 bytes) : field this$0
    com.aliyun.mqs.common.http.DefaultServiceClient@0x931cc588 (32 bytes) : field httpClient
So Aliyun's MQS client was the one creating these objects. Looking at the code, it appeared correct at first glance — but in fact the connection was simply never closed. The problematic code came from the Aliyun MQS documentation; the latest docs on the official site have since switched to org.eclipse.jetty.client.HttpClient, again without an explicit call to its stop method, so let's hope that library doesn't suffer the same problem.
@Service
public class AliyunService implements IAliyunService {

    private static Logger logger = Logger.getLogger(AliyunService.class.getName());

    @Autowired
    private AliyunConfig aliyunConfig;

    @Override
    public void sendMessage(String content) {
        MQSClient client = new DefaultMQSClient(aliyunConfig.mqEndpoint,
                aliyunConfig.mqAccessId, aliyunConfig.mqAccessKey);
        String queueName = aliyunConfig.mqQueue;
        try {
            CloudQueue queue = client.getQueueRef(queueName);
            // BUG: queue is never closed; it needs a closing step at the end:
            // finally { queue.close(); }
            Message message = new Message();
            message.setMessageBody(content);
            queue.putMessage(message);
        } catch (Exception e) {
            logger.warning(e.getMessage());
        }
    }
}
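The fix is to release the queue (and its underlying client) whether or not sending succeeds. Since the MQS classes aren't reproduced here, the sketch below uses a hypothetical stand-in resource (FakeQueue) to show the try/finally shape the corrected sendMessage should have:

```java
// Hypothetical stand-in for CloudQueue: any per-call resource that
// owns threads/connections and must be closed after use.
class FakeQueue implements AutoCloseable {
    boolean closed = false;

    void putMessage(String body) {
        // pretend to send the message
    }

    @Override
    public void close() {
        closed = true; // the real close() would stop the underlying ServiceClient
    }
}

public class SendFixed {

    // Mirrors the corrected sendMessage: the resource is released even on failure.
    static FakeQueue sendMessage(String content) {
        FakeQueue queue = new FakeQueue();
        try {
            queue.putMessage(content);
        } finally {
            queue.close(); // the step missing from the original code
        }
        return queue;
    }

    public static void main(String[] args) {
        FakeQueue q = sendMessage("hello");
        System.out.println(q.closed); // true: no reactor threads left behind
    }
}
```

An even better fix is to create one client for the whole service lifetime instead of one per call, so there is nothing to leak per message in the first place.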
Here is the corresponding close logic from the source in the MQS jar:
public final class CloudQueue {
    private ServiceClient serviceClient;
    ...
    public void close() {
        if (this.serviceClient != null) {
            this.serviceClient.close();
        }
    }
}
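To see why a forgotten close() leaks threads, here is a small stand-alone demo: each object owning a thread pool pins its worker threads until it is shut down, just as each un-closed DefaultServiceClient kept its reactor threads alive. This uses plain ExecutorService pools as stand-ins, not the MQS classes:

```java
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LeakDemo {

    // Returns {before, during, after} JVM thread counts.
    static int[] run() throws Exception {
        int before = ManagementFactory.getThreadMXBean().getThreadCount();

        // Simulate creating a "client" per request without closing it:
        // every pool pins one worker thread, like the leaked reactors.
        List<ExecutorService> leaked = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            ExecutorService pool = Executors.newFixedThreadPool(1);
            pool.submit(() -> { }); // force the worker thread to start
            leaked.add(pool);
        }
        Thread.sleep(200); // let the workers spin up
        int during = ManagementFactory.getThreadMXBean().getThreadCount();

        // The fix, analogous to calling queue.close(): release each pool.
        for (ExecutorService pool : leaked) {
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        Thread.sleep(200); // let the threads terminate
        int after = ManagementFactory.getThreadMXBean().getThreadCount();
        return new int[] { before, during, after };
    }

    public static void main(String[] args) throws Exception {
        int[] counts = run();
        System.out.println(counts[1] - counts[0] >= 10); // ten leaked workers
        System.out.println(counts[2] < counts[1]);       // reclaimed after shutdown
    }
}
```

The "during" count only ever grows if the pools are never shut down — exactly the pattern the production jstack output showed, one reactor thread group per leaked HttpAsyncClient.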
Mystery solved! With this fixed, the problem was gone.
4. Summary
First, solving this really came down to knowing the JDK tools and using them well. My shallow understanding of jhat kept me from reaching for it immediately; next time a memory problem comes up, I'll have sharper tools at hand.
Second, I'm now familiar with what a thread leak looks like. The way to solve one is still to find the objects that own the threads — at bottom, it is an object leak.
Finally, a genuine complaint about Alibaba: integrating Alibaba Baichuan earlier was already painful, and now another basic mistake. I've always considered Alibaba China's strongest software company, but the quality of its rank-and-file engineering is mediocre, and its development quality control clearly has problems.
