Abstract: In Netty, one EventLoop thread can serve multiple Channels, but each Channel is bound to exactly one EventLoop, a design choice made for thread safety and synchronization. With the lettuce threads blocked on a shared monitor, repeating the stress test brought no improvement at all: CPU utilization stayed below 400% and no single core exceeded 50%, so at that point the bottleneck was not the CPU; soft interrupts (si) then showed up as the next problem.
1. The problem
A rate-limiting feature was added to the spring-cloud-gateway gateway, using the module's built-in rate-limiting filter RequestRateLimiterGatewayFilterFactory, which implements a token-bucket algorithm backed by redis.
How it works: for every rate-limiting key (for example, per-API rate limiting), redis stores two keys: tokenKey (the number of tokens) and timeKey (the time of the last call). On every API call the tokenKey value is updated to: previous value + (current time - previous time) * token refill rate. If the new tokenKey value is greater than 1 the call is allowed, otherwise it is rejected; in both cases the tokenKey and timeKey values in redis are updated. The whole procedure is implemented as a lua script (the arithmetic is sketched below).
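To make the arithmetic concrete, here is a small stand-alone Java sketch of the same refill-and-consume logic. This is illustration only: the real implementation is the lua script shipped with spring-cloud-gateway, and the class, field and parameter names below are made up.

import java.util.concurrent.TimeUnit;

public class TokenBucketSketch {

    private final double refillRatePerSecond; // tokens added per second
    private final double capacity;            // maximum number of tokens in the bucket

    private double tokens;       // conceptually the value stored under "tokenKey"
    private long lastRefillTime; // conceptually the value stored under "timeKey", in seconds

    TokenBucketSketch(double refillRatePerSecond, double capacity, long nowSeconds) {
        this.refillRatePerSecond = refillRatePerSecond;
        this.capacity = capacity;
        this.tokens = capacity;
        this.lastRefillTime = nowSeconds;
    }

    // Returns true if the call is allowed, false if it is rate limited.
    synchronized boolean tryAcquire(long nowSeconds) {
        // refill: previous value + (current time - previous time) * refill rate, capped at capacity
        double refilled = tokens + (nowSeconds - lastRefillTime) * refillRatePerSecond;
        tokens = Math.min(capacity, refilled);
        lastRefillTime = nowSeconds;
        if (tokens >= 1) {   // allow when at least one token is available (the exact comparison is per the lua script)
            tokens -= 1;
            return true;
        }
        return false;        // not enough tokens: reject the call
    }

    public static void main(String[] args) {
        long now = TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis());
        TokenBucketSketch bucket = new TokenBucketSketch(10, 20, now);
        System.out.println(bucket.tryAcquire(now)); // true: the bucket starts full
    }
}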
Before the rate limiter was added, 500 concurrent clients produced 6800 req/s with a 50th-percentile latency of 70 ms; with the rate limiter, tps dropped to 2300 req/s and the 50th-percentile latency rose to 205 ms. At the same time CPU usage fell from almost 600% (6 cores) to under 400% (the CPU could no longer be kept busy).
2. Troubleshooting and resolution

Check the CPU usage of the individual threads:
[root@auth-service imf2]# top -Hp 29360
top - 15:16:27 up 102 days, 18:04,  1 user,  load average: 1.61, 0.72, 0.34
Threads: 122 total,   9 running, 113 sleeping,   0 stopped,   0 zombie
%Cpu(s): 42.0 us,  7.0 sy,  0.0 ni, 49.0 id,  0.0 wa,  0.0 hi,  2.0 si,  0.0 st
KiB Mem :  7678384 total,   126844 free,  3426148 used,  4125392 buff/cache
KiB Swap:  6291452 total,  2212552 free,  4078900 used.  3347956 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
29415 root      20   0 6964708   1.1g  14216 R 97.9 15.1   3:01.65 java
29392 root      20   0 6964708   1.1g  14216 R 27.0 15.1   0:45.42 java
29391 root      20   0 6964708   1.1g  14216 R 24.8 15.1   0:43.95 java
29387 root      20   0 6964708   1.1g  14216 R 23.8 15.1   0:46.38 java
29388 root      20   0 6964708   1.1g  14216 R 23.4 15.1   0:48.21 java
29390 root      20   0 6964708   1.1g  14216 R 23.0 15.1   0:45.93 java
29389 root      20   0 6964708   1.1g  14216 R 22.3 15.1   0:44.36 java
Thread 29415 is almost saturating one core; find out which thread it is:
[root@auth-service imf2]# printf "%x " 29415
72e7
[root@auth-service imf2]# jstack 29360 | grep 72e7
"lettuce-nioEventLoop-4-1" #40 daemon prio=5 os_prio=0 tid=0x00007f604cc92000 nid=0x72e7 runnable [0x00007f606ce90000]
Sure enough, it is the thread doing the redis I/O, as expected.

Check redis itself: its CPU usage stays below 15% and there are no slow queries over 10 ms, so redis is unlikely to be the problem.

Look at the thread stacks.

Record a jstack snapshot every second with the following script:
[root@eureka2 jstack]# cat jstack.sh
#!/bin/sh
i=0
while [ $i -lt 30 ]; do
    /bin/sleep 1
    i=`expr $i + 1`
    jstack 29360 > "$i".txt
done
Check which functions the lettuce thread spends its time in:
"lettuce-nioEventLoop-4-1" #36 daemon prio=5 os_prio=0 tid=0x00007f1eb07ab800 nid=0x4476 runnable [0x00007f1eec8fb000] java.lang.Thread.State: RUNNABLE at sun.misc.URLClassPath$Loader.findResource(URLClassPath.java:715) at sun.misc.URLClassPath.findResource(URLClassPath.java:215) at java.net.URLClassLoader$2.run(URLClassLoader.java:569) at java.net.URLClassLoader$2.run(URLClassLoader.java:567) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findResource(URLClassLoader.java:566) at org.springframework.boot.loader.LaunchedURLClassLoader.findResource(LaunchedURLClassLoader.java:57) at java.lang.ClassLoader.getResource(ClassLoader.java:1096) at org.springframework.core.io.ClassPathResource.resolveURL(ClassPathResource.java:155) at org.springframework.core.io.ClassPathResource.getURL(ClassPathResource.java:193) at org.springframework.core.io.AbstractFileResolvingResource.lastModified(AbstractFileResolvingResource.java:220) at org.springframework.scripting.support.ResourceScriptSource.retrieveLastModifiedTime(ResourceScriptSource.java:119) at org.springframework.scripting.support.ResourceScriptSource.isModified(ResourceScriptSource.java:109) - locked <0x000000008c074d00> (a java.lang.Object) at org.springframework.data.redis.core.script.DefaultRedisScript.getSha1(DefaultRedisScript.java:89) - locked <0x000000008c074c10> (a java.lang.Object) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor.eval(DefaultReactiveScriptExecutor.java:113) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor.lambda$execute$0(DefaultReactiveScriptExecutor.java:105) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor$$Lambda$1268/1889039573.doInRedis(Unknown Source) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor.lambda$execute$6(DefaultReactiveScriptExecutor.java:167) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor$$Lambda$1269/1954779522.get(Unknown Source) at reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:46)
This shows the thread is mostly inside the ReactiveRedisTemplate execute(RedisScript…) call path, i.e. evaluating the rate-limiting lua script, and in particular inside DefaultRedisScript.getSha1 and the ResourceScriptSource.isModified check it triggers (hence the class-loader and resource-lookup frames at the top of the stack).
Hypothesis: since a single lettuce-nioEventLoop thread is saturating one core, can the problem be solved by creating several lettuce-nioEventLoop threads, so that multiple cores are used?

Source-code analysis follows:
// 1. The RedisConnectionFactory bean is created from ClientResources
@Bean
@ConditionalOnMissingBean(RedisConnectionFactory.class)
public LettuceConnectionFactory redisConnectionFactory(ClientResources clientResources) throws UnknownHostException {
    LettuceClientConfiguration clientConfig = getLettuceClientConfiguration(clientResources, this.properties.getLettuce().getPool());
    return createLettuceConnectionFactory(clientConfig);
}

// 2. The ClientResources bean is created like this
@Bean(destroyMethod = "shutdown")
@ConditionalOnMissingBean(ClientResources.class)
public DefaultClientResources lettuceClientResources() {
    return DefaultClientResources.create();
}

public static DefaultClientResources create() {
    return builder().build();
}

// 3. Creation of the EventLoopGroupProvider
protected DefaultClientResources(Builder builder) {
    this.builder = builder;

    // eventLoopGroupProvider is null by default, so this branch runs
    if (builder.eventLoopGroupProvider == null) {
        // Number of threads handling redis connections; the default is
        // Math.max(1, SystemPropertyUtil.getInt("io.netty.eventLoopThreads",
        //     Math.max(MIN_IO_THREADS, Runtime.getRuntime().availableProcessors())));
        // on a multi-core machine this usually equals the number of CPU cores
        int ioThreadPoolSize = builder.ioThreadPoolSize;

        if (ioThreadPoolSize < MIN_IO_THREADS) {
            logger.info("ioThreadPoolSize is less than {} ({}), setting to: {}", MIN_IO_THREADS, ioThreadPoolSize, MIN_IO_THREADS);
            ioThreadPoolSize = MIN_IO_THREADS;
        }

        this.sharedEventLoopGroupProvider = false;
        // create the EventLoopGroupProvider
        this.eventLoopGroupProvider = new DefaultEventLoopGroupProvider(ioThreadPoolSize);
    } else {
        this.sharedEventLoopGroupProvider = true;
        this.eventLoopGroupProvider = builder.eventLoopGroupProvider;
    }

    // remaining code omitted ...
}

// 4. The EventLoopGroupProvider creates the EventExecutorGroup
public static <T extends EventExecutorGroup> EventExecutorGroup createEventLoopGroup(Class<T> type, int numberOfThreads) {

    if (DefaultEventExecutorGroup.class.equals(type)) {
        return new DefaultEventExecutorGroup(numberOfThreads, new DefaultThreadFactory("lettuce-eventExecutorLoop", true));
    }

    // we are using the Nio transport, so this branch runs
    if (NioEventLoopGroup.class.equals(type)) {
        return new NioEventLoopGroup(numberOfThreads, new DefaultThreadFactory("lettuce-nioEventLoop", true));
    }

    if (EpollProvider.isAvailable() && EpollProvider.isEventLoopGroup(type)) {
        return EpollProvider.newEventLoopGroup(numberOfThreads, new DefaultThreadFactory("lettuce-epollEventLoop", true));
    }

    if (KqueueProvider.isAvailable() && KqueueProvider.isEventLoopGroup(type)) {
        return KqueueProvider.newEventLoopGroup(numberOfThreads, new DefaultThreadFactory("lettuce-kqueueEventLoop", true));
    }

    throw new IllegalArgumentException(String.format("Type %s not supported", type.getName()));
}

// 5. NioEventLoopGroup extends MultithreadEventLoopGroup;
//    it creates several NioEventLoop instances;
//    each NioEventLoop is single-threaded;
//    each NioEventLoop can serve multiple connections.
public class NioEventLoopGroup extends MultithreadEventLoopGroup { ... }
public abstract class MultithreadEventLoopGroup extends MultithreadEventExecutorGroup implements EventLoopGroup { ... }
public final class NioEventLoop extends SingleThreadEventLoop { ... }
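As a side note (not something the article does at this point), the I/O thread count seen in step 3 can be set explicitly by overriding the ClientResources bean, since the auto-configured one is @ConditionalOnMissingBean. A minimal sketch, assuming Lettuce 5.x on Spring Boot 2.x; as the rest of the analysis shows, more threads alone do not help while all traffic flows over a single shared connection.

import io.lettuce.core.resource.DefaultClientResources;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LettuceResourcesConfig {

    // Replaces the auto-configured ClientResources so the event-loop sizes are explicit.
    @Bean(destroyMethod = "shutdown")
    public DefaultClientResources lettuceClientResources() {
        return DefaultClientResources.builder()
                .ioThreadPoolSize(6)          // lettuce-nioEventLoop / -epollEventLoop threads
                .computationThreadPoolSize(6) // lettuce-eventExecutorLoop threads
                .build();
    }
}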
The analysis above shows that the default RedisConnectionFactory bean does support multiple I/O threads, yet jstack and the like show only a single lettuce-nioEventLoop thread.
[root@ ~]# ss | grep 6379
tcp    ESTAB      0      0      ::ffff:10.201.0.27:36184      ::ffff:10.201.0.30:6379
Checking the redis connections shows there is only one. In Netty, one EventLoop thread can serve multiple Channels, but each Channel is bound to exactly one EventLoop; this is a deliberate design choice for thread safety and synchronization. That explains why only one lettuce-nioEventLoop thread ever does any work, as the sketch below illustrates.
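A small stand-alone Netty sketch of this binding (not from the article, purely illustrative; host and port are placeholders): even though the group below has 6 threads, the single connected Channel is registered to exactly one of its EventLoops, so all I/O for that connection runs on that one thread.

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.logging.LoggingHandler;

public class SingleChannelSingleLoop {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup group = new NioEventLoopGroup(6); // 6 I/O threads available
        try {
            Bootstrap bootstrap = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new LoggingHandler());
                        }
                    });

            // placeholder address; one connection means one Channel
            Channel channel = bootstrap.connect("10.201.0.30", 6379).sync().channel();

            // The channel stays pinned to a single EventLoop for its whole lifetime,
            // so only one of the 6 threads ever does work for this connection.
            System.out.println(channel.eventLoop());
            channel.close().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}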
Next question: why is there only one connection? Back to the source code:
// 1. Creation of the RedisConnectionFactory bean
@Bean
@ConditionalOnMissingBean(RedisConnectionFactory.class)
public LettuceConnectionFactory redisConnectionFactory(ClientResources clientResources) throws UnknownHostException {
    LettuceClientConfiguration clientConfig = getLettuceClientConfiguration(clientResources, this.properties.getLettuce().getPool());
    return createLettuceConnectionFactory(clientConfig);
}

// 2. Inside createLettuceConnectionFactory(clientConfig)
private LettuceConnectionFactory createLettuceConnectionFactory(LettuceClientConfiguration clientConfiguration) {

    if (getSentinelConfig() != null) {
        return new LettuceConnectionFactory(getSentinelConfig(), clientConfiguration);
    }

    if (getClusterConfiguration() != null) {
        return new LettuceConnectionFactory(getClusterConfiguration(), clientConfiguration);
    }

    // no sentinel and no cluster, so this branch runs
    return new LettuceConnectionFactory(getStandaloneConfig(), clientConfiguration);
}

// 3. Obtaining a redis connection
private boolean shareNativeConnection = true;

public LettuceReactiveRedisConnection getReactiveConnection() {
    // shareNativeConnection is true by default
    return getShareNativeConnection()
            ? new LettuceReactiveRedisConnection(getSharedReactiveConnection(), reactiveConnectionProvider)
            : new LettuceReactiveRedisConnection(reactiveConnectionProvider);
}

LettuceReactiveRedisConnection(StatefulConnection<ByteBuffer, ByteBuffer> sharedConnection, LettuceConnectionProvider connectionProvider) {

    Assert.notNull(sharedConnection, "Shared StatefulConnection must not be null!");
    Assert.notNull(connectionProvider, "LettuceConnectionProvider must not be null!");

    this.dedicatedConnection = new AsyncConnect(connectionProvider, StatefulConnection.class);
    this.pubSubConnection = new AsyncConnect(connectionProvider, StatefulRedisPubSubConnection.class);
    // wrap the shared connection
    this.sharedConnection = Mono.just(sharedConnection);
}

protected Mono<? extends StatefulConnection<ByteBuffer, ByteBuffer>> getConnection() {
    // the shared connection is returned directly
    if (sharedConnection != null) {
        return sharedConnection;
    }
    return getDedicatedConnection();
}

// 4. Where the shared connection comes from
protected StatefulConnection<ByteBuffer, ByteBuffer> getSharedReactiveConnection() {
    return shareNativeConnection ? getOrCreateSharedReactiveConnection().getConnection() : null;
}

private SharedConnection<ByteBuffer> getOrCreateSharedReactiveConnection() {
    synchronized (this.connectionMonitor) {
        if (this.reactiveConnection == null) {
            this.reactiveConnection = new SharedConnection<>(reactiveConnectionProvider, true);
        }
        return this.reactiveConnection;
    }
}

StatefulConnection<E, E> getConnection() {
    synchronized (this.connectionMonitor) {
        // the native connection is created on the first call; afterwards the same connection is always returned
        if (this.connection == null) {
            this.connection = getNativeConnection();
        }
        if (getValidateConnection()) {
            validateConnection();
        }
        return this.connection;
    }
}
The key point in the source above is that shareNativeConnection defaults to true, which is why there is only ever one connection.
Change shareNativeConnection to false and enable the lettuce connection pool with the maximum number of connections set to 6 (one way to wire this up is sketched below).
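A configuration sketch under my own assumptions, not the article's exact setup: host, port and pool sizes are illustrative, the pool settings could equally come from spring.redis.lettuce.pool.* properties, and defining this bean replaces the auto-configured factory (it is @ConditionalOnMissingBean(RedisConnectionFactory.class)).

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;

@Configuration
public class RedisPoolConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(6); // at most 6 native connections
        poolConfig.setMaxIdle(6);
        poolConfig.setMinIdle(1);

        LettucePoolingClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
                .poolConfig(poolConfig)
                .build();

        LettuceConnectionFactory factory = new LettuceConnectionFactory(
                new RedisStandaloneConfiguration("10.201.0.30", 6379), clientConfig);
        // Stop sharing one native connection; each operation borrows a pooled connection instead.
        factory.setShareNativeConnection(false);
        return factory;
    }
}

With connection sharing disabled and the pool enabled, run the test again: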
[root@eureka2 jstack]# ss | grep 6379
tcp    ESTAB      0      0      ::ffff:10.201.0.27:48937      ::ffff:10.201.0.30:6379
tcp    ESTAB      0      0      ::ffff:10.201.0.27:35842      ::ffff:10.201.0.30:6379
tcp    ESTAB      0      0      ::ffff:10.201.0.27:48932      ::ffff:10.201.0.30:6379
tcp    ESTAB      0      0      ::ffff:10.201.0.27:48930      ::ffff:10.201.0.30:6379
tcp    ESTAB      0      0      ::ffff:10.201.0.27:48936      ::ffff:10.201.0.30:6379
tcp    ESTAB      0      0      ::ffff:10.201.0.27:48934      ::ffff:10.201.0.30:6379
[root@eureka2 jstack]# jstack 23080 | grep lettuce-epollEventLoop
"lettuce-epollEventLoop-4-6" #69 daemon prio=5 os_prio=0 tid=0x00007fcfa4012000 nid=0x5af2 runnable [0x00007fcfa81ef000]
"lettuce-epollEventLoop-4-5" #67 daemon prio=5 os_prio=0 tid=0x00007fcf94003800 nid=0x5af0 runnable [0x00007fcfa83f1000]
"lettuce-epollEventLoop-4-4" #60 daemon prio=5 os_prio=0 tid=0x00007fcfa0003000 nid=0x5ae9 runnable [0x00007fcfa8af8000]
"lettuce-epollEventLoop-4-3" #59 daemon prio=5 os_prio=0 tid=0x00007fcfb00b8000 nid=0x5ae8 runnable [0x00007fcfa8bf9000]
"lettuce-epollEventLoop-4-2" #58 daemon prio=5 os_prio=0 tid=0x00007fcf6c00f000 nid=0x5ae7 runnable [0x00007fcfa8cfa000]
"lettuce-epollEventLoop-4-1" #43 daemon prio=5 os_prio=0 tid=0x00007fcfac248800 nid=0x5a64 runnable [0x00007fd00c2b9000]
Now 6 redis connections have been established and 6 eventLoop threads have been created.

Run the stress test again; the results are as follows:
[root@hystrix-dashboard wrk]# wrk -t 10 -c 500 -d 30s --latency -T 3s -s post-test.lua "http://10.201.0.27:8888/api/v1/json"
Running 30s test @ http://10.201.0.27:8888/api/v1/json
  10 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   215.83ms  104.38ms   1.00s    75.76%
    Req/Sec   234.56     49.87    434.00     71.45%
  Latency Distribution
     50%  210.63ms
     75%  281.30ms
     90%  336.78ms
     99%  519.51ms
  69527 requests in 30.04s, 22.43MB read
Requests/sec:   2314.14
Transfer/sec:    764.53KB

[root@eureka2 jstack]# top -Hp 23080
top - 10:08:10 up 162 days, 12:31,  2 users,  load average: 2.92, 1.19, 0.53
Threads: 563 total,   9 running, 554 sleeping,   0 stopped,   0 zombie
%Cpu(s): 50.5 us, 10.2 sy,  0.0 ni, 36.2 id,  0.1 wa,  0.0 hi,  2.9 si,  0.0 st
KiB Mem :  7677696 total,   215924 free,  3308248 used,  4153524 buff/cache
KiB Swap:  6291452 total,  6291452 free,        0 used.  3468352 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
23280 root      20   0 7418804   1.3g   7404 R 42.7 17.8   0:54.75 java
23272 root      20   0 7418804   1.3g   7404 S 31.1 17.8   0:44.63 java
23273 root      20   0 7418804   1.3g   7404 S 31.1 17.8   0:44.45 java
23271 root      20   0 7418804   1.3g   7404 R 30.8 17.8   0:44.63 java
23282 root      20   0 7418804   1.3g   7404 S 30.5 17.8   0:44.96 java
23119 root      20   0 7418804   1.3g   7404 R 24.8 17.8   1:27.30 java
23133 root      20   0 7418804   1.3g   7404 R 23.8 17.8   1:29.55 java
23123 root      20   0 7418804   1.3g   7404 S 23.5 17.8   1:28.98 java
23138 root      20   0 7418804   1.3g   7404 S 23.5 17.8   1:44.19 java
23124 root      20   0 7418804   1.3g   7404 R 22.8 17.8   1:32.21 java
23139 root      20   0 7418804   1.3g   7404 R 22.5 17.8   1:29.49 java
The end result is no improvement at all: CPU utilization still does not exceed 400% and tps is still around 2300 req/s; no single CPU goes above 50%, so this time the bottleneck is not the CPU.

Check the thread states with jstack:
"lettuce-epollEventLoop-4-3" #59 daemon prio=5 os_prio=0 tid=0x00007fcfb00b8000 nid=0x5ae8 waiting for monitor entry [0x00007fcfa8bf8000] java.lang.Thread.State: BLOCKED (on object monitor) at org.springframework.data.redis.core.script.DefaultRedisScript.getSha1(DefaultRedisScript.java:88) - waiting to lock <0x000000008c1da690> (a java.lang.Object) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor.eval(DefaultReactiveScriptExecutor.java:113) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor.lambda$execute$0(DefaultReactiveScriptExecutor.java:105) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor$$Lambda$1317/1912229933.doInRedis(Unknown Source) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor.lambda$execute$6(DefaultReactiveScriptExecutor.java:167) at org.springframework.data.redis.core.script.DefaultReactiveScriptExecutor$$Lambda$1318/1719274268.get(Unknown Source) at reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:46) at reactor.core.publisher.FluxDoFinally.subscribe(FluxDoFinally.java:73) at reactor.core.publisher.FluxOnErrorResume.subscribe(FluxOnErrorResume.java:47) at reactor.core.publisher.MonoReduceSeed.subscribe(MonoReduceSeed.java:65) at reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:59) at reactor.core.publisher.MonoFlatMap.subscribe(MonoFlatMap.java:60) at reactor.core.publisher.Mono.subscribe(Mono.java:3608) at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:169) at reactor.core.publisher.MonoFlatMap.subscribe(MonoFlatMap.java:53) at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150) at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:67) at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1476) at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241) at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1476) at reactor.core.publisher.MonoProcessor.subscribe(MonoProcessor.java:457) at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150) at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1476) at reactor.core.publisher.MonoHasElement$HasElementSubscriber.onNext(MonoHasElement.java:74) at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1476) at reactor.core.publisher.MonoProcessor.onNext(MonoProcessor.java:389) at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:76) at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onNext(FluxDoFinally.java:123) at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114) at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114) at reactor.core.publisher.FluxFilter$FilterSubscriber.onNext(FluxFilter.java:107) at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:76) at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73) at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:238) at reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onNext(FluxDefaultIfEmpty.java:92) at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114) at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:76) at 
io.lettuce.core.RedisPublisher$RedisSubscription.onNext(RedisPublisher.java:270) at io.lettuce.core.RedisPublisher$SubscriptionCommand.complete(RedisPublisher.java:754) at io.lettuce.core.protocol.CommandWrapper.complete(CommandWrapper.java:59) at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:646) at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:604) at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:556) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799) at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:433) at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:330) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748)
Four of the lettuce-epollEventLoop threads are in the BLOCKED state. Back to the source code:
public class DefaultRedisScript<T> implements RedisScript<T>, InitializingBean {

    private @Nullable ScriptSource scriptSource;
    private @Nullable String sha1;
    private @Nullable Class<T> resultType;

    public String getSha1() {
        // 1. the thread must first acquire the shaModifiedMonitor lock
        synchronized (shaModifiedMonitor) {
            // the sha1 is (re)computed on the first call or whenever the lua script file has been modified;
            // otherwise the cached sha1 is returned directly
            if (sha1 == null || scriptSource.isModified()) {
                this.sha1 = DigestUtils.sha1DigestAsHex(getScriptAsString());
            }
            return sha1;
        }
    }

    public String getScriptAsString() {
        try {
            return scriptSource.getScriptAsString();
        } catch (IOException e) {
            throw new ScriptingException("Error reading script text", e);
        }
    }
}

public class ResourceScriptSource implements ScriptSource {

    // only executed on the first call or when the lua script file has been modified
    @Override
    public String getScriptAsString() throws IOException {
        synchronized (this.lastModifiedMonitor) {
            this.lastModified = retrieveLastModifiedTime();
        }
        Reader reader = this.resource.getReader();
        return FileCopyUtils.copyToString(reader);
    }

    @Override
    public boolean isModified() {
        // 2. every call checks whether the lua script has been modified,
        //    which means acquiring the lastModifiedMonitor lock as well
        synchronized (this.lastModifiedMonitor) {
            return (this.lastModified < 0 || retrieveLastModifiedTime() > this.lastModified);
        }
    }
}
Rate limiting is not that critical, and the lua script that counts API calls rarely changes, so there is no need to check the script file for modifications every time the sha1 is requested. If the script does change occasionally, a refresh endpoint can be added and called manually after the file is edited to recompute the sha1.

So the synchronization here can be dropped; I changed it to this:
public class CustomRedisScript<T> extends DefaultRedisScript<T> {

    private @Nullable String sha1;

    CustomRedisScript(ScriptSource scriptSource, Class<T> resultType) {
        setScriptSource(scriptSource);
        setResultType(resultType);
        this.sha1 = DigestUtils.sha1DigestAsHex(getScriptAsString());
    }

    @Override
    public String getSha1() {
        return sha1;
    }
}
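How the custom script gets picked up depends on the gateway version. The sketch below is my assumption only: it presumes spring-cloud-gateway resolves its rate-limiter script from a RedisScript<List<Long>> bean named redisRequestRateLimiterScript loaded from META-INF/scripts/request_rate_limiter.lua. Check GatewayRedisAutoConfiguration for your version, and note that redefining a bean with the same name may require spring.main.allow-bean-definition-overriding=true on Spring Boot 2.1+.

import java.util.List;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.data.redis.core.script.RedisScript;
import org.springframework.scripting.support.ResourceScriptSource;

@Configuration
public class RateLimiterScriptConfig {

    // Assumes CustomRedisScript's constructor is visible from this package (or made public),
    // and that the script path below matches the one your gateway version ships.
    @Bean
    @SuppressWarnings("unchecked")
    public RedisScript<List<Long>> redisRequestRateLimiterScript() {
        // The sha1 is now computed once in the constructor, so getSha1() no longer
        // synchronizes or touches the file system on every request.
        return new CustomRedisScript<>(
                new ResourceScriptSource(new ClassPathResource("META-INF/scripts/request_rate_limiter.lua")),
                (Class<List<Long>>) (Class<?>) List.class);
    }
}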
Test again; the results are as follows:
[root@hystrix-dashboard wrk]# wrk -t 10 -c 500 -d 30s -T 3s -s post-test.lua --latency "http://10.201.0.27:8888/api/v1/json"
Running 30s test @ http://10.201.0.27:8888/api/v1/json
  10 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   155.60ms  110.40ms   1.07s    67.68%
    Req/Sec   342.90     64.88    570.00     70.35%
  Latency Distribution
     50%  139.14ms
     75%  211.03ms
     90%  299.74ms
     99%  507.03ms
  102462 requests in 30.02s, 33.15MB read
Requests/sec:   3413.13
Transfer/sec:      1.10MB
CPU utilization is now around 500% and tps reaches 3400 req/s, a big improvement. Check the per-core CPU state:
[root@eureka2 imf2]# top -Hp 19021
top - 16:24:09 up 163 days, 18:47,  2 users,  load average: 3.03, 1.08, 0.47
Threads: 857 total,   7 running, 850 sleeping,   0 stopped,   0 zombie
%Cpu0  : 60.2 us, 10.0 sy,  0.0 ni,  4.3 id,  0.0 wa,  0.0 hi, 25.4 si,  0.0 st
%Cpu1  : 64.6 us, 16.3 sy,  0.0 ni, 19.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  : 65.7 us, 15.8 sy,  0.0 ni, 18.5 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  : 54.5 us, 15.8 sy,  0.0 ni, 29.5 id,  0.3 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu4  : 55.0 us, 17.8 sy,  0.0 ni, 27.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu5  : 53.2 us, 16.4 sy,  0.0 ni, 30.0 id,  0.3 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  7677696 total,   174164 free,  3061892 used,  4441640 buff/cache
KiB Swap:  6291452 total,  6291452 free,        0 used.  3687692 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
19075 root      20   0 7722156   1.2g  14488 S 41.4 15.9   0:55.71 java
19363 root      20   0 7722156   1.2g  14488 R 40.1 15.9   0:41.33 java
19071 root      20   0 7722156   1.2g  14488 R 37.1 15.9   0:56.38 java
19060 root      20   0 7722156   1.2g  14488 S 35.4 15.9   0:52.74 java
19073 root      20   0 7722156   1.2g  14488 R 35.1 15.9   0:55.83 java
cpu0 utilization has reached 95.7% and that core is almost saturated, but 25.4 of those percentage points are si (soft interrupts).

Check which type of soft interrupt it is:
[root@eureka2 imf2]# watch -d -n 1 "cat /proc/softirqs"
                    CPU0        CPU1        CPU2        CPU3        CPU4        CPU5
          HI:          0           0           0           0           0           0
       TIMER: 1629142082   990710808   852299786   606344269   586896512   566624764
      NET_TX:     291570      833710        9616        5295        5358     2012064
      NET_RX: 2563401537    32502894    31370533     6886869     6530120     6490002
       BLOCK:      18130        1681    41404591     8751054     8695636     8763338
BLOCK_IOPOLL:          0           0           0           0           0           0
     TASKLET:   39225643           0           0         817       17304     2516988
       SCHED:  782335782   442142733   378856479   248794679   238417109   259695794
     HRTIMER:          0           0           0           0           0           0
         RCU:  690827224   504025610   464412234   246695846   254062933   248859132
For NET_RX, the interrupt count on CPU0 is far higher than on the other CPUs, so the initial guess is a NIC issue.

The NIC on this machine is ens32; check its interrupt number:
[root@eureka2 imf2]# cat /proc/interrupts | grep ens
 18: 2524017495   0   0   0   0   7   IO-APIC-fasteoi   ens32
[root@eureka2 imf2]# cat /proc/irq/18/smp_affinity
01
[root@eureka2 imf2]# cat /proc/irq/18/smp_affinity_list
0
The NIC interrupt is pinned to CPU0. (01 means cpu0, 02 cpu1, 04 cpu2, 08 cpu3, 10 cpu4, 20 cpu5.)

smp_affinity is the allowed-CPU bitmask in hexadecimal; smp_affinity_list lists the CPUs the interrupt may be delivered to. A small worked example of the mask arithmetic follows.
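Purely to illustrate the bitmask arithmetic (not part of the original article), a few lines of Java that turn a set of CPU indexes into the hexadecimal mask written to smp_affinity; for example cpu1-5 give 3e:

import java.util.stream.IntStream;

public class SmpAffinityMask {

    // Each CPU n contributes bit n; the mask is printed in hex, as /proc/irq/<n>/smp_affinity expects.
    static long maskOf(int... cpus) {
        return IntStream.of(cpus).mapToLong(cpu -> 1L << cpu).reduce(0L, (a, b) -> a | b);
    }

    public static void main(String[] args) {
        System.out.printf("cpu0    -> %02x%n", maskOf(0));             // 01
        System.out.printf("cpu1-5  -> %02x%n", maskOf(1, 2, 3, 4, 5)); // 3e
    }
}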
Check whether the NIC is multi-queue:
[root@eureka2 ~]# lspci -vvv
02:00.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
        Subsystem: VMware PRO/1000 MT Single Port Adapter
        Physical Slot: 32
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort-SERR-

Since the NIC works in single-queue mode, changing the value of /proc/irq/18/smp_affinity has no effect.
RPS/RFS can be used to emulate a multi-queue NIC in software.
[root@eureka2 ~]# echo 3e > /sys/class/net/ens32/queues/rx-0/rps_cpus
[root@eureka2 rx-0]# sysctl net.core.rps_sock_flow_entries=32768
[root@eureka2 rx-0]# echo 32768 > /sys/class/net/ens32/queues/rx-0/rps_flow_cnt

Writing 3e (binary 111110) to /sys/class/net/ens32/queues/rx-0/rps_cpus spreads the emulated receive interrupts across cpu1-5.
Test again:
[root@hystrix-dashboard wrk]# wrk -t 10 -c 500 -d 30s -T 3s -s post-test.lua --latency "http://10.201.0.27:8888/api/v1/json"
Running 30s test @ http://10.201.0.27:8888/api/v1/json
  10 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   146.75ms  108.45ms   1.01s    65.53%
    Req/Sec   367.80     64.55    575.00     67.93%
  Latency Distribution
     50%  130.93ms
     75%  200.72ms
     90%  290.32ms
     99%  493.84ms
  109922 requests in 30.02s, 35.56MB read
Requests/sec:   3661.21
Transfer/sec:      1.18MB

[root@eureka2 rx-0]# top -Hp 19021
top - 09:39:49 up 164 days, 12:03,  1 user,  load average: 2.76, 2.02, 1.22
Threads: 559 total,   9 running, 550 sleeping,   0 stopped,   0 zombie
%Cpu0  : 55.1 us, 13.0 sy,  0.0 ni, 17.5 id,  0.0 wa,  0.0 hi, 14.4 si,  0.0 st
%Cpu1  : 60.1 us, 14.0 sy,  0.0 ni, 22.5 id,  0.0 wa,  0.0 hi,  3.4 si,  0.0 st
%Cpu2  : 59.5 us, 14.3 sy,  0.0 ni, 22.4 id,  0.0 wa,  0.0 hi,  3.7 si,  0.0 st
%Cpu3  : 58.6 us, 15.2 sy,  0.0 ni, 22.2 id,  0.0 wa,  0.0 hi,  4.0 si,  0.0 st
%Cpu4  : 59.1 us, 14.8 sy,  0.0 ni, 22.7 id,  0.0 wa,  0.0 hi,  3.4 si,  0.0 st
%Cpu5  : 57.7 us, 16.2 sy,  0.0 ni, 23.0 id,  0.0 wa,  0.0 hi,  3.1 si,  0.0 st
KiB Mem :  7677696 total,   373940 free,  3217180 used,  4086576 buff/cache
KiB Swap:  6291452 total,  6291452 free,        0 used.  3533812 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
19060 root      20   0 7415812   1.2g  13384 S 40.7 16.7   3:23.05 java
19073 root      20   0 7415812   1.2g  13384 R 40.1 16.7   3:20.56 java
19365 root      20   0 7415812   1.2g  13384 R 40.1 16.7   2:36.65 java

The soft interrupts are now spread across cpu1-5 as well. As for why cpu0 still shows the highest si share, my guess is that some other interrupts are also delivered to cpu0 by default.
Meanwhile tps only went from 3400 to 3600, not much of an improvement.

2.4 Adding more redis connections

After the changes above, CPU utilization still does not exceed 500%, which means there is still a bottleneck somewhere.

I tried tweaking the lettuce connection pool:
spring:
  redis:
    database: x
    host: x.x.x.x
    port: 6379
    lettuce:
      pool:
        max-active: 18
        min-idle: 1
        max-idle: 18

The main change is raising max-active from 6 to 18. Test again:
[root@hystrix-dashboard wrk]# wrk -t 10 -c 500 -d 120s -T 3s -s post-test.lua --latency "http://10.201.0.27:8888/api/v1/json"
Running 2m test @ http://10.201.0.27:8888/api/v1/json
  10 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   117.66ms   96.72ms   1.34s    86.48%
    Req/Sec   485.42     90.41    790.00     70.80%
  Latency Distribution
     50%   90.04ms
     75%  156.01ms
     90%  243.63ms
     99%  464.04ms
  578298 requests in 2.00m, 187.01MB read
Requests/sec:   4815.57
Transfer/sec:      1.56MB

All 6 CPU cores are now almost saturated, and tps went from 3600 to 4800, a clear improvement!
This shows the earlier bottleneck was the number of redis connections. But how do you tell that the TCP connections are the bottleneck? (I tried inspecting the TCP send and receive buffers and the SYN/accept queues with ss and netstat and found nothing abnormal. Leaving this open for a later investigation.)