
K8S Troubleshooting: High Pod Memory Usage

Problem Background

As shown below, when the user ran kubectl top, the Harbor pod on one of the nodes showed roughly 3.7 GiB of memory usage (other workload pods showed a similar pattern), which looks rather high overall.

[root@node02 ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
node01 Ready master 10d v1.15.12 100.1.0.10 <none>
node02 Ready master 12d v1.15.12 100.1.0.11 <none>
node03 Ready master 10d v1.15.12 100.1.0.12 <none>

[root@node02 ~]# kubectl top pod -A |grep harbor
kube-system harbor-master1-sxg2l 15m 150Mi
kube-system harbor-master2-ncvb8 8m 3781Mi
kube-system harbor-master3-2gdsn 14m 227Mi

Root Cause Analysis

As we know, a container's memory usage can be checked either with kubectl top or with docker stats, and in theory docker stats should give the more accurate figure. Checking with docker stats and adding up the per-container numbers (a quick way to do the sum is sketched right after the output) shows that Harbor actually uses only about 140 MiB in total, nowhere near 3.7 GiB:

[root@node02 ~]# docker stats |grep harbor
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM %
10a230bee3c7 k8s_nginx_harbor-master2-xxx 0.02% 14.15MiB / 94.26GiB 0.01%
6ba14a04fd77 k8s_harbor-portal_harbor-master2-xxx 0.01% 13.73MiB / 94.26GiB 0.01%
324413da20a9 k8s_harbor-jobservice_harbor-master2-xxx 0.11% 21.54MiB / 94.26GiB 0.02%
d880b61cf4cb k8s_harbor-core_harbor-master2-xxx 0.12% 33.2MiB / 94.26GiB 0.03%
186c064d0930 k8s_harbor-registryctl_harbor-master2-xxx 0.01% 8.34MiB / 94.26GiB 0.01%
52a50204a962 k8s_harbor-registry_harbor-master2-xxx 0.06% 29.99MiB / 94.26GiB 0.03%
86031ddd0314 k8s_harbor-redis_harbor-master2-xxx 0.14% 11.51MiB / 94.26GiB 0.01%
6366207680f2 k8s_harbor-database_harbor-master2-xxx 0.45% 8.859MiB / 94.26GiB 0.01%
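
A quick way to get that ~140 MiB total is to sum the MEM USAGE column directly; a minimal sketch, assuming every Harbor container reports its usage in MiB as in the output above:

docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' | grep harbor \
  | awk '{sub(/MiB/, "", $2); sum += $2} END {printf "total: %.1f MiB\n", sum}'

With the figures shown above this works out to about 141 MiB.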

What is going on here? The gap between the two commands is far too large. Reference [1] explains:

  1. kubectl top is computed as: memory.usage_in_bytes - inactive_file
  2. docker stats is computed as: memory.usage_in_bytes - cache

So the two tools collect memory differently: if the page cache is large, the figure reported by kubectl top will be inflated. Let's check the formulas against the memory_stats data actually returned by the Docker API, queried for each Harbor container:

curl -s --unix-socket /var/run/docker.sock http:/v1.24/containers/xxx/stats | jq ."memory_stats"
"memory_stats": {
"usage": 14913536,
"max_usage": 15183872,
"stats": {
"active_anon": 14835712,
"active_file": 0,
"cache": 77824,
"dirty": 0,
"hierarchical_memory_limit": 101205622784,
"hierarchical_memsw_limit": 9223372036854772000,
"inactive_anon": 4096,
"inactive_file": 73728,
...
}

"memory_stats": {
"usage": 14405632,
"max_usage": 14508032,
"stats": {
"active_anon": 14397440,
"active_file": 0,
"cache": 8192,
"dirty": 0,
"hierarchical_memory_limit": 101205622784,
"hierarchical_memsw_limit": 9223372036854772000,
"inactive_anon": 4096,
"inactive_file": 4096,
...
}

"memory_stats": {
"usage": 26644480,
"max_usage": 31801344,
"stats": {
"active_anon": 22810624,
"active_file": 790528,
"cache": 3833856,
"dirty": 0,
"hierarchical_memory_limit": 101205622784,
"hierarchical_memsw_limit": 9223372036854772000,
"inactive_anon": 0,
"inactive_file": 3043328,
...
}

"memory_stats": {
"usage": 40153088,
"max_usage": 90615808,
"stats": {
"active_anon": 35123200,
"active_file": 1372160,
"cache": 5029888,
"dirty": 0,
"hierarchical_memory_limit": 101205622784,
"hierarchical_memsw_limit": 9223372036854772000,
"inactive_anon": 0,
"inactive_file": 3657728,
...
}

"memory_stats": {
"usage": 10342400,
"max_usage": 12390400,
"stats": {
"active_anon": 8704000,
"active_file": 241664,
"cache": 1638400,
"dirty": 0,
"hierarchical_memory_limit": 101205622784,
"hierarchical_memsw_limit": 9223372036854772000,
"inactive_anon": 0,
"inactive_file": 1396736,
...
}

"memory_stats": {
"usage": 5845127168,
"max_usage": 22050988032,
"stats": {
"active_anon": 31576064,
"active_file": 3778052096,
"cache": 5813551104,
"dirty": 0,
"hierarchical_memory_limit": 101205622784,
"hierarchical_memsw_limit": 9223372036854772000,
"inactive_anon": 0,
"inactive_file": 2035499008,
...
}

"memory_stats": {
"usage": 13250560,
"max_usage": 34791424,
"stats": {
"active_anon": 12070912,
"active_file": 45056,
"cache": 1179648,
"dirty": 0,
"hierarchical_memory_limit": 101205622784,
"hierarchical_memsw_limit": 9223372036854772000,
"inactive_anon": 0,
"inactive_file": 1134592,
...
}

"memory_stats": {
"usage": 50724864,
"max_usage": 124682240,
"stats": {
"active_anon": 23502848,
"active_file": 13864960,
"cache": 41435136,
"dirty": 0,
"hierarchical_memory_limit": 101205622784,
"hierarchical_memsw_limit": 9223372036854772000,
"inactive_anon": 6836224,
"inactive_file": 6520832,
...
}
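
Taking the largest block above (usage 5845127168; as the next step shows, it belongs to the harbor-registry container) and plugging it into the two formulas makes the gap obvious:

kubectl top  : usage - inactive_file = 5845127168 - 2035499008 = 3809628160 bytes ≈ 3633 MiB
docker stats : usage - cache         = 5845127168 - 5813551104 =   31576064 bytes ≈   30 MiB

Doing the same for the other seven blocks and summing usage - inactive_file gives about 3781 MiB, which matches the harbor-master2 figure from kubectl top, while the per-container usage - cache values line up with the docker stats output.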

So, plugging the actual memory_stats data into the formulas above confirms that both the kubectl top and the docker stats results are exactly what the formulas predict. But why does Harbor's cache grow so large in the first place?

Looking at the actual environment, the Harbor component with the large cache footprint is registry (as shown below, its cache is about 5.4 GiB). Since registry stores Docker images and reads and writes large numbers of image layer files, it is expected to consume a lot of page cache in normal operation:

"memory_stats": {
"usage": 5845127168,
"max_usage": 22050988032,
"stats": {
"active_anon": 31576064,
"active_file": 3778052096,
"cache": 5813551104,
"dirty": 0,
"hierarchical_memory_limit": 101205622784,
"hierarchical_memsw_limit": 9223372036854772000,
"inactive_anon": 0,
"inactive_file": 2035499008,
...
}
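
The same counters can also be read directly from the container's memory cgroup on the node, without going through the Docker API. A minimal sketch, assuming cgroup v1 (which matches the usage_in_bytes-style counters above) and using the harbor-registry container ID from the earlier docker stats output:

# Locate the container's memory cgroup via its main process
PID=$(docker inspect -f '{{.State.Pid}}' 52a50204a962)
CGPATH=$(grep ':memory:' /proc/$PID/cgroup | cut -d: -f3)

# Raw counters behind the two formulas (the total_* entries are the hierarchical values)
cat /sys/fs/cgroup/memory${CGPATH}/memory.usage_in_bytes
grep -E 'total_cache|total_inactive_file' /sys/fs/cgroup/memory${CGPATH}/memory.stat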

Solution

We explained to the user that the figure reported by kubectl top includes the page cache used inside the container and is therefore inflated. This cache is reclaimed automatically by the kernel when memory gets tight, and it can also be released manually (see the sketch after this paragraph); we recommended using docker stats to check the actual memory usage.
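
For the manual-release option, a minimal sketch, assuming it is run as root on the node hosting the pod; note that drop_caches affects the whole node, not just one container:

sync                                  # write dirty pages back to disk first
echo 1 > /proc/sys/vm/drop_caches     # drop clean page cache node-wide

# cgroup v1 also has a per-cgroup alternative that reclaims only this
# container's pages (the path is the container's memory cgroup):
# echo 0 > /sys/fs/cgroup/memory/<container-cgroup-path>/memory.force_empty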

References

  1. https://blog.csdn.net/xyclianying/article/details/108513122