Tool Sharing: Quickly Deploying a K8S Cluster with Alibaba's Open-Source Sealer

What Is Sealer

Quoting the introduction from the official documentation[1]:

  • sealer [ˈsiːlər] is a solution for packaging, delivering, and running distributed applications. It packages a distributed application together with its dependencies, such as databases and middleware, to solve the delivery problem of complex applications.
  • The artifact built by sealer is called a "cluster image". A cluster image embeds a Kubernetes cluster, which solves the delivery-consistency problem of distributed applications.
  • A cluster image can be pushed to a registry and shared with other users, and the official repository already provides images for many common distributed software stacks that can be used directly.
  • Docker can build an operating system rootfs plus an application into a container image; sealer treats Kubernetes as the operating system, and the image built at this higher level of abstraction is the cluster image, enabling Build, Share, Run for an entire cluster (see the Kubefile sketch below).

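To make the Build, Share, Run idea concrete, below is a minimal sketch of how a cluster image is layered on top of a base Kubernetes image, following the official quick-start. Treat it as illustrative only: the Kubefile directives and CLI flags may differ slightly between sealer versions, and my-dashboard / recommended.yaml are placeholder names.

# Kubefile: start from a base cluster image and layer an application on top
FROM kubernetes:v1.19.9
COPY recommended.yaml .
CMD kubectl apply -f recommended.yaml

# Build the cluster image, share it, then run it on target hosts
[root@node1]# sealer build -t my-dashboard:latest .
[root@node1]# sealer push my-dashboard:latest
[root@node1]# sealer run my-dashboard:latest --masters xx.xx.xx.xx --passwd xxxx
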
Quickly Deploying a K8S Cluster

Prepare a node, then download and install Sealer:

[root@node1]# wget https://github.com/alibaba/sealer/releases/download/v0.1.5/sealer-v0.1.5-linux-amd64.tar.gz && tar zxvf sealer-v0.1.5-linux-amd64.tar.gz && mv sealer /usr/bin

[root@node1]# sealer version
{"gitVersion":"v0.1.5","gitCommit":"9143e60","buildDate":"2021-06-04 07:41:03","goVersion":"go1.14.15","compiler":"gc","platform":"linux/amd64"}

According to the official documentation, to deploy Kubernetes on an existing machine, just run the following command:

[root@node1]# sealer run kubernetes:v1.19.9 --masters xx.xx.xx.xx --passwd xxxx
2021-06-19 17:22:14 [WARN] [registry_client.go:37] failed to get auth info for registry.cn-qingdao.aliyuncs.com, err: auth for registry.cn-qingdao.aliyuncs.com doesn't exist
2021-06-19 17:22:15 [INFO] [current_cluster.go:39] current cluster not found, will create a new cluster new kube build config failed: stat /root/.kube/config: no such file or directory
2021-06-19 17:22:15 [WARN] [default_image.go:89] failed to get auth info, err: auth for registry.cn-qingdao.aliyuncs.com doesn't exist
Start to Pull Image kubernetes:v1.19.9
191908a896ce: pull completed
2021-06-19 17:22:49 [INFO] [filesystem.go:88] image name is registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes:v1.19.9.alpha.1
2021-06-19 17:22:49 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /var/lib/sealer/data/my-cluster || true
copying files to 10.10.11.49: 198/198
2021-06-19 17:25:22 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : cd /var/lib/sealer/data/my-cluster/rootfs && chmod +x scripts/* && cd scripts && sh init.sh
+ storage=/var/lib/docker
+ mkdir -p /var/lib/docker
+ command_exists docker
+ command -v docker
+ systemctl daemon-reload
+ systemctl restart docker.service
++ docker info
++ grep Cg
+ cgroupDriver=' Cgroup Driver: cgroupfs'
+ driver=cgroupfs
+ echo 'driver is cgroupfs'
driver is cgroupfs
+ export criDriver=cgroupfs
+ criDriver=cgroupfs
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 1
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.conf.all.rp_filter = 0
* Applying /etc/sysctl.conf ...
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
2021-06-19 17:25:26 [INFO] [runtime.go:107] metadata version v1.19.9
2021-06-19 17:25:26 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : cd /var/lib/sealer/data/my-cluster/rootfs && echo "
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.19.9
controlPlaneEndpoint: "apiserver.cluster.local:6443"
imageRepository: sea.hub:5000/library
networking:
  # dnsDomain: cluster.local
  podSubnet: 100.64.0.0/10
  serviceSubnet: 10.96.0.0/22
apiServer:
  certSANs:
    - 127.0.0.1
    - apiserver.cluster.local
    - 10.10.11.49
    - aliyun-inc.com
    - 10.0.0.2
    - 127.0.0.1
    - apiserver.cluster.local
    - 10.103.97.2
    - 10.10.11.49
    - 10.103.97.2
  extraArgs:
    etcd-servers: https://10.10.11.49:2379
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
    audit-policy-file: "/etc/kubernetes/audit-policy.yml"
    audit-log-path: "/var/log/kubernetes/audit.log"
    audit-log-format: json
    audit-log-maxbackup: '"10"'
    audit-log-maxsize: '"100"'
    audit-log-maxage: '"7"'
    enable-aggregator-routing: '"true"'
  extraVolumes:
    - name: "audit"
      hostPath: "/etc/kubernetes"
      mountPath: "/etc/kubernetes"
      pathType: DirectoryOrCreate
    - name: "audit-log"
      hostPath: "/var/log/kubernetes"
      mountPath: "/var/log/kubernetes"
      pathType: DirectoryOrCreate
    - name: localtime
      hostPath: /etc/localtime
      mountPath: /etc/localtime
      readOnly: true
      pathType: File
controllerManager:
  extraArgs:
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
    experimental-cluster-signing-duration: 876000h
  extraVolumes:
    - hostPath: /etc/localtime
      mountPath: /etc/localtime
      name: localtime
      readOnly: true
      pathType: File
scheduler:
  extraArgs:
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
  extraVolumes:
    - hostPath: /etc/localtime
      mountPath: /etc/localtime
      name: localtime
      readOnly: true
      pathType: File
etcd:
  local:
    extraArgs:
      listen-metrics-urls: http://0.0.0.0:2381
" > kubeadm-config.yaml
2021-06-19 17:25:27 [INFO] [kube_certs.go:234] APIserver altNames : {map[aliyun-inc.com:aliyun-inc.com apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost node1:node1] map[10.0.0.2:10.0.0.2 10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 10.10.11.49:10.10.11.49]}
2021-06-19 17:25:27 [INFO] [kube_certs.go:254] Etcd altnames : {map[localhost:localhost node1:node1] map[127.0.0.1:127.0.0.1 10.10.11.49:10.10.11.49 ::1:::1]}, commonName : node1
2021-06-19 17:25:30 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes || true
copying files to 10.10.11.49: 22/22
2021-06-19 17:25:43 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "admin.conf" kubeconfig file
2021-06-19 17:25:43 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
2021-06-19 17:25:43 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "scheduler.conf" kubeconfig file
2021-06-19 17:25:43 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "kubelet.conf" kubeconfig file
2021-06-19 17:25:44 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes && cp -f /var/lib/sealer/data/my-cluster/rootfs/statics/audit-policy.yml /etc/kubernetes/audit-policy.yml
2021-06-19 17:25:44 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : cd /var/lib/sealer/data/my-cluster/rootfs/scripts && sh init-registry.sh 5000 /var/lib/sealer/data/my-cluster/rootfs/registry
++ dirname init-registry.sh
+ cd .
+ REGISTRY_PORT=5000
+ VOLUME=/var/lib/sealer/data/my-cluster/rootfs/registry
+ container=sealer-registry
+ mkdir -p /var/lib/sealer/data/my-cluster/rootfs/registry
+ docker load -q -i ../images/registry.tar
Loaded image: registry:2.7.1
+ docker run -d --restart=always --name sealer-registry -p 5000:5000 -v /var/lib/sealer/data/my-cluster/rootfs/registry:/var/lib/registry registry:2.7.1
e35aeefcfb415290764773f28dd843fc53dab8d1210373ca2c0f1f4773391686
2021-06-19 17:25:45 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes || true
copying files to 10.10.11.49: 1/1
2021-06-19 17:25:46 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes || true
copying files to 10.10.11.49: 1/1
2021-06-19 17:25:47 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes || true
copying files to 10.10.11.49: 1/1
2021-06-19 17:25:48 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes || true
copying files to 10.10.11.49: 1/1
2021-06-19 17:25:49 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : echo 10.10.11.49 apiserver.cluster.local >> /etc/hosts
2021-06-19 17:25:50 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : echo 10.10.11.49 sea.hub >> /etc/hosts
2021-06-19 17:25:50 [INFO] [init.go:211] start to init master0...
[ssh][10.10.11.49]failed to run command [kubeadm init --config=/var/lib/sealer/data/my-cluster/rootfs/kubeadm-config.yaml --upload-certs -v 0 --ignore-preflight-errors=SystemVerification],output is: W0619 17:25:50.649054 122163 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.

W0619 17:25:50.702549 122163 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.9
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Hostname]: hostname "node1" could not be reached
[WARNING Hostname]: hostname "node1": lookup node1 on 10.72.66.37:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:

[ERROR ImagePull]: failed to pull image sea.hub:5000/library/kube-apiserver:v1.19.9: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
[ERROR ImagePull]: failed to pull image sea.hub:5000/library/kube-controller-manager:v1.19.9: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
[ERROR ImagePull]: failed to pull image sea.hub:5000/library/kube-scheduler:v1.19.9: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
[ERROR ImagePull]: failed to pull image sea.hub:5000/library/kube-proxy:v1.19.9: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
[ERROR ImagePull]: failed to pull image sea.hub:5000/library/pause:3.2: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
[ERROR ImagePull]: failed to pull image sea.hub:5000/library/etcd:3.4.13-0: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
[ERROR ImagePull]: failed to pull image sea.hub:5000/library/coredns:1.7.0: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
2021-06-19 17:25:52 [EROR] [run.go:55] init master0 failed, error: [ssh][10.10.11.49]run command failed [kubeadm init --config=/var/lib/sealer/data/my-cluster/rootfs/kubeadm-config.yaml --upload-certs -v 0 --ignore-preflight-errors=SystemVerification]. Please clean and reinstall

The deployment failed. From the error log, the problem is accessing the private registry that Sealer set up. The message server gave HTTP response to HTTPS client indicates that Docker is missing the corresponding insecure-registries entry. Check the Docker configuration file to confirm:

[root@node1]# cat /etc/docker/daemon.json 
{
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "insecure-registries":["127.0.0.1"],
  "data-root":"/var/lib/docker"
}

As shown above, the insecure-registries entry is indeed wrong. Since Docker was already installed on this node before the deployment, it is unclear whether this setting existed beforehand or was misconfigured by Sealer. Either way, fix it manually:

[root@node1]# cat /etc/docker/daemon.json 
{
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "insecure-registries":["sea.hub:5000"],
  "data-root":"/var/lib/docker"
}

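After editing daemon.json, restart (or reload) the Docker daemon so the new insecure-registries entry takes effect, and optionally confirm that the local registry now answers over plain HTTP. The commands below are a suggested check rather than part of the original log; sea.hub:5000 is the registry address seen in the output above. Note that restarting Docker also restarts the sealer-registry container, which comes back automatically because it was started with --restart=always.

[root@node1]# systemctl restart docker
[root@node1]# docker info | grep -A3 'Insecure Registries'   # sea.hub:5000 should now be listed
[root@node1]# curl http://sea.hub:5000/v2/_catalog            # a plain-HTTP request should now succeed
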
Run the deployment command again:

sealer run kubernetes:v1.19.9 --masters xx.xx.xx.xx --passwd xxxx
...
2021-06-19 17:43:56 [INFO] [kubeconfig.go:277] [kubeconfig] Using existing kubeconfig file: "/var/lib/sealer/data/my-cluster/admin.conf"
2021-06-19 17:43:57 [INFO] [kubeconfig.go:277] [kubeconfig] Using existing kubeconfig file: "/var/lib/sealer/data/my-cluster/controller-manager.conf"
2021-06-19 17:43:57 [INFO] [kubeconfig.go:277] [kubeconfig] Using existing kubeconfig file: "/var/lib/sealer/data/my-cluster/scheduler.conf"
2021-06-19 17:43:57 [INFO] [kubeconfig.go:277] [kubeconfig] Using existing kubeconfig file: "/var/lib/sealer/data/my-cluster/kubelet.conf"
2021-06-19 17:43:57 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes && cp -f /var/lib/sealer/data/my-cluster/rootfs/statics/audit-policy.yml /etc/kubernetes/audit-policy.yml
2021-06-19 17:43:57 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : cd /var/lib/sealer/data/my-cluster/rootfs/scripts && sh init-registry.sh 5000 /var/lib/sealer/data/my-cluster/rootfs/registry
++ dirname init-registry.sh
+ cd .
+ REGISTRY_PORT=5000
+ VOLUME=/var/lib/sealer/data/my-cluster/rootfs/registry
+ container=sealer-registry
+ mkdir -p /var/lib/sealer/data/my-cluster/rootfs/registry
+ docker load -q -i ../images/registry.tar
Loaded image: registry:2.7.1
+ docker run -d --restart=always --name sealer-registry -p 5000:5000 -v /var/lib/sealer/data/my-cluster/rootfs/registry:/var/lib/registry registry:2.7.1
docker: Error response from daemon: Conflict. The container name "/sealer-registry" is already in use by container "e35aeefcfb415290764773f28dd843fc53dab8d1210373ca2c0f1f4773391686". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
+ true
2021-06-19 17:43:58 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes || true
copying files to 10.10.11.49: 1/1
2021-06-19 17:43:59 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes || true
copying files to 10.10.11.49: 1/1
2021-06-19 17:44:00 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes || true
copying files to 10.10.11.49: 1/1
2021-06-19 17:44:01 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : mkdir -p /etc/kubernetes || true
copying files to 10.10.11.49: 1/1
2021-06-19 17:44:02 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : echo 10.10.11.49 apiserver.cluster.local >> /etc/hosts
2021-06-19 17:44:02 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : echo 10.10.11.49 sea.hub >> /etc/hosts
2021-06-19 17:44:03 [INFO] [init.go:211] start to init master0...
2021-06-19 17:46:53 [INFO] [init.go:286] [globals]join command is: apiserver.cluster.local:6443 --token comygj.c0kj18d7fh2h4xta \
--discovery-token-ca-cert-hash sha256:cd8988f9a061765914dddb24d4e578ad446d8d31b0e30dba96a89e0c4f1e7240 \
--control-plane --certificate-key b27f10340d2f89790f7e980af72cf9d54d790b53bfd4da823947d914359d6e81

2021-06-19 17:46:53 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : rm -rf .kube/config && mkdir -p /root/.kube && cp /etc/kubernetes/admin.conf /root/.kube/config
2021-06-19 17:46:53 [INFO] [init.go:230] start to install CNI
2021-06-19 17:46:53 [INFO] [init.go:250] render cni yaml success
2021-06-19 17:46:54 [INFO] [sshcmd.go:48] [ssh][10.10.11.49] : echo '
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"

# Configure the MTU to use
veth_mtu: "1550"

# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset
---
# Source: calico/templates/rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Nodes are watched to monitor for deletions.
- apiGroups: [""]
resources:
- nodes
verbs:
- watch
- list
- get
# Pods are queried to check for existence.
- apiGroups: [""]
resources:
- pods
verbs:
- get
# IPAM resources are manipulated when nodes are deleted.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
verbs:
- list
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
# Needs access to update clusterinformations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- clusterinformations
verbs:
- get
- create
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
# These permissions are only required for upgrade from v2.6, and can
# be removed after upgrade or on fresh installations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
# These permissions are required for Calico CNI to perform IPAM allocations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups: ["crd.projectcalico.org"]
resources:
- ipamconfigs
verbs:
- get
# Block affinities must also be watchable by confd for route aggregation.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
verbs:
- watch
# The Calico IPAM migration needs to get daemonsets. These permissions can be
# removed if not upgrading from an installation using host-local IPAM.
- apiGroups: ["apps"]
resources:
- daemonsets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
spec:
nodeSelector:
beta.kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container performs upgrade from host-local IPAM to calico-ipam.
# It can be deleted if this is a fresh installation, or if you have already
# upgraded to use calico-ipam.
- name: upgrade-ipam
image: sea.hub:5000/calico/cni:v3.8.2
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
volumeMounts:
- mountPath: /var/lib/cni/networks
name: host-local-net-dir
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: sea.hub:5000/calico/cni:v3.8.2
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: sea.hub:5000/calico/pod2daemon-flexvol:v3.8.2
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: sea.hub:5000/calico/node:v3.8.2
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
- name: IP_AUTODETECTION_METHOD
value: "interface=eth0"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Off"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
- name: CALICO_IPV4POOL_CIDR
value: "100.64.0.0/10"
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
httpGet:
path: /liveness
port: 9099
host: localhost
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -bird-ready
- -felix-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the directory for host-local IPAM allocations. This is
# used when upgrading from host-local to calico-ipam, and can be removed
# if not using the upgrade-ipam init container.
- name: host-local-net-dir
hostPath:
path: /var/lib/cni/networks
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml

# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
annotations:
spec:
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
containers:
- name: calico-kube-controllers
image: sea.hub:5000/calico/kube-controllers:v3.8.2
env:
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: node
- name: DATASTORE_TYPE
value: kubernetes
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r

---

apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
' | kubectl apply -f -
configmap/calico-config created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

At this point the Kubernetes cluster has been deployed. Check the cluster status:

[root@node1]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready master 2m50s v1.19.9 10.10.11.49 <none> CentOS Linux 7 (Core) 3.10.0-862.11.6.el7.x86_64 docker://19.3.0

[root@node1]# kubectl get pod -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-5565b777b6-w9mhw 1/1 Running 0 2m32s 100.76.153.65 node1
kube-system calico-node-mwkg2 1/1 Running 0 2m32s 10.10.11.49 node1
kube-system coredns-597c5579bc-dpqbx 1/1 Running 0 2m32s 100.76.153.64 node1
kube-system coredns-597c5579bc-fjnmq 1/1 Running 0 2m32s 100.76.153.66 node1
kube-system etcd-node1 1/1 Running 0 2m51s 10.10.11.49 node1
kube-system kube-apiserver-node1 1/1 Running 0 2m51s 10.10.11.49 node1
kube-system kube-controller-manager-node1 1/1 Running 0 2m51s 10.10.11.49 node1
kube-system kube-proxy-qgt9w 1/1 Running 0 2m32s 10.10.11.49 node1
kube-system kube-scheduler-node1 1/1 Running 0 2m51s 10.10.11.49 node1

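The output above shows all control-plane components and Calico in the Running state. As an extra smoke test (not part of the original log), you can schedule an ordinary workload and check that it gets an IP from the 100.64.0.0/10 pod subnet configured earlier. On a single-node cluster you may first need to drop the default master NoSchedule taint, and the node must be able to pull nginx from Docker Hub:

[root@node1]# kubectl taint nodes node1 node-role.kubernetes.io/master- || true
[root@node1]# kubectl create deployment nginx --image=nginx
[root@node1]# kubectl get pod -l app=nginx -owide

To scale the cluster out later, sealer also provides a join subcommand; check sealer join -h for the exact flags supported by your version.
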
References

  1. https://github.com/alibaba/sealer/blob/main/docs/README_zh.md