[root@node1 ~]# kubectl get pod -A -owide|grep server
ss            server-xxx-xxx       1/1     Running   0          20h   177.177.176.150   node1
ss            server-xxx-xxx       1/1     Running   0          20h   177.177.254.245   node2
ss            server-xxx-xxx       1/1     Running   0          20h   177.177.18.152    node3
[root@node1 ~]# kubectl get svc -A -owide|grep server
ss            server-svc           ClusterIP   10.96.182.195   <none>   50300/UDP
W0619 17:25:50.702549  122163 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.9
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING Hostname]: hostname "node1" could not be reached
        [WARNING Hostname]: hostname "node1": lookup node1 on 10.72.66.37:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image sea.hub:5000/library/kube-apiserver:v1.19.9: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
        [ERROR ImagePull]: failed to pull image sea.hub:5000/library/kube-controller-manager:v1.19.9: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
        [ERROR ImagePull]: failed to pull image sea.hub:5000/library/kube-scheduler:v1.19.9: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
        [ERROR ImagePull]: failed to pull image sea.hub:5000/library/kube-proxy:v1.19.9: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
        [ERROR ImagePull]: failed to pull image sea.hub:5000/library/pause:3.2: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
        [ERROR ImagePull]: failed to pull image sea.hub:5000/library/etcd:3.4.13-0: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
        [ERROR ImagePull]: failed to pull image sea.hub:5000/library/coredns:1.7.0: output: Error response from daemon: Get https://sea.hub:5000/v2/: http: server gave HTTP response to HTTPS client, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
2021-06-19 17:25:52 [EROR] [run.go:55] init master0 failed, error: [ssh][10.10.11.49]run command failed [kubeadm init --config=/var/lib/sealer/data/my-cluster/rootfs/kubeadm-config.yaml --upload-certs -v 0 --ignore-preflight-errors=SystemVerification]. Please clean and reinstall
The deployment fails. From the error log, the failure happens when kubeadm tries to pull images from the private registry that Sealer sets up itself. The message "server gave HTTP response to HTTPS client" indicates that the insecure-registries field is most likely missing from the Docker configuration. Check the Docker configuration file to confirm:
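A minimal sketch of what the configuration should contain, assuming Docker reads the default /etc/docker/daemon.json; the registry address sea.hub:5000 comes from the error output above, so adjust both path and address to your environment:

# Add the private registry to insecure-registries in /etc/docker/daemon.json
cat /etc/docker/daemon.json
{
  "insecure-registries": ["sea.hub:5000"]
}

# Restart Docker so the change takes effect, then re-run the deployment
systemctl daemon-reload
systemctl restart docker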
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networksets.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkSet
    plural: networksets
    singular: networkset

---
# Source: calico/templates/rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Nodes are watched to monitor for deletions.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - watch
      - list
      - get
  # Pods are queried to check for existence.
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
  # IPAM resources are manipulated when nodes are deleted.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
    verbs:
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  # Needs access to update clusterinformations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - clusterinformations
    verbs:
      - get
      - create
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
  # These permissions are required for Calico CNI to perform IPAM allocations.
- apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities - ipamblocks - ipamhandles verbs: - get - list - create - update - delete - apiGroups: ["crd.projectcalico.org"] resources: - ipamconfigs verbs: - get # Block affinities must also be watchable by confd for route aggregation. - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities verbs: - watch # The Calico IPAM migration needs to get daemonsets. These permissions can be # removed if not upgrading from an installation using host-local IPAM. - apiGroups: ["apps"] resources: - daemonsets verbs: - get --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: calico-node roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-node subjects: - kind: ServiceAccount name: calico-node namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: sea.hub:5000/calico/cni:v3.8.2
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: sea.hub:5000/calico/cni:v3.8.2
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: sea.hub:5000/calico/pod2daemon-flexvol:v3.8.2
          volumeMounts:
            - name: flexvol-driver-host
              mountPath: /host/driver
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: sea.hub:5000/calico/node:v3.8.2
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth0"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Off"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            - name: CALICO_IPV4POOL_CIDR
              value: "100.64.0.0/10"
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/calico-node
                - -bird-ready
                - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
      annotations:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: sea.hub:5000/calico/kube-controllers:v3.8.2
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          readinessProbe:
            exec:
              command:
                - /usr/bin/check-status
                - -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
' | kubectl apply -f -
configmap/calico-config created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
At this point the Kubernetes cluster deployment is complete. Check the cluster status:
[root@node1]# kubectl get node -owide
NAME    STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
node1   Ready    master   2m50s   v1.19.9   10.10.11.49   <none>        CentOS Linux 7 (Core)   3.10.0-862.11.6.el7.x86_64   docker://19.3.0
[root@node1]# kubectl get pod -A -owide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-5565b777b6-w9mhw   1/1     Running   0          2m32s   100.76.153.65   node1
kube-system   calico-node-mwkg2                          1/1     Running   0          2m32s   10.10.11.49     node1
kube-system   coredns-597c5579bc-dpqbx                   1/1     Running   0          2m32s   100.76.153.64   node1
kube-system   coredns-597c5579bc-fjnmq                   1/1     Running   0          2m32s   100.76.153.66   node1
kube-system   etcd-node1                                 1/1     Running   0          2m51s   10.10.11.49     node1
kube-system   kube-apiserver-node1                       1/1     Running   0          2m51s   10.10.11.49     node1
kube-system   kube-controller-manager-node1              1/1     Running   0          2m51s   10.10.11.49     node1
kube-system   kube-proxy-qgt9w                           1/1     Running   0          2m32s   10.10.11.49     node1
kube-system   kube-scheduler-node1                       1/1     Running   0          2m51s   10.10.11.49     node1
[root@node02 ~]# kubectl describe pod -n kube-system calico-node-b8w2b
...
Events:
  Type     Reason     Age                      From             Message
  ----     ------     ----                     ----             -------
  Warning  Unhealthy  58m (x111 over 3h12m)    kubelet, node01  (combined from similar events): Liveness probe failed: Get http://localhost:9099/liveness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   Pulled     43m (x36 over 3d19h)     kubelet, node01  Container image "calico/node:v3.15.1" already present on machine
  Warning  Unhealthy  8m16s (x499 over 3h43m)  kubelet, node01  Liveness probe failed: Get http://localhost:9099/liveness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  BackOff    3m31s (x437 over 3h3m)   kubelet, node01  Back-off restarting failed container
The Event log shows that the restarts are caused by calico's liveness probe failing, and the reason is fairly clear from the error itself: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), which means the connection attempt timed out [1]. Running the health check manually on the console confirms that the response really is slow (on a healthy environment it returns within milliseconds):
[root@node01 ~]# time curl -i http://localhost:9099/liveness
HTTP/1.1 204 No Content
Date: Tue, 15 Jun 2021 06:24:35 GMT

real    0m1.012s
user    0m0.003s
sys     0m0.005s
[root@node01 ~]# time curl -i http://localhost:9099/liveness
HTTP/1.1 204 No Content
Date: Tue, 15 Jun 2021 06:24:39 GMT

real    0m3.014s
user    0m0.002s
sys     0m0.005s
[root@node01 ~]# time curl -i http://localhost:9099/liveness

real    1m52.510s
user    0m0.002s
sys     0m0.013s
[root@node01 ~]# time curl -i http://localhost:9099/liveness
^C
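A side note, not part of the original session: kubelet enforces the probe's timeoutSeconds, which defaults to 1 second because the calico-node livenessProbe shown earlier does not set it, so even the 3-second response above already counts as a probe failure. You can mimic kubelet's deadline by giving curl the same timeout with -m/--max-time:

# Abort the request after 1 second, matching the default probe timeoutSeconds;
# curl exits with code 28 (operation timed out) when the endpoint is this slow
time curl -m 1 -i http://localhost:9099/liveness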
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6754 root      20   0   66.8g  25.1g 290100 S 700.0 17.3   2971:49 java
25214 root      20   0 6309076 179992  37016 S  36.8  0.1 439:06.29 kubelet
20331 root      20   0 3196660 172364  24908 S  21.1  0.1 349:56.64 dockerd
[root@node01 ~]# time curl -i http://localhost:9099/liveness
HTTP/1.1 204 No Content
Date: Tue, 15 Jun 2021 14:48:38 GMT

real    0m0.011s
user    0m0.004s
sys     0m0.004s
[root@node01 ~]# time curl -i http://localhost:9099/liveness
HTTP/1.1 204 No Content
Date: Tue, 15 Jun 2021 14:48:39 GMT

real    0m0.010s
user    0m0.001s
sys     0m0.005s
[root@node01 ~]# time curl -i http://localhost:9099/liveness
HTTP/1.1 204 No Content
Date: Tue, 15 Jun 2021 14:48:40 GMT

real    0m0.011s
user    0m0.002s
Code location: https://github.com/kubernetes/kubernetes/blob/v1.15.12/pkg/proxy/iptables/proxier.go#L667-L1446

// Sync rules.
// NOTE: NoFlushTables is used so we don't flush non-kubernetes chains in the table
proxier.iptablesData.Reset()
proxier.iptablesData.Write(proxier.filterChains.Bytes())
proxier.iptablesData.Write(proxier.filterRules.Bytes())
proxier.iptablesData.Write(proxier.natChains.Bytes())
proxier.iptablesData.Write(proxier.natRules.Bytes())
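This buffer is then handed to iptables-restore in a single call, so the cost of every sync grows with the total number of Services and Endpoints in the cluster. To get a rough idea of how much data kube-proxy rewrites on a node, you can count the rules it manages (illustrative commands, not from the original article):

# Number of KUBE-* entries kube-proxy maintains in the nat table
iptables-save -t nat | grep -c 'KUBE-'

# Total number of lines iptables-restore has to parse on each sync
iptables-save -t nat | wc -l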
NAME: guestbook-demo
LAST DEPLOYED: Mon Feb 24 18:08:02 2020
NAMESPACE: helm-demo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w guestbook-demo --namespace helm-demo'
  export SERVICE_IP=$(kubectl get svc --namespace helm-demo guestbook-demo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:3000
$ helm list -n helm-demo
NAME             NAMESPACE   REVISION   UPDATED                                   STATUS     CHART             APP VERSION
guestbook-demo   helm-demo   1          2020-02-24 18:08:02.017401264 +0000 UTC   deployed   guestbook-0.2.0
When we check the release again with helm list -n helm-demo, we can see that the revision and the updated timestamp have changed:
$ helm list -n helm-demo
NAME             NAMESPACE   REVISION   UPDATED                                  STATUS     CHART             APP VERSION
guestbook-demo   helm-demo   2          2020-02-25 14:23:27.06732381 +0000 UTC   deployed   guestbook-0.2.0
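The jump from revision 1 to revision 2 comes from a helm upgrade run between the two listings. The exact command is not shown above; a typical invocation against the same chart would look like the following (the chart path is illustrative, not taken from the original walkthrough):

$ helm upgrade guestbook-demo ./guestbook --namespace helm-demo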
$ kubectl create namespace repo-demo
$ helm install guestbook-demo helm101/guestbook --namespace repo-demo
NAME: guestbook-demo
LAST DEPLOYED: Tue Feb 25 15:40:17 2020
NAMESPACE: repo-demo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w guestbook-demo --namespace repo-demo'
  export SERVICE_IP=$(kubectl get svc --namespace repo-demo guestbook-demo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:3000
Check that the release was deployed as expected, as shown below:
$ helm list -n repo-demo
NAME             NAMESPACE   REVISION   UPDATED                                   STATUS     CHART             APP VERSION
guestbook-demo   repo-demo   1          2020-02-25 15:40:17.627745329 +0000 UTC   deployed   guestbook-0.2.1
The Health Probe pattern requires every container to implement specific APIs that help the platform observe and manage the application in the healthiest way possible. To be fully automatable, a cloud-native application must be highly observable, so that its state can be inferred and Kubernetes can detect whether the application is up and whether it is ready to serve requests. These observations influence the lifecycle management of Pods and the way traffic is routed to the application.
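As a concrete illustration (a minimal sketch, not taken from the original text; the deployment name and image are placeholders), liveness and readiness probes are declared per container in the Pod spec, just like the calico-node probes shown earlier:

$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app             # hypothetical application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.21    # stand-in image serving HTTP on port 80
        ports:
        - containerPort: 80
        livenessProbe:       # restart the container if this check keeps failing
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          failureThreshold: 3
        readinessProbe:      # keep the Pod out of Service endpoints until this passes
          httpGet:
            path: /
            port: 80
          periodSeconds: 5
EOF

A failing liveness probe causes kubelet to restart the container, while a failing readiness probe only removes the Pod from Service endpoints; choosing the right check for each avoids unnecessary restarts like the calico-node case above.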