[root@node1 kubesphere]# ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz -h
Create a Kubernetes or KubeSphere cluster

Usage:
  kk create cluster [flags]

Flags:
  -a, --artifact string             Path to a KubeKey artifact
      --container-manager string    Container runtime: docker, crio, containerd and isula. (default "docker")
      --debug                       Print detailed information
      --download-cmd string         The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")
  -f, --filename string             Path to a configuration file
  -h, --help                        help for cluster
      --ignore-err                  Ignore the error message, remove the host which reported error and force to continue
      --namespace string            KubeKey namespace to use (default "kubekey-system")
      --skip-pull-images            Skip pre pull images
      --skip-push-images            Skip pre push images
      --with-kubernetes string      Specify a supported version of kubernetes
      --with-kubesphere             Deploy a specific version of kubesphere (default v3.4.1)
      --with-local-storage          Deploy a local PV provisioner
      --with-packages               install operation system packages by artifact
      --with-security-enhancement   Security enhancement
  -y, --yes
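Per the help text above, -a/--artifact points kk at an offline artifact, and --with-packages tells kk to also install the operating system packages bundled in that artifact (which can cover dependencies such as socat and conntrack, if they were included when the artifact was exported). A hedged sketch of such a run, reusing the file names and version from this walkthrough:

# install bundled OS packages from the artifact while creating the cluster
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages --with-kubesphere 3.4.0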
[root@node1 kubesphere]# ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-kubesphere 3.4.0
W1205 00:36:57.266052 1453 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.23.15
[preflight] Running pre-flight checks
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 20.10
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileExisting-conntrack]: conntrack not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
00:36:58 CST stdout: [master]
[preflight] Running pre-flight checks
W1205 00:36:58.323079 1534 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1205 00:36:58.327376 1534 cleanupnode.go:109] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file.
00:36:58 CST message: [master]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
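The fatal preflight error above is the missing conntrack binary; the socat and Docker-version messages are only warnings. One way to recover on a CentOS/RHEL node with reachable yum repositories is to install the missing packages and then re-run the same kk command (package names are the usual ones for these distributions, so verify them for your environment; on an air-gapped host, use an artifact exported with OS packages plus --with-packages instead):

# install the binaries that kubeadm's preflight checks require
yum install -y socat conntrack-tools
# retry the installation
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-kubesphere 3.4.0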
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://10.10.10.30:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the monitoring status of service components in "Cluster Management". If any service is not ready, please wait patiently until all components are up and running.
  2. Please change the default password after login.
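Before logging into the console, a common sanity check is to confirm on the master node that all workloads have come up (kubectl is configured there once kk finishes), for example:

# list every pod in the cluster and check that they reach Running status
kubectl get pod --all-namespaces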