$ kubectl get nodes
NAME                     STATUS   ROLES    AGE     VERSION
k3s-external-ip-master   Ready    master   7m24s   v1.16.3-k3s.2
k3s-external-ip-worker   Ready    <none>   2m21s   v1.16.3-k3s.2
$ ip r
...
10.49.0.0/16
        nexthop via 192.168.1.10 dev eth2 weight 1
        nexthop via 192.168.1.11 dev eth2 weight 1
        nexthop via 192.168.1.12 dev eth2 weight 1
...
$ ip r
...
10.49.0.0/16
        nexthop via 192.168.2.10 dev eth2 weight 1
        nexthop via 192.168.2.11 dev eth2 weight 1
        nexthop via 192.168.2.12 dev eth2 weight 1
...
10.49.62.131
        nexthop via 192.168.2.11 dev eth2 weight 1
        nexthop via 192.168.2.12 dev eth2 weight 1
$ ip r
...
192.168.3.0/24
        nexthop via 192.168.1.10 dev eth2 weight 1
        nexthop via 192.168.1.11 dev eth2 weight 1
        nexthop via 192.168.1.12 dev eth2 weight 1
...
The service's cluster IP is no longer visible outside the cluster.
external:~$ curl -m 10 10.49.62.131
curl: (28) Connection timed out after 10001 milliseconds
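The external IP range is advertised through Calico's BGPConfiguration resource. A minimal sketch of that step, assuming a default BGPConfiguration object already exists and using the 192.168.3.0/24 range that appears in the router output above:

control:~$ calicoctl patch bgpconfiguration default --patch \
  '{"spec": {"serviceExternalIPs": [{"cidr": "192.168.3.0/24"}]}}'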
We are now advertising the external IP range, but we still need to assign an external IP to the service:
control:~$ kubectl patch svc nginx \
  -p '{"spec": {"externalIPs": ["192.168.3.180"]}}'
control:~$ kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP   10.49.0.1      <none>          443/TCP        152m
nginx        NodePort    10.49.62.131   192.168.3.180   80:31890/TCP   109m
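With the external IP assigned and advertised, the service should be reachable from outside the cluster on that address. Repeating the earlier test from the external host should now return the nginx welcome page instead of timing out:

external:~$ curl -m 10 192.168.3.180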
That concludes our quick tour of how Calico advertises service cluster IPs and external service IPs beyond the Kubernetes cluster. For on-premises clusters with BGP-enabled routers, this is a simple way to reach Kubernetes services without the extra work of installing and maintaining a custom Kubernetes load balancer or Ingress controller. To learn more about this feature, see the official Calico project guide "Advertise Kubernetes Service IPs".
That said, there are also functional differences between bond and team devices. For example, a team device supports LACP load balancing, NS/NA (IPv6) link monitoring, a D-Bus interface, and more, none of which are available with bonding. For further details on the differences between bond and team devices, see "bond vs team device".

In short, if you need a feature that bonding cannot provide, use a team device.

A team device can be created as follows:
# teamd -o -n -U -d -t team0 -c '{"runner": {"name": "activebackup"},"link_watch": {"name": "ethtool"}}'
# ip link set eth0 down
# ip link set eth1 down
# teamdctl team0 port add eth0
# teamdctl team0 port add eth1
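After adding the ports, the team device still needs to be brought up and addressed before use; its runner and port status can then be inspected with teamdctl. A short follow-up sketch (the 192.168.100.1/24 address is purely illustrative):

# ip link set team0 up
# ip addr add 192.168.100.1/24 dev team0
# teamdctl team0 state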
# ip link add macvlan1 link eth0 type macvlan mode bridge
# ip link add macvlan2 link eth0 type macvlan mode bridge
# ip netns add net1
# ip netns add net2
# ip link set macvlan1 netns net1
# ip link set macvlan2 netns net2
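To use the two namespaces, bring each macvlan device up and assign it an address from eth0's subnet; a sketch with illustrative 192.168.1.0/24 addresses:

# ip netns exec net1 ip link set macvlan1 up
# ip netns exec net1 ip addr add 192.168.1.101/24 dev macvlan1
# ip netns exec net2 ip link set macvlan2 up
# ip netns exec net2 ip addr add 192.168.1.102/24 dev macvlan2

Because both devices are in bridge mode, the two namespaces can now reach each other directly, e.g. ip netns exec net1 ping -c 1 192.168.1.102.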
# ip link add ifb0 type ifb
# ip link set ifb0 up
# tc qdisc add dev ifb0 root sfq
# tc qdisc add dev eth0 handle ffff: ingress
# tc filter add dev eth0 parent ffff: u32 match u32 0 0 action mirred egress redirect dev ifb0
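To confirm that ingress traffic on eth0 really is being redirected, check the statistics of the sfq qdisc on ifb0; its packet counters should grow as eth0 receives traffic:

# tc -s qdisc show dev ifb0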
Linux supports many kinds of tunnels, but new users can be confused by their differences and unsure which one best fits a given scenario. This article gives a brief introduction to the tunnel interfaces commonly used in the Linux kernel. There is no code analysis; it only covers the interfaces and their usage on Linux. If you are interested, the iproute2 command ip link help shows the list of tunnel interfaces and help for configuring a specific tunnel.

This article covers the following commonly used interfaces:
IPIP
SIT
ip6tnl
VTI和VTI6
GRE和GRETAP
IP6GRE和IP6GRETAP
FOU
GUE
GENEVE
ERSPAN和IP6ERSPAN
After reading this article, you will know what these interfaces are, how they differ, when to use them, and how to create them.
IPIP
As the name suggests, an IPIP tunnel is an IP-over-IP tunnel, defined in RFC 2003. The IPIP tunnel header looks like this:

Outer IPv4 header | Inner IPv4 header | Payload

An IPIP tunnel can be created as follows:
On Server A:
# ip link add name ipip0 type ipip local LOCAL_IPv4_ADDR remote REMOTE_IPv4_ADDR
# ip link set ipip0 up
# ip addr add INTERNAL_IPV4_ADDR/24 dev ipip0
# Add a remote internal subnet route if the endpoints don't belong to the same subnet
# ip route add REMOTE_INTERNAL_SUBNET/24 dev ipip0
On Server B:
# ip link add name ipip0 type ipip local LOCAL_IPv4_ADDR remote REMOTE_IPv4_ADDR
# ip link set ipip0 up
# ip addr add INTERNAL_IPV4_ADDR/24 dev ipip0
# ip route add REMOTE_INTERNAL_SUBNET/24 dev ipip0
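Once both endpoints are up, the tunnel can be verified by pinging across it, for example from Server A (the placeholder below is hypothetical and stands for Server B's internal tunnel address):

# ping SERVER_B_INTERNAL_IPV4_ADDR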
SIT

Initially, the SIT tunnel only had an IPv6-over-IPv4 mode. Over years of development, however, it has gained support for several other modes, such as ipip (the same as an IPIP tunnel), ip6ip, mplsip, and so on. The mode any accepts both IPv4 and IPv6 traffic, which can be useful in some deployments. SIT tunnels also support ISATAP; see the example below for usage.

The SIT tunnel header looks like this:

Outer IPv4 header | Inner IPv6 header | Payload

After the sit module is loaded, the Linux kernel creates a default device named sit0.

A SIT tunnel can be created as follows:
On Server A:
# ip link add name sit1 type sit local LOCAL_IPv4_ADDR remote REMOTE_IPv4_ADDR mode any
# ip link set sit1 up
# ip addr add INTERNAL_IPV4_ADDR/24 dev sit1
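For the ISATAP case mentioned above, iproute2 also accepts mode isatap on a sit link; a minimal sketch:

# ip link add name isatap1 type sit local LOCAL_IPv4_ADDR mode isatap
# ip link set isatap1 up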
VTI and VTI6

# ip link add name vti1 type vti key VTI_KEY local LOCAL_IPv4_ADDR remote REMOTE_IPv4_ADDR
# ip link set vti1 up
# ip addr add LOCAL_VIRTUAL_ADDR/24 dev vti1
# ip xfrm state add src LOCAL_IPv4_ADDR dst REMOTE_IPv4_ADDR spi SPI PROTO ALGR mode tunnel
# ip xfrm state add src REMOTE_IPv4_ADDR dst LOCAL_IPv4_ADDR spi SPI PROTO ALGR mode tunnel
# ip xfrm policy add dir in tmpl src REMOTE_IPv4_ADDR dst LOCAL_IPv4_ADDR PROTO mode tunnel mark VTI_KEY
# ip xfrm policy add dir out tmpl src LOCAL_IPv4_ADDR dst REMOTE_IPv4_ADDR PROTO mode tunnel mark VTI_KEY
IP6GRE and IP6GRETAP

# ip link add name gre1 type ip6gre local LOCAL_IPv6_ADDR remote REMOTE_IPv6_ADDR
# ip link add name gretap1 type ip6gretap local LOCAL_IPv6_ADDR remote REMOTE_IPv6_ADDR
FOU

# ip fou add port 5555 ipproto 4
# ip link add name tun1 type ipip remote 192.168.1.1 local 192.168.1.2 ttl 225 encap fou encap-sport auto encap-dport 5555
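GUE, listed earlier, rides on the same ip fou machinery but adds its own encapsulation header. A sketch of the equivalent GUE setup, plus removal of a FOU listener (same illustrative addresses as above):

# ip fou add port 5555 gue
# ip link add name tun1 type ipip remote 192.168.1.1 local 192.168.1.2 ttl 225 encap gue encap-sport auto encap-dport 5555
# ip fou del port 5555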
ERSPAN and IP6ERSPAN

# ip link add dev erspan1 type erspan local LOCAL_IPv4_ADDR remote REMOTE_IPv4_ADDR seq key KEY erspan_ver 1 erspan IDX
or
# ip link add dev erspan1 type erspan local LOCAL_IPv4_ADDR remote REMOTE_IPv4_ADDR seq key KEY erspan_ver 2 erspan_dir DIRECTION erspan_hwid HWID
Add tc filter to monitor traffic
# tc qdisc add dev MONITOR_DEV handle ffff: ingress
# tc filter add dev MONITOR_DEV parent ffff: matchall skip_hw action mirred egress mirror dev erspan1
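On the monitoring destination, the mirrored traffic can be observed by creating a matching erspan device (local/remote reversed, same KEY) and capturing on it, for example:

# tcpdump -nn -i erspan1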
i  Running your build in Okteto Cloud...
[+] Building 2.8s (16/16) FINISHED
 => importing cache manifest from pchico83/hello-world:okteto   1.1s
...
✓  Source code pushed to the development environment 'hello-world'
# Create the remote docker machine on GCE. This is a pretty beefy machine with SSD disk.
KUBE_BUILD_VM=k8s-build
KUBE_BUILD_GCE_PROJECT=<project>
docker-machine create \
  --driver=google \
  --google-project=${KUBE_BUILD_GCE_PROJECT} \
  --google-zone=us-west1-a \
  --google-machine-type=n1-standard-8 \
  --google-disk-size=50 \
  --google-disk-type=pd-ssd \
  ${KUBE_BUILD_VM}

# Set up local docker to talk to that machine
eval $(docker-machine env ${KUBE_BUILD_VM})

# Pin down the port that rsync will be exposed on the remote machine
export KUBE_RSYNC_PORT=8730

# forward local 8730 to that machine so that rsync works
docker-machine ssh ${KUBE_BUILD_VM} -L ${KUBE_RSYNC_PORT}:localhost:${KUBE_RSYNC_PORT} -N &
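With the remote machine configured, builds run exactly as they do locally; Kubernetes' build/run.sh wrapper executes the build inside a container on that machine:

# Build as normal; docker commands are now sent to the remote VM
build/run.sh make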