Reference: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/calico.md
For the kubespray deployment, change the defaults to the following:
BGP mode
To enable BGP no-encapsulation mode:
calico_ipip_mode: 'Never'
calico_vxlan_mode: 'Never'
calico_network_backend: 'bird'
- Check the current BGP status
root@k8s-master71u:~# calicoctl get nodes
NAME
k8s-master71u
k8s-master72u
k8s-master73u
k8s-node75u
k8s-node76u
root@k8s-master71u:~# calicoctl get ippool -o wide
NAME           CIDR            NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR
default-pool   10.201.0.0/16   true   Never      Always      false      false              all()
# The output below shows: The BGP backend process (BIRD) is not running.
root@k8s-master71u:~# calicoctl node status
Calico process is running.
The BGP backend process (BIRD) is not running.
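BIRD is not running because the cluster was deployed with the vxlan backend. One way to confirm this (a sketch; calico-config is the ConfigMap name used by kubespray's manifests, the same one edited later in this doc):
# Print the backend calico-node was deployed with; expect "vxlan" at this point
kubectl -n kube-system get configmap calico-config -o jsonpath='{.data.calico_backend}'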
- Edit the kubespray project settings
(kubespray-venv) [root@ansible kubespray]# vim inventory/mycluster/group_vars/k8s_cluster/k8s-net-calico.yml
## The defaults:
# Set calico network backend: "bird", "vxlan" or "none"
# bird enables BGP routing, required for ipip and no-encapsulation modes
# calico_network_backend: vxlan
# IP in IP and VXLAN are mutually exclusive modes.
# set IP in IP encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_ipip_mode: 'Never'
# set VXLAN encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_vxlan_mode: 'Always'
## Change to:
# Set calico network backend: "bird", "vxlan" or "none"
# bird enables BGP routing, required for ipip and no-encapsulation modes
calico_network_backend: 'bird'
# IP in IP and VXLAN are mutually exclusive modes.
# set IP in IP encapsulation mode: "Always", "CrossSubnet", "Never"
calico_ipip_mode: 'Never'
# set VXLAN encapsulation mode: "Always", "CrossSubnet", "Never"
calico_vxlan_mode: 'Never'
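On a fresh cluster these values would take effect by running the cluster playbook, roughly as follows (a sketch, assuming the standard kubespray invocation and the inventory path used above):
# Deploy the cluster with the updated Calico settings
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml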
- Deploy (this running cluster cannot simply be redeployed with ansible, so the settings are changed directly in k8s)
- 1. Switch calico_network_backend to bird
root@k8s-master71u:~# vim /etc/kubernetes/calico-config.yml
data:
  cluster_type: "kubespray"
  calico_backend: "bird"
root@k8s-master71u:~# kubectl apply -f /etc/kubernetes/calico-config.yml
configmap/calico-config configured
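calico-node only reads this value at container start (in the standard manifests it is sourced into the CALICO_NETWORKING_BACKEND env var from this ConfigMap), so the change takes effect once the calico pods are restarted below. A quick check inside a running pod (pod name taken from the listing further down; substitute one from your cluster):
kubectl -n kube-system exec calico-node-4qhfj -- printenv CALICO_NETWORKING_BACKEND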
- 2. Set the calico ipip and vxlan modes to Never
Change the settings: "ipipMode":"Never", "vxlanMode":"Never"
root@k8s-master71u:~# calicoctl patch felixconfig default -p '{"spec":{"vxlanEnabled":false}}'
# Warning: once this is applied, pods will not be able to reach each other until calico is restarted
root@k8s-master71u:~# calicoctl patch ippool default-pool -p '{"spec":{"ipipMode":"Never", "vxlanMode":"Never"}}'
Successfully patched 1 'IPPool' resource
root@k8s-master71u:~# calicoctl get ippool -o wide
NAME           CIDR            NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR
default-pool   10.201.0.0/16   true   Never      Never       false      false              all()
Restart calico
root@k8s-master71u:~# kubectl get pod -n kube-system | grep calico
calico-kube-controllers-777ff6cddb-d4htx   1/1   Running   8 (79d ago)   105d
calico-node-4qhfj                          1/1   Running   8 (79d ago)   105d
calico-node-9wrql                          1/1   Running   7 (79d ago)   105d
calico-node-9xzmn                          1/1   Running   7 (79d ago)   105d
calico-node-m975z                          1/1   Running   8 (79d ago)   105d
calico-node-s9w4x                          1/1   Running   9 (79d ago)   105d
# Restart calico by deleting the pods
root@k8s-master71u:~# for i in `kubectl get pod -n kube-system | grep calico | awk '{print $1}'`;do kubectl delete pod $i -n kube-system ;done
pod "calico-kube-controllers-777ff6cddb-d4htx" deleted
pod "calico-node-4qhfj" deleted
pod "calico-node-9wrql" deleted
pod "calico-node-9xzmn" deleted
pod "calico-node-m975z" deleted
pod "calico-node-s9w4x" deleted
root@k8s-master71u:~# kubectl get pod -n kube-system | grep calico
calico-kube-controllers-777ff6cddb-xbmt2   1/1   Running   0   26s
calico-node-fn4pp                          1/1   Running   0   25s
calico-node-vrmk7                          1/1   Running   0   24s
calico-node-w8k8p                          1/1   Running   0   23s
calico-node-x8j4b                          1/1   Running   0   24s
calico-node-zc5pm                          1/1   Running   0   24s
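An alternative to deleting the pods one by one is a rolling restart (a sketch, assuming the standard kubespray resource names, which match the pod names above):
# Roll the calico daemonset and the controllers deployment
kubectl -n kube-system rollout restart daemonset/calico-node
kubectl -n kube-system rollout restart deployment/calico-kube-controllers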
# The routes to the other nodes now all go out via the physical interface ens160
root@k8s-master71u:~# ip route
default via 192.168.1.1 dev ens160 proto static
10.201.14.128/26 via 192.168.1.75 dev ens160 proto bird
10.201.96.0/26 via 192.168.1.73 dev ens160 proto bird
10.201.126.0/26 via 192.168.1.72 dev ens160 proto bird
blackhole 10.201.133.0/26 proto bird
10.201.133.8 dev cali755b97c9930 scope link
10.201.133.10 dev cali0c704e12b74 scope link
10.201.133.14 dev cali95c22b6e0f1 scope link
10.201.133.22 dev cali2787253e85e scope link
10.201.133.29 dev cali3e0b78920ca scope link
10.201.255.192/26 via 192.168.1.76 dev ens160 proto bird
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev br-9b299fee73c0 proto kernel scope link src 172.18.0.1 linkdown
192.168.1.0/24 dev ens160 proto kernel scope link src 192.168.1.71
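The encapsulation devices may still exist on the nodes, but they should no longer carry any pod routes; a quick sanity check (tunl0 is Calico's IPIP device, vxlan.calico its VXLAN device):
# Expect no output: no routes point at the tunnel devices anymore
ip route | grep -E 'tunl0|vxlan.calico'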
- 3. Test whether cross-node networking works
root@k8s-master71u:~# kubectl get pod -o wide
NAME                           READY   STATUS    RESTARTS      AGE    IP               NODE          NOMINATED NODE   READINESS GATES
redisinsight-cf7f6847b-s2zgg   1/1     Running   5 (79d ago)   91d    10.201.255.199   k8s-node76u   <none>           <none>
test-nginx                     1/1     Running   7 (79d ago)   104d   10.201.14.162    k8s-node75u   <none>           <none>
web2-5d48fb75c5-dt5xd          1/1     Running   7 (79d ago)   104d   10.201.255.230   k8s-node76u   <none>           <none>
web2-5d48fb75c5-ggmrz          1/1     Running   7 (79d ago)   104d   10.201.14.136    k8s-node75u   <none>           <none>
web2-5d48fb75c5-jsvck          1/1     Running   7 (79d ago)   104d   10.201.255.217   k8s-node76u   <none>           <none>
# Cross-node test: the network is reachable
root@k8s-master71u:~# kubectl exec -ti test-nginx sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 10.201.255.230
PING 10.201.255.230 (10.201.255.230): 56 data bytes
64 bytes from 10.201.255.230: seq=0 ttl=62 time=0.445 ms
64 bytes from 10.201.255.230: seq=1 ttl=62 time=0.364 ms
64 bytes from 10.201.255.230: seq=2 ttl=62 time=0.447 ms
64 bytes from 10.201.255.230: seq=3 ttl=62 time=0.357 ms
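A TTL of 62 is consistent with plain routing: the TTL starts at 64 and is decremented once on the source node and once on the destination node, with no tunnel in between. To see exactly which route a node uses for the target pod IP:
# On k8s-master71u this should resolve to the bird-learned route shown earlier: via 192.168.1.76 dev ens160
ip route get 10.201.255.230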
# Rebooting the nodes also causes no problems; if these are production hosts, skip the reboot
root@k8s-master71u:~# reboot
root@k8s-master72u:~# reboot
root@k8s-master73u:~# reboot
root@k8s-node75u:~# reboot
root@k8s-node76u:~# reboot
- Check the BGP status
root@k8s-master71u:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 192.168.1.72 | node-to-node mesh | up | 15:53:34 | Established |
| 192.168.1.73 | node-to-node mesh | up | 15:54:20 | Established |
| 192.168.1.75 | node-to-node mesh | up | 15:53:58 | Established |
| 192.168.1.76 | node-to-node mesh | up | 15:54:40 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
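For a lower-level view, BIRD can also be queried directly inside any calico-node pod (a sketch; the pod name comes from the listing above, and the bird.ctl socket path assumes the stock calico-node image):
# List BIRD protocol/peer state; the mesh sessions should all show Established
kubectl -n kube-system exec calico-node-fn4pp -- birdcl -s /var/run/calico/bird.ctl show protocols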