SDN (Software Defined Network)
Having read some related material, I'm recording my own understanding of SDN here. My knowledge is limited, so corrections are welcome.
SDN, software-defined networking, aims to use software to emulate network devices such as switches and routers.
Why is this needed? One major reason is the rapid growth of cloud computing, which brings more flexible and more complex networking requirements to traditional data centers.
Traditional network devices handle the physical networking of data-center servers; on top of that, SDN provides the connectivity between virtual machines and containers.
OVS (Open vSwitch), in the project's own words: Open vSwitch is a production-quality, multilayer virtual switch licensed under the open-source Apache 2 license, well suited to act as a layer-2 switch in virtual machine environments. It supports multiple Linux-based virtualization technologies, including Xen/XenServer, KVM, and VirtualBox.
It supports the OpenFlow protocol, so large-scale network automation is easy to implement programmatically, and it is widely used in SDN networks.
There are already plenty of articles on the architecture and internals, so I won't repeat them here; this post focuses on hands-on practice.
Installation

```shell
# Install docker
yum install -y docker-1.13.1

# Prerequisites
yum -y install wget openssl-devel gcc make python-devel openssl-devel kernel-devel graphviz \
  kernel-debug-devel autoconf automake rpm-build redhat-rpm-config libtool python-twisted-core \
  python-zope-interface PyQt4 desktop-file-utils libcap-ng-devel groff checkpolicy \
  selinux-policy-devel

# Install Open vSwitch (this version ships with ovs-docker)
yum install -y openvswitch-2.8.2-1.el7.x86_64
systemctl start openvswitch.service
systemctl is-active openvswitch
systemctl enable openvswitch
```

Single-host connectivity with OVS
Create the containers. Setting --net=none prevents the default docker0 bridge from interfering with the connectivity tests.
```shell
docker run -itd --name con6 --net=none ubuntu:14.04 /bin/bash
docker run -itd --name con7 --net=none ubuntu:14.04 /bin/bash
docker run -itd --name con8 --net=none ubuntu:14.04 /bin/bash
```
Create the bridge

```shell
ovs-vsctl add-br ovs0
```
Use ovs-docker to add a NIC to each container and attach it to the ovs0 bridge
```shell
ovs-docker add-port ovs0 eth0 con6 --ipaddress=192.168.1.2/24
ovs-docker add-port ovs0 eth0 con7 --ipaddress=192.168.1.3/24
ovs-docker add-port ovs0 eth0 con8 --ipaddress=192.168.1.4/24
```
Inspect the bridge
```shell
[root@controller /]# ovs-vsctl show
21e4d4c5-cadd-4dac-b025-c20b8108ad09
    Bridge "ovs0"
        Port "b167e3dcf8db4_l"
            Interface "b167e3dcf8db4_l"
        Port "f1c0a9d0994d4_l"
            Interface "f1c0a9d0994d4_l"
        Port "121c6b2f221c4_l"
            Interface "121c6b2f221c4_l"
        Port "ovs0"
            Interface "ovs0"
                type: internal
    ovs_version: "2.8.2"
```
Test connectivity
```shell
[root@controller /]# docker exec -it con8 sh
# ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.886 ms
^C
--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.886/0.886/0.886/0.000 ms
#
# ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.712 ms
^C
--- 192.168.1.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms
#
```

Setting VLAN tags
Inspect the bridge
```shell
[root@controller /]# ovs-vsctl show
21e4d4c5-cadd-4dac-b025-c20b8108ad09
    Bridge "ovs0"
        Port "b167e3dcf8db4_l"
            Interface "b167e3dcf8db4_l"
        Port "f1c0a9d0994d4_l"
            Interface "f1c0a9d0994d4_l"
        Port "121c6b2f221c4_l"
            Interface "121c6b2f221c4_l"
        Port "ovs0"
            Interface "ovs0"
                type: internal
    ovs_version: "2.8.2"
```
Inspect an interface
```shell
[root@controller /]# ovs-vsctl list interface f1c0a9d0994d4_l
_uuid               : cf400e7c-d2d6-4e0a-ad02-663dd63d1751
admin_state         : up
duplex              : full
error               : []
external_ids        : {container_id="con6", container_iface="eth0"}
ifindex             : 239
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current        : []
link_resets         : 1
link_speed          : 10000000000
link_state          : up
mac_in_use          : "96:91:0a:c9:02:d6"
mtu                 : 1500
mtu_request         : []
name                : "f1c0a9d0994d4_l"
ofport              : 3
other_config        : {}
statistics          : {collisions=0, rx_bytes=1328, rx_crc_err=0, rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=18, tx_bytes=3032, tx_dropped=0, tx_errors=0, tx_packets=40}
status              : {driver_name=veth, driver_version="1.0", firmware_version=""}
type                : ""
```
Set the VLAN tags
```shell
ovs-vsctl set port f1c0a9d0994d4_l tag=100   # con6
ovs-vsctl set port b167e3dcf8db4_l tag=100   # con8
ovs-vsctl set port 121c6b2f221c4_l tag=200   # con7
```
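The effect of these tags can be sketched as a simple forwarding predicate: an access port only exchanges frames with other access ports carrying the same tag. This is a toy model, not the OVS implementation; the port names are taken from this setup:

```python
# Minimal sketch of OVS access-port VLAN isolation: a frame entering an
# access port is only delivered to other access ports with the same tag.

def can_forward(tags: dict, src_port: str, dst_port: str) -> bool:
    """Return True if a frame from src_port may reach dst_port."""
    return src_port != dst_port and tags[src_port] == tags[dst_port]

tags = {
    "f1c0a9d0994d4_l": 100,  # con6
    "b167e3dcf8db4_l": 100,  # con8
    "121c6b2f221c4_l": 200,  # con7
}

print(can_forward(tags, "b167e3dcf8db4_l", "f1c0a9d0994d4_l"))  # con8 -> con6: True
print(can_forward(tags, "b167e3dcf8db4_l", "121c6b2f221c4_l"))  # con8 -> con7: False
```

This predicts the ping results below: con8 and con6 share tag 100 and can talk, while con7 sits alone in VLAN 200.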
Test connectivity
```shell
[root@controller /]# docker exec -it con8 sh
#
# ping 192.168.1.2 -c 3
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.413 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.057 ms

--- 192.168.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2044ms
rtt min/avg/max/mdev = 0.057/0.177/0.413/0.166 ms
#
# ping 192.168.1.3 -c 3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
From 192.168.1.4 icmp_seq=1 Destination Host Unreachable
From 192.168.1.4 icmp_seq=2 Destination Host Unreachable

--- 192.168.1.3 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2068ms
pipe 3
#
```

con8 (tag 100) can still reach con6 (tag 100), but can no longer reach con7 (tag 200).

Cross-host connectivity

Environment
host1:
Bridge: ovs0
Containers: con6 192.168.1.2, con7 192.168.1.3, con8 192.168.1.4
(created as shown above)

host2:
Bridge: ovs1
Container: con11
Prepare the environment
```shell
# Create the bridge
ovs-vsctl add-br ovs1
# Create the container
docker run -itd --name con11 --net=none ubuntu:14.04 /bin/bash
# Attach it to the ovs1 bridge
ovs-docker add-port ovs1 eth0 con11 --ipaddress=192.168.1.6/24
```
Inspect bridge ovs1
```shell
[root@compute82 /]# ovs-vsctl show
380ce027-8edf-4844-8e89-a6b9c1adaff3
    Bridge "ovs1"
        Port "0384251973e64_l"
            Interface "0384251973e64_l"
        Port "ovs1"
            Interface "ovs1"
                type: internal
    ovs_version: "2.8.2"
```

Set up the VXLAN tunnel
On host1
```shell
[root@controller /]# ovs-vsctl add-port ovs0 vxlan1 -- set interface vxlan1 type=vxlan options:remote_ip=172.29.101.82 options:key=flow
[root@controller /]#
[root@controller /]# ovs-vsctl show
21e4d4c5-cadd-4dac-b025-c20b8108ad09
    Bridge "ovs0"
        Port "b167e3dcf8db4_l"
            tag: 100
            Interface "b167e3dcf8db4_l"
        Port "f1c0a9d0994d4_l"
            tag: 100
            Interface "f1c0a9d0994d4_l"
        Port "121c6b2f221c4_l"
            tag: 200
            Interface "121c6b2f221c4_l"
        Port "ovs0"
            Interface "ovs0"
                type: internal
        Port "vxlan1"
            Interface "vxlan1"
                type: vxlan
                options: {key=flow, remote_ip="172.29.101.82"}
    ovs_version: "2.8.2"
```
On host2
```shell
[root@compute82 /]# ovs-vsctl add-port ovs1 vxlan1 -- set interface vxlan1 type=vxlan options:remote_ip=172.29.101.123 options:key=flow
[root@compute82 /]#
[root@compute82 /]# ovs-vsctl show
380ce027-8edf-4844-8e89-a6b9c1adaff3
    Bridge "ovs1"
        Port "0384251973e64_l"
            Interface "0384251973e64_l"
        Port "vxlan1"
            Interface "vxlan1"
                type: vxlan
                options: {key=flow, remote_ip="172.29.101.123"}
        Port "ovs1"
            Interface "ovs1"
                type: internal
    ovs_version: "2.8.2"
```

Set the VLAN tag
```shell
ovs-vsctl set port 0384251973e64_l tag=100
```

Test connectivity
```shell
[root@compute82 /]# docker exec -ti con11 bash
root@c82da61bf925:/# ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.161 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.206 ms
^C
--- 192.168.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
root@c82da61bf925:/#
root@c82da61bf925:/# ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
^C
--- 192.168.1.3 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2027ms
root@c82da61bf925:/#
root@c82da61bf925:/# exit
```

Conclusion
VXLAN alone only connects containers in the same subnet on the two hosts' OVS bridges; containers in different subnets still cannot reach each other. To connect containers across subnets, we will next try to solve the problem with OVS flow tables.
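One way to see why the same-subnet restriction exists: a container only sends directly (ARPing on the L2 segment that VXLAN extends) to destinations inside its own prefix; anything outside needs a matching route. A small sketch using Python's standard ipaddress module, with addresses from this setup:

```python
# Why same-subnet works over the VXLAN tunnel but cross-subnet does not:
# a host only sends directly to addresses inside its own prefix;
# anything else requires a route.
from ipaddress import ip_address, ip_network

def directly_reachable(src_cidr: str, dst: str) -> bool:
    """True if dst falls inside src's own subnet (no route needed)."""
    return ip_address(dst) in ip_network(src_cidr, strict=False)

# con11 on host2 is 192.168.1.6/24; con6 on host1 is 192.168.1.2
print(directly_reachable("192.168.1.6/24", "192.168.1.2"))  # True
# con9 on host2 is 192.168.2.2/24
print(directly_reachable("192.168.2.2/24", "192.168.1.2"))  # False
```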
OpenFlow flow tables

A switch that supports OpenFlow can contain multiple flow tables. Each flow table holds a set of rules, and each rule consists of match conditions and actions. Rules within a flow table have priorities: higher-priority rules are matched first, and when a packet matches a rule, the rule's actions are executed. If a rule does not match, matching continues down the priority order with the next rule. If no rule matches at all, each table has a default action, usually dropping the packet or passing it to the next flow table.
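The matching logic described above can be sketched in a few lines. This is a toy model rather than the OVS implementation, with the match condition simplified to a single in_port field:

```python
# Toy model of single-table OpenFlow matching: rules are tried in
# descending priority; the first match wins; otherwise the table-miss
# default applies (here: drop).

def apply_table(rules, packet_in_port, default="drop"):
    """rules: list of (priority, match_in_port, action) tuples."""
    for priority, in_port, action in sorted(rules, key=lambda r: -r[0]):
        if in_port is None or in_port == packet_in_port:
            return action
    return default

rules = [
    (1, 3, "output:4"),
    (1, 4, "output:3"),
]
print(apply_table(rules, 3))  # output:4
print(apply_table(rules, 5))  # drop (table miss)

# Adding a higher-priority drop rule overrides the forwarding rule,
# just as in the experiment later in this post.
rules.append((2, 4, "drop"))
print(apply_table(rules, 4))  # drop
```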
Practice

Environment

host1 172.29.101.123
Bridge: ovs0
Containers: con6 192.168.1.2 tag=100, con7 192.168.1.3 tag=100

host2 172.29.101.82
Bridge: ovs1
Containers: con9 192.168.2.2 tag=100, con10 192.168.2.3 tag=100, con11 192.168.1.5 tag=100

View the default flow table
View the default flow table on host1
```shell
[root@controller msxu]# ovs-ofctl dump-flows ovs0
 cookie=0x0, duration=27858.050s, table=0, n_packets=5253660876, n_bytes=371729202788, priority=0 actions=NORMAL
```
Ping con7 from inside con6: the network is connected
```shell
[root@controller /]# docker exec -ti con6 bash
root@9ccc5c5664f9:/#
root@9ccc5c5664f9:/# ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.613 ms
64 bytes from 192.168.1.3: icmp_seq=2 ttl=64 time=0.066 ms

--- 192.168.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1058ms
rtt min/avg/max/mdev = 0.066/0.339/0.613/0.274 ms
root@9ccc5c5664f9:/#
```
Delete the default flow table
```shell
[root@controller /]# ovs-ofctl del-flows ovs0
[root@controller /]#
[root@controller /]# ovs-ofctl dump-flows ovs0
[root@controller /]#
```
Test connectivity again: the network is now down
```shell
[root@controller /]# docker exec -ti con6 bash
root@9ccc5c5664f9:/#
root@9ccc5c5664f9:/# ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
^C
--- 192.168.1.3 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1025ms
root@9ccc5c5664f9:/#
```

Add flow rules
For con6 and con7 to communicate, we need to create rules that make OVS forward the corresponding traffic.
Find con6's and con7's ports on the OVS bridge
```shell
[root@controller /]# ovs-vsctl show
21e4d4c5-cadd-4dac-b025-c20b8108ad09
    Bridge "ovs0"
        Port "f1c0a9d0994d4_l"
            tag: 100
            Interface "f1c0a9d0994d4_l"
        Port "121c6b2f221c4_l"
            tag: 100
            Interface "121c6b2f221c4_l"
        Port "ovs0"
            Interface "ovs0"
                type: internal
        Port "vxlan1"
            Interface "vxlan1"
                type: vxlan
                options: {key=flow, remote_ip="172.29.101.82"}
    ovs_version: "2.8.2"
[root@controller /]# ovs-vsctl list interface f1c0a9d0994d4_l |grep ofport
ofport              : 3
ofport_request      : []
[root@controller /]#
[root@controller /]# ovs-vsctl list interface 121c6b2f221c4_l |grep ofport
ofport              : 4
ofport_request      : []
```
Add the rules:
```shell
[root@controller /]# ovs-ofctl add-flow ovs0 "priority=1,in_port=3,actions=output:4"
[root@controller /]# ovs-ofctl add-flow ovs0 "priority=1,in_port=4,actions=output:3"
[root@controller /]# ovs-ofctl dump-flows ovs0
 cookie=0x0, duration=60.440s, table=0, n_packets=0, n_bytes=0, priority=1,in_port="f1c0a9d0994d4_l" actions=output:"121c6b2f221c4_l"
 cookie=0x0, duration=50.791s, table=0, n_packets=0, n_bytes=0, priority=1,in_port="121c6b2f221c4_l" actions=output:"f1c0a9d0994d4_l"
[root@controller /]#
```
Test connectivity: con6 and con7 can now reach each other
```shell
[root@controller msxu]# docker exec -ti con6 bash
root@9ccc5c5664f9:/# ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.924 ms
64 bytes from 192.168.1.3: icmp_seq=2 ttl=64 time=0.058 ms
^C
--- 192.168.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1057ms
rtt min/avg/max/mdev = 0.058/0.491/0.924/0.433 ms
root@9ccc5c5664f9:/#
```
Now add a rule with a higher priority:
```shell
[root@controller /]# ovs-ofctl add-flow ovs0 "priority=2,in_port=4,actions=drop"
[root@controller /]#
[root@controller /]# docker exec -ti con6 bash
root@9ccc5c5664f9:/#
root@9ccc5c5664f9:/# ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
^C
--- 192.168.1.3 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2087ms
root@9ccc5c5664f9:/#
```
Rules in a flow table have priorities: the larger the priority value, the higher the priority. Higher-priority rules are matched first, and a matching rule's actions are executed. If a rule does not match, matching continues with the next rule down the priority order.
Cross-subnet connectivity

In the VXLAN practice above, the tunnel connected the OVS bridges on the two machines, but as we noted, containers on the two bridges had to be in the same subnet to communicate.
From con9 (192.168.2.2), ping con6 (192.168.1.2) on the other machine
```shell
[root@compute82 /]# docker exec -ti con9 bash
root@b55602aad0ac:/#
root@b55602aad0ac:/# ping 192.168.1.2
connect: Network is unreachable
root@b55602aad0ac:/#
```

Add flow rules:
On host1:
```shell
[root@controller /]# ovs-ofctl add-flow ovs0 "priority=4,in_port=6,actions=output:3"
[root@controller /]#
[root@controller /]# ovs-ofctl add-flow ovs0 "priority=4,in_port=3,actions=output:6"
[root@controller /]# ovs-ofctl dump-flows ovs0
 cookie=0x0, duration=3228.737s, table=0, n_packets=7, n_bytes=490, priority=1,in_port="f1c0a9d0994d4_l" actions=output:"121c6b2f221c4_l"
 cookie=0x0, duration=3215.544s, table=0, n_packets=0, n_bytes=0, priority=1,in_port="121c6b2f221c4_l" actions=output:"f1c0a9d0994d4_l"
 cookie=0x0, duration=3168.297s, table=0, n_packets=9, n_bytes=546, priority=2,in_port="121c6b2f221c4_l" actions=drop
 cookie=0x0, duration=12.024s, table=0, n_packets=0, n_bytes=0, priority=4,in_port=vxlan1 actions=output:"f1c0a9d0994d4_l"
 cookie=0x0, duration=3.168s, table=0, n_packets=0, n_bytes=0, priority=4,in_port="f1c0a9d0994d4_l" actions=output:vxlan1
```
On host2
```shell
[root@compute82 /]# ovs-ofctl add-flow ovs1 "priority=1,in_port=1,actions=output:6"
[root@compute82 /]#
[root@compute82 /]# ovs-ofctl add-flow ovs1 "priority=1,in_port=6,actions=output:1"
[root@compute82 /]# ovs-ofctl dump-flows ovs1
 cookie=0x0, duration=1076.522s, table=0, n_packets=27, n_bytes=1134, priority=1,in_port="0384251973e64_l" actions=output:vxlan1
 cookie=0x0, duration=936.403s, table=0, n_packets=0, n_bytes=0, priority=1,in_port=vxlan1 actions=output:"0384251973e64_l"
 cookie=0x0, duration=70205.443s, table=0, n_packets=7325, n_bytes=740137, priority=0 actions=NORMAL
```

Test connectivity
On host2, ping 192.168.1.2 from con9
```shell
[root@compute82 /]# docker exec -ti con9 bash
root@b55602aad0ac:/#
root@b55602aad0ac:/# ping 192.168.1.2
connect: Network is unreachable
root@b55602aad0ac:/#
```
The network is still not connected. Inspection shows the container's routing table is the problem, so add a default route. Note that you need to enter the container with --privileged here.
```shell
[root@compute82 /]# docker exec --privileged -ti con9 bash
root@b55602aad0ac:/#
root@b55602aad0ac:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
root@b55602aad0ac:/# route add default dev eth0
root@b55602aad0ac:/#
root@b55602aad0ac:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
root@b55602aad0ac:/#
```
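Why the default route fixes "connect: Network is unreachable" can be sketched as a longest-prefix-match lookup, a simplification of the kernel's routing table, using the routes from the route -n output above:

```python
# Longest-prefix-match route lookup, simplified. With only the connected
# 192.168.2.0/24 route, a packet to 192.168.1.2 matches nothing, hence
# "Network is unreachable". The 0.0.0.0/0 default route guarantees every
# destination at least one match.
from ipaddress import ip_address, ip_network

def lookup(routes, dst):
    """routes: list of (cidr, iface); return iface of the most specific match."""
    matches = [(ip_network(cidr), iface) for cidr, iface in routes
               if ip_address(dst) in ip_network(cidr)]
    if not matches:
        return None  # Network is unreachable
    return max(matches, key=lambda m: m[0].prefixlen)[1]

before = [("192.168.2.0/24", "eth0")]
after = [("0.0.0.0/0", "eth0"), ("192.168.2.0/24", "eth0")]

print(lookup(before, "192.168.1.2"))  # None -> unreachable
print(lookup(after, "192.168.1.2"))   # eth0
```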
After adding the route in the containers on both host1 and host2, test connectivity
```shell
[root@compute82 /]# docker exec --privileged -ti con9 bash
root@b55602aad0ac:/#
root@b55602aad0ac:/# ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=1.16 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.314 ms
^C
--- 192.168.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.314/0.739/1.165/0.426 ms
```
We have successfully connected containers in different subnets across two machines via OVS and VXLAN.