Once again it is obvious that there is no free lunch. Not even in networking technology. Saving money on routers and interfaces is often accompanied by reduced performance.

Here I discuss a solution to a simple problem – how to forward traffic between the global routing table (GRT) and a VRF within a single box, with some constraints:

  • systems in the VRF must also reach directly connected networks in the GRT
  • there is no BGP information about directly connected networks in the GRT, therefore VRF route leaking with BGP is not an option
  • no extra routers can be used
  • no additional ports can be used

GRE tunnels came to the rescue :-).
Here is how it can be done.

[Figure: test topology – server A in the VRF, router A (a Cat6500 with Sup2T) with back-to-back GRE tunnels Tun1 and Tun2 between the VRF and the GRT, and router C upstream toward the Internet]
Traffic from server A in the VRF toward the Internet and all other systems in the “blue” autonomous system A is forwarded via the GRE tunnel constructed on router A. We used TenGigabitEthernet interfaces in this setup on a Cisco Cat6500 equipped with a Supervisor 2T. Every packet that follows the arrowed line is encapsulated on interface Tun1, forwarded locally, and de-encapsulated again on Tun2 (this detour is depicted with the dashed line). A single ingress packet is therefore switched twice by the L3 forwarding engine (check the output below: you will find roughly 17,200 million packets leaving interface Tun1 and 34,400 million packets switched by the L3 forwarding engine, which is twice the number of packets entering the ingress interface Ten1/2).
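
For reference, here is a minimal sketch of how such a back-to-back tunnel pair could look. The tunnel numbers, loopbacks and addresses are taken from the interface outputs below; the VRF name A and the routes at the end are illustrative assumptions, not the exact configuration used in this test:

ip vrf A
!
! Tunnel endpoints; both loopbacks live in the global routing table
interface Loopback69
 ip address 1.2.3.4 255.255.255.255
!
interface Loopback96
 ip address 4.3.2.1 255.255.255.255
!
! VRF side: the inner traffic belongs to VRF A, while the GRE
! transport (source/destination lookup) stays in the GRT
interface Tunnel1
 ip vrf forwarding A
 ip address 100.0.0.1 255.255.255.252
 tunnel source Loopback69
 tunnel destination 4.3.2.1
!
! GRT side of the back-to-back tunnel
interface Tunnel2
 ip address 100.0.0.2 255.255.255.252
 tunnel source Loopback96
 tunnel destination 1.2.3.4
!
! Illustrative routing: send VRF traffic toward the GRT over Tunnel1,
! and point a (hypothetical) VRF prefix back over Tunnel2
ip route vrf A 0.0.0.0 0.0.0.0 Tunnel1 100.0.0.2
ip route 10.1.1.0 255.255.255.0 Tunnel2 100.0.0.1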

In our test we generate a constant 5 Gbps Ethernet flow from server A toward the Internet. We are sending approximately 69,150 IP packets per second, 9000 bytes each, plus an additional 18 bytes of Ethernet framing with CRC (the 8 bytes of preamble/SFD and the 12-byte interpacket gap are also counted within our 5 Gbps here). Let us check the traffic rates on all four interfaces along the path via router A – starting with Ten1/2 at the bottom, then Tun1, Tun2 and Ten1/1 at the top facing router C (non-relevant output erased for brevity):

r6500-sup2t#show int ten1/2
TenGigabitEthernet1/2 is up, line protocol is up (connected)
  Hardware is C6k 10000Mb 802.3, address is 001c.584b.6629 (bia 001c.584b.6629)
  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 127/255
  30 second input rate 4989029000 bits/sec, 69153 packets/sec
  30 second output rate 0 bits/sec, 0 packets/sec
     17191163336 packets input, 155029910838820 bytes, 0 no buffer
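
A quick sanity check on this rate: each 9000-byte packet occupies 9000 + 18 (Ethernet header and CRC) + 8 (preamble/SFD) + 12 (interpacket gap) = 9038 bytes of wire time, i.e. 72,304 bits per frame slot, and 5,000,000,000 ÷ 72,304 ≈ 69,153 frame slots per second – exactly the packet rate reported above.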

r6500-sup2t#show int tun1
Tunnel1 is up, line protocol is up
  Hardware is Tunnel
  Internet address is 100.0.0.1/30
  MTU 17868 bytes, BW 10000000 Kbit, DLY 50000 usec,
     reliability 255/255, txload 138/255, rxload 1/255
  Tunnel source 1.2.3.4 (Loopback69), destination 4.3.2.1
  Tunnel protocol/transport GRE/IP
  Tunnel transport MTU 1490 bytes
  30 second input rate 0 bits/sec, 0 packets/sec
  30 second output rate 5446714000 bits/sec, 75448 packets/sec
     17190901997 packets output, 155130617643346 bytes, 0 underruns

r6500-sup2t#show int tun2
Tunnel2 is up, line protocol is up
  Hardware is Tunnel
  Internet address is 100.0.0.2/30
  MTU 17868 bytes, BW 10000000 Kbit, DLY 50000 usec,
     reliability 255/255, txload 1/255, rxload 115/255
  Tunnel source 4.3.2.1 (Loopback96), destination 1.2.3.4
  Tunnel protocol/transport GRE/IP
  Tunnel transport MTU 1490 bytes
  30 second input rate 4540221000 bits/sec, 62890 packets/sec
  30 second output rate 0 bits/sec, 0 packets/sec
     17191172250 packets input, 155133056406418 bytes, 0 no buffer

r6500-sup2t#show int ten1/1
TenGigabitEthernet1/1 is up, line protocol is up (connected)
  Hardware is C6k 10000Mb 802.3, address is 000f.f88a.d080 (bia 000f.f88a.d080)
  Internet address is 37.0.0.1/30
  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 127/255, rxload 1/255
  30 second input rate 0 bits/sec, 0 packets/sec
  30 second output rate 4984578000 bits/sec, 69091 packets/sec
  L3 in Switched: ucast: 0 pkt, 0 bytes - mcast: 0 pkt, 0 bytes
  L3 out Switched: ucast: 17191326955 pkt, 155031386480190 bytes - mcast: 0 pkt, 0 bytes
     17191694480 packets output, 155034661077494 bytes, 0 underruns

r6500-sup2t#show platform hardware statistics module 5
 --- Hardware Statistics for Module 5 Earl 0---

L2 Forwarding Engine
    Switched in L2 : 51578500669 @ 207539 pps

L3 Forwarding Engine
    Processed in L3 : 38553538 @ 207538 pps
    Switched in L3 : 34383700843 @ 138353 pps

    Bridged : 176165
    FIB Switched
        IPv4 Ucast : 23962475
    ACL Routed
        Input : 0
        Output : 11996969
    Netflow Switched
        Input : 0
        Output : 0
    Exception Redirected
        Input : 0
        Output : 0
    Mcast Bridge Disable & No Redirect
                   : 0
    Total packets with TOS Changed : 0
    Total packets with TC Changed : 0
    Total packets with COS Changed : 133333
    Total packets with EXP Changed : 0
    Total packets with QOS Tunnel Encap Changed : 11981242
    Total packets with QOS Tunnel Decap Changed : 11981241
    Total packets dropped by ACL : 0
    Total packets dropped by Policing : 0
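
Cross-checking the counters (they are not sampled at exactly the same instant, hence the small deviation): twice the 17,191,163,336 packets that entered Ten1/2 is 34,382,326,672, matching the 34,383,700,843 packets “Switched in L3” above, and the 138,353 pps L3 switching rate is likewise roughly twice the 69,153 pps ingress rate.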

The statistics confirm that we can deliver 5 Gbps one way via the tunnel on the Cat6500 with Sup-2T. When the packet size is reduced from jumbo 9000-byte frames to smaller ones, say 512 bytes, performance drops to approximately 3.6 Gbps. 5 Gbps is the absolute maximum we can achieve, and jumbo frames are required to reach it. The screenshot from the test instrument shows this – the flat line on the graph on the right displays the throughput on the egress interface, while the statistics from the ingress interface are shown on the left:

[Figure: test instrument screenshot – egress throughput graph (right) and ingress interface statistics (left)]
To summarise – to forward traffic between the global routing table and a VRF on the same box, GRE tunnels will save you the additional cost of interfaces and routers, but only one half of the throughput can be achieved, say 5 Gbps instead of the full 10 Gbps.
