I’ve been leveraging VIRL for some time to build and test self-contained labs. I’ve always known that there was some ability to connect to the world outside of this environment. Recently, I decided to configure this functionality and I wanted to take just a moment to share what I found.
First and foremost, this isn’t anything difficult or time-consuming. So if you have a need to connect physical devices to your VIRL deployment, don’t hesitate before building it out.
There are two mechanisms for outside connectivity. The first is called SNAT. This method builds static NAT translations into and out of the environment. I get how this could be beneficial, but I would typically prefer to keep any NAT configuration contained to an environment that I am very familiar with (possibly an ASA or IOS instance outside the lab when an additional NAT layer is required).
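To illustrate the general idea behind static NAT, here is a sketch of what a one-to-one translation looks like on a standalone IOS router. This is purely illustrative with made-up addresses and interface names, and is not the configuration VIRL generates for SNAT:

```
! Illustrative one-to-one static NAT on an external IOS router
! (hypothetical addresses; not VIRL's SNAT configuration)
ip nat inside source static 10.255.0.10 192.168.100.10
!
interface GigabitEthernet0/0
 description Facing the lab environment
 ip address 10.255.0.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet0/1
 description Facing the physical network
 ip address 192.168.100.1 255.255.255.0
 ip nat outside
```

With a configuration like this, the lab device at 10.255.0.10 would be reachable from the outside as 192.168.100.10, which is the same style of in/out mapping SNAT provides.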
The second method, and the configuration we will be testing, is called FLAT. In this configuration, VIRL bridges a lab device and an Ethernet interface into a single L2 broadcast domain. In my example I am running the VIRL components in a VM environment on ESXi, so this is a virtual interface that needs to be mapped through the VMware vSwitch.
To test this functionality, I created the following topology.
From a VIRL perspective, there are a couple of things to be aware of. The default configuration has VIRL owning the subnet, with the external default gateway existing at 172.16.1.1.
This configuration can be found in User Workspace Management in the VIRL Server -> System Configuration section (in the Networks tab).
Using the above defaults, the only thing necessary on the physical (external) network devices is to build out a VLAN with a default gateway that matches the indicated Gateway IP address. The VM environment needs to be mapped through to the physical network. In my case, I built a VLAN 10 and assigned 172.16.1.1/24 on a Meraki Firewall. From a VMware perspective, ‘eth1’ seemed to be presented as ‘Network adapter 2’.
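In my case this was configured in the Meraki dashboard, but on a Catalyst-style IOS switch the equivalent would look something like the following sketch. The VLAN number and gateway address match the example above; the trunk interface name is hypothetical:

```
! Hypothetical IOS equivalent of the gateway VLAN built on the Meraki
vlan 10
 name VIRL-FLAT
!
interface Vlan10
 description Default gateway for the VIRL flat network
 ip address 172.16.1.1 255.255.255.0
!
interface GigabitEthernet1/0/1
 description Trunk toward the ESXi host carrying VLAN 10
 switchport mode trunk
 switchport trunk allowed vlan 10
```

The key point is simply that the VLAN carrying the flat network must be trunked (or access-mapped) all the way to the vSwitch port group that backs ‘eth1’.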
It is also worth noting that flat networks seem to require promiscuous mode (on the security tab above).
VM Attachment to ‘flat’
At this point a test environment can be created by building a topology with any device connected to the General type ‘L2 External (FLAT)’. In this example, I attached a CSR1000v to ‘flat-1’.
For verification, a simple test can be performed from the csr1000v.
csr1000v-1(config)#interface GigabitEthernet2
csr1000v-1(config-if)#ip address 172.16.1.62 255.255.255.0
csr1000v-1(config-if)#exit
csr1000v-1(config)#ip route 0.0.0.0 0.0.0.0 172.16.1.1
csr1000v-1(config)#exit
csr1000v-1#ping 172.16.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms
csr1000v-1#ping 188.8.131.52
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 184.108.40.206, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/54/95 ms
From a VIRL monitoring standpoint, there is some useful information that is provided in the “Connectivity” section of User Workspace Manager.
My recommendation for putting this into practice is to build out a dedicated VLAN. I would also allow VIRL to own the IP addressing, with the exception of the default gateway. If there is a need for other external IP addresses on the connected subnet, I would start with 2–49 in the last octet. Anything else might require adjusting the address pool in the Flat Network settings.
Configuring external connectivity with VIRL isn’t overly complex. There is just a need to think through what the physical and virtual environments should look like and how they relate. This article should serve as a simple starting point for those who have a need for this type of functionality.
Disclaimer: This article includes the independent thoughts, opinions, commentary or technical detail of Paul Stewart. This may or may not reflect the position of past, present or future employers.