Several years ago I wrote an article about the Woes of Using an ASA as a Default Gateway. I have received a lot of feedback on that post and recently had a request for an update covering ASA releases after 8.3. When building this scenario out with current ASA code, I found that the base NAT configuration (Internet-only PAT) had no bearing on the hairpin configuration. As expected, I found the same challenge around state bypass. I wanted to share a current post that demonstrates the challenges and solutions when traffic is bounced off the inside interface of the ASA.
The requirements of the configuration are as follows:
- TestHost must be able to Telnet and Ping to Internet and PartnerHost
- The inside interface of asav-1 must be the default gateway for TestHost
- asav-1 is doing PAT for Internet destined traffic
- PartnerRTR and PartnerHost have been preconfigured as shown above
The following are the base configurations for all of the devices. The configuration of asav-1 does not allow communication from TestHost to PartnerHost (the 192.168.1.0/24 network).
hostname TestHost
!
interface GigabitEthernet2
 description to iosvl2-1
 ip address 10.1.1.5 255.255.255.0
!
ip route 0.0.0.0 0.0.0.0 10.1.1.100
hostname asav-1
!
interface GigabitEthernet0/0
 description to iosvl2-1
 nameif inside
 security-level 100
 ip address 10.1.1.100 255.255.255.0
!
interface GigabitEthernet0/1
 description to Internet
 nameif outside
 security-level 0
 ip address 203.0.113.1 255.255.255.0
!
object network obj_any
 subnet 0.0.0.0 0.0.0.0
!
object network obj_any
 nat (inside,outside) dynamic interface
!
class-map inspection_default
 match default-inspection-traffic
!
route outside 0.0.0.0 0.0.0.0 203.0.113.2 1
!
policy-map global_policy
 class inspection_default
  inspect ip-options
  inspect netbios
  inspect rtsp
  inspect sunrpc
  inspect tftp
  inspect xdmcp
  inspect ftp
  inspect h323 h225
  inspect h323 ras
  inspect rsh
  inspect esmtp
  inspect sqlnet
  inspect sip
  inspect skinny
  inspect http
  inspect dns migrated_dns_map_1
  inspect icmp
!
hostname Internet
!
interface Loopback1
 ip address 198.51.100.1 255.255.255.255
!
interface GigabitEthernet2
 description to asav-1
 ip address 203.0.113.2 255.255.255.0
hostname PartnerRTR
!
interface GigabitEthernet2
 description to iosvl2-1
 ip address 10.1.1.200 255.255.255.0
!
interface GigabitEthernet3
 description to PartnerHost
 ip address 192.168.1.1 255.255.255.0
hostname PartnerHost
!
interface GigabitEthernet2
 description to PartnerRTR
 ip address 192.168.1.2 255.255.255.0
!
ip route 10.1.1.0 255.255.255.0 192.168.1.1
The obvious challenge for this configuration is that traffic sourced from TestHost and destined to 192.168.1.0/24 must be redirected by asav-1 to the next hop of 10.1.1.200. Those familiar with the ASA know that, by default, traffic entering an interface cannot leave out that same interface. However, there is an override command that allows this to occur. So the obvious configuration changes are as follows.
ASA Initial Changes
route inside 192.168.1.0 255.255.255.0 10.1.1.200
same-security-traffic permit intra-interface
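Before testing from the hosts, the hairpin path can be sanity-checked on asav-1 itself with packet-tracer. This output was not captured in the lab, but the command below simulates a Telnet SYN from TestHost to PartnerHost (192.168.1.2 in this lab's addressing); the route-lookup phase should show the new inside route, and the simulated packet should be allowed back out the inside interface.

asav-1# packet-tracer input inside tcp 10.1.1.5 11111 192.168.1.2 23

Keep in mind that packet-tracer only simulates the initial packet of a flow, so a clean result here does not guarantee the return path works.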
At this point, many would expect the requirements to be met. Let’s go ahead and test to see what happens.
//connection to the Internet looks good
TestHost#ping 198.51.100.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 198.51.100.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 7/10/16 ms
TestHost#telnet 198.51.100.1
Trying 198.51.100.1 ... Open

User Access Verification

Username: cisco
Password:
Internet#exit

//connection to the PartnerHost isn't working exactly as expected
//ping is working
TestHost#ping 192.168.1.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/12/16 ms

//telnet isn't working
TestHost#telnet 192.168.1.2
Trying 192.168.1.2 ...
% Connection timed out; remote host not responding
Just for craps and giggles, let’s also see what we can see from PartnerHost to TestHost.
//surprisingly, even ICMP isn't working from here
PartnerHost#ping 10.1.1.5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.1.5, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
PartnerHost#telnet 10.1.1.5
Trying 10.1.1.5 ...
% Connection timed out; remote host not responding
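When traffic silently dies inside an ASA, a couple of commands can help confirm where. I did not capture this output in the lab, but repeating the failed tests while watching the accelerated security path drop counters and the connection table on asav-1 should show the stateful checks at work: half-built connections for TestHost and TCP state-check drop counters incrementing.

asav-1# show asp drop
asav-1# show conn address 10.1.1.5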
When we back up and think about the traffic flow, this starts to make sense. We know that the ASA is a stateful device and cares about things like TCP flags (SYN, ACK, RST). It is also apparent from the original policy-map that it cares about ICMP state (notice 'inspect icmp'). In other words, there should be no Echo-Reply without the ASA first seeing an Echo.
Moreover, the traffic flow is asymmetric, which prevents the ASA from seeing both directions of a flow between the local subnet and the Partner network. Outbound traffic flows from TestHost to asav-1, then to PartnerRTR, before arriving at PartnerHost. Return traffic from PartnerHost flows through PartnerRTR and then directly to TestHost (PartnerRTR and TestHost are directly connected at layer 3).
Supporting this theory, outbound ICMP Echoes from TestHost to PartnerHost worked as expected: the returning Echo-Replies were delivered without the ASA ever seeing them or having a chance to drop them. A ping from PartnerHost to TestHost, however, fails because TestHost delivers its Echo-Reply through the ASA, and the ASA has not seen the accompanying Echo packets. This theory can be tested by making the ASA stateless as it relates to ICMP.
policy-map global_policy
 class inspection_default
  no inspect icmp
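To confirm the change took effect, the active policy can be checked on asav-1. I did not capture this output, but after the change above, ICMP should no longer be listed among the inspects under the inspection_default class:

asav-1# show service-policy global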
Now we can test ping from PartnerHost.
PartnerHost#ping 10.1.1.5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.1.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/14/30 ms
So that sort of proves our theory, but what about the Telnet traffic? Since this is TCP, there is more state. Even the outbound flows aren't working, and many may wonder why. The simple answer, again, is that asav-1 is seeing only the outbound half of each flow. Normally it would see SYN, SYN-ACK, ACK. In this case, it is seeing SYN....ACK. So how do we solve this for the TCP traffic? The answer to this question is TCP State Bypass.
TCP State Bypass Configuration
access-list STATEBYPASS extended permit ip any 192.168.1.0 255.255.255.0
!
class-map STATEBYPASS
 match access-list STATEBYPASS
!
policy-map STATEBYPASS
 class STATEBYPASS
  set connection advanced-options tcp-state-bypass
!
service-policy STATEBYPASS interface inside
!
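The policy attachment can be verified on asav-1, and once a Telnet session is up, the bypassed connection should appear in the connection table. This output was not captured in the lab, but assuming the addressing used here, the commands would look like:

asav-1# show service-policy interface inside
asav-1# show conn address 192.168.1.2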
This can be tested by attempting to Telnet to PartnerHost.
//now it seems to work
TestHost#telnet 192.168.1.2
Trying 192.168.1.2 ... Open

User Access Verification

Username: cisco
Password:
PartnerHost#
It would seem that the requirements are met. However, on closer examination, we find that we can no longer ping Internet.
TestHost#ping 198.51.100.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 198.51.100.1, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
This is important because it demonstrates the need to account for anything removed from the global_policy policy-map. Recall that we removed inspect icmp, so anywhere global_policy applies, that change must be considered. If we simply add the inspection back, PartnerHost will again be unable to get an Echo-Reply from TestHost. Although the stated requirements might technically be met, I wanted to address this for the article.
This problem can be solved by keeping the ICMP inspection outside of the global_policy. The same approach may be necessary with other protocols. Basically, we move the inspection into another policy and qualify it with source and destination information.
access-list INSPECTICMP extended deny icmp any 192.168.1.0 255.255.255.0
access-list INSPECTICMP extended permit icmp any any
!
class-map INSPECTICMP
 match access-list INSPECTICMP
!
policy-map STATEBYPASS
 //class STATEBYPASS should exist from the previous steps
 class STATEBYPASS
  set connection advanced-options tcp-state-bypass
 class INSPECTICMP
  inspect icmp
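packet-tracer can also confirm which class each ICMP flow now hits on the inside interface. Using this lab's addressing (198.51.100.1 for the Internet loopback, 192.168.1.2 for PartnerHost), an Echo toward Internet should match the INSPECTICMP class and be inspected, while an Echo toward PartnerHost should be excluded by the deny entry and pass with no ICMP inspection:

asav-1# packet-tracer input inside icmp 10.1.1.5 8 0 198.51.100.1
asav-1# packet-tracer input inside icmp 10.1.1.5 8 0 192.168.1.2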
To test this, I will ping both Internet and PartnerHost from TestHost. I will also ping TestHost from PartnerHost.
//from TestHost
TestHost#ping 198.51.100.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 198.51.100.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 5/7/13 ms
TestHost#ping 192.168.1.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 7/9/13 ms
TestHost#

//from PartnerHost
PartnerHost#ping 10.1.1.5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.1.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/11/22 ms
PartnerHost#
As we can see, all of the requirements are met, but I want to share some additional thoughts. First, this is an ugly design. We are quite literally designing FOR asymmetrical flows. This type of corner-case configuration creates serious operational challenges and increases the risk of downtime. It would be much better to use PartnerRTR as the first-hop default gateway, or to connect it to a dedicated interface or subinterface on asav-1. Additional thought should also be given if redundancy is a requirement.
Click here for the VIRL download for this article.
Disclaimer: This article includes the independent thoughts, opinions, commentary or technical detail of Paul Stewart. This may or may not reflect the position of past, present or future employers.