We recently upgraded our border router to a Cisco ASR 1001 and deployed it with a zone-based firewall. Soon after installation we received reports from around campus that downloads were being interrupted and failing. After some research we narrowed it down to HTTP downloads; FTP, SCP, etc. all worked properly. We worked with TAC and found out it was related to the classification and inspection of the HTTP protocol within the zone-based firewall, specifically the firewall's handling of out-of-order packets. The only known workaround is to create a secondary class-map that forces HTTP to be inspected as plain TCP.
Here's a basic version of what our outbound config looked like before:
ip access-list extended ACL_OUT
 permit tcp host xxx.xxx.xxx.xx1 any eq smtp
 permit tcp host xxx.xxx.xxx.xx2 any eq smtp
 deny tcp any any eq smtp
 permit tcp any any
!
class-map type inspect match-any CM_OUT
 match access-group name ACL_OUT
!
policy-map type inspect PM_OUT
 class type inspect CM_OUT
  inspect
 class class-default
  pass
This caused all protocols, including HTTP, to be inspected on all traffic leaving the network. The inbound config looked the same, except ACL_IN was a bit longer and specific to some of our internal servers (HTTP, HTTPS, SSH, etc.).
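For reference, a rough sketch of what that inbound side might look like (the server addresses and the CM_IN/PM_IN names here are placeholders for illustration, not our actual config):

ip access-list extended ACL_IN
 permit tcp any host xxx.xxx.xxx.xx3 eq www
 permit tcp any host xxx.xxx.xxx.xx3 eq 443
 permit tcp any host xxx.xxx.xxx.xx4 eq 22
!
class-map type inspect match-any CM_IN
 match access-group name ACL_IN
!
policy-map type inspect PM_IN
 class type inspect CM_IN
  inspect
 class class-default
  pass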
Here's what it looked like after:
ip access-list extended ACL_WEB_OUT
 permit tcp any any eq 80
!
class-map type inspect match-all CM_WEB_OUT
 match protocol tcp
 match access-group name ACL_WEB_OUT
We had to take HTTP and force it to be inspected as generic TCP, then make sure the new class-map was applied above the original one in the existing policy-map:
policy-map type inspect PM_OUT
 class type inspect CM_WEB_OUT
  inspect
 class type inspect CM_OUT
  inspect
 class class-default
  pass
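Neither snippet shows the zone-pair attachment, but for completeness, the policy-map hangs off the zone-pair roughly like this (the INSIDE/OUTSIDE/ZP_OUT names are made up, substitute your own):

zone security INSIDE
zone security OUTSIDE
!
zone-pair security ZP_OUT source INSIDE destination OUTSIDE
 service-policy type inspect PM_OUT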
Once we applied those changes, HTTP downloads immediately started working again. It took me about five minutes to rewrite my ACLs and apply the changes.
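If you want to confirm which class your web traffic is actually hitting after a change like this, the session and packet counters per class are handy (ZP_OUT being whatever your zone-pair is called):

show policy-map type inspect zone-pair ZP_OUT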
To me this seems like a major flaw that should be addressed immediately, but I was assured this workaround is how I'll need to keep my firewall running indefinitely, as there don't appear to be any plans to fix it. With the big push coming on NBAR2, I'm hesitant to trust the inspection and policing it will do if HTTP has this kind of bug in it.
On top of all of that, no one in TAC even knew about this issue until after almost two weeks of work and countless hours of the security team blaming NAT, the routing team blaming security, shift changes, call center changes, etc. I spent the last few days near my wits' end, working nearly 40 hours in two days on this particularly high-profile issue at my company. This is one of the few cases where I put my trust in a company to provide a product I'm used to getting, and they failed me, very visibly, in front of my boss, his boss, and the entire board. My confidence is going to be shaken for a while when a known issue with a major component of a major product isn't actually known within the support channels.