This post is part of a series based on a presentation I gave at the Scottish Ruby Conference in May 2013 (part 1 here), covering defense in depth and some of the controls companies should be looking at to help protect themselves when something goes wrong.
The first segment to cover is firewalling. Network firewalls get quite a bit of flak in the security world, mainly because people tend to rely too heavily on them for protection without really understanding where they are and are not useful.
The “low-risk” setup option I covered is the use of egress filtering on firewalls.
One of the main limitations I see in practical firewall deployments is that they don’t take a “default deny” position on all interfaces. In the typical Internet-facing firewall setup, almost everyone will have a default deny rule from the untrusted network (e.g. the Internet) to the more trusted network (e.g. an internal network), but in many cases the other direction (from internal to Internet) will have a default allow rule set up.
Setting a default deny on connections from trusted to untrusted networks can be a really useful control: it makes an attacker’s life more difficult and hinders their post-exploitation activities. In an e-commerce environment, for example, it might be possible to have rules on the firewall that block all servers from initiating any connections to the Internet except to a couple of hosts used for package updates. This means that someone who has access to a server and tries to connect to any other system on the Internet will be blocked.
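The e-commerce example above could be sketched with iptables on a Linux gateway. This is a minimal illustration, not a complete ruleset: the interface names (`eth0` for the Internet side, `eth1` for the server LAN) and the update-host addresses (documentation-range placeholders) are assumptions you would substitute for your own.

```shell
# Default deny for all forwarded traffic — nothing crosses the
# firewall unless a later rule explicitly allows it
iptables -P FORWARD DROP

# Allow return traffic for connections that are already established
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Servers (eth1) may initiate connections only to the two
# package-update hosts (placeholder addresses from 192.0.2.0/24)
iptables -A FORWARD -i eth1 -o eth0 -d 192.0.2.10 -p tcp --dport 443 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -d 192.0.2.11 -p tcp --dport 443 -j ACCEPT

# Any other outbound connection attempt from the servers falls
# through to the DROP policy above
```

Because the default policy is DROP in both directions, a compromised server in this sketch can’t reach an arbitrary Internet host even though the legitimate update traffic still flows.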
If you consider an attack on a web application, once the attacker has compromised a server (e.g. via SQL injection or command injection), one of the first things they might try to do is make a connection back to a system under their control to download more tools and to establish a shell connection to the compromised system. With egress filtering in place, this becomes considerably trickier to pull off.
If you do intend to do this, I would recommend putting it in place when you’re designing the network. Retro-fitting more restrictive firewall rules can be quite difficult: things like periodic connections that only happen once a month might not be noticed, leading to unexpected failures after the firewall rules have been put in place.
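One way to reduce the risk when retro-fitting (my suggestion, not something from the talk) is to run a log-only phase first: record the outbound connections the planned ruleset would block and review them over a full business cycle before enforcing anything. A sketch with iptables, again assuming `eth1` is the server LAN and `eth0` faces the Internet:

```shell
# Phase 1: log new outbound connections that the future default-deny
# would catch, but don't drop anything yet
iptables -A FORWARD -i eth1 -o eth0 -m conntrack --ctstate NEW \
  -j LOG --log-prefix "EGRESS-CANDIDATE: "

# Review /var/log/kern.log (or wherever your syslog sends kernel
# messages) for at least a month so rare periodic jobs show up,
# add explicit ACCEPT rules for the legitimate flows, and only
# then switch the chain policy to enforcement:
#   iptables -P FORWARD DROP
```

The month-plus observation window is exactly to catch the once-a-month connections the post warns about.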
The “high-risk” setup option looks at the area of network segregation. One of the setups I’ve seen quite commonly is that only one firewall is used, with all Internet-facing systems in a single DMZ and all back-end systems on either the internal network or perhaps in another single DMZ network. It’s a setup I call the “warm smarty” approach to security: crunchy on the outside but soft and gooey once you get past the shell.
The problem with this approach is that once an attacker has compromised a single server, it’s much easier for them to attack other systems in the environment and expand their access. The reality is that most internal networks are pretty easy for a dedicated attacker to compromise, as there’s always some system somewhere that doesn’t get patched, so once they’re in, it’s pretty much game over.
Addressing this isn’t cheap or easy, but effective network segmentation can make attackers’ lives much more difficult.
There are a variety of approaches that can be used for network segmentation. One is to place each Internet-facing application in its own DMZ. This can reduce the risk of onward compromise, although it does depend on the firewall ruleset being suitably restrictive.
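As a sketch of what “suitably restrictive” might look like between DMZs — all interface names, addresses, and ports here are illustrative assumptions, not a recommended ruleset:

```shell
# App 1 lives in DMZ1 (eth1), app 2 in DMZ2 (eth2), back-end
# databases in a third segment (eth3). Neither DMZ should be able
# to initiate connections into the other's segment.
iptables -A FORWARD -i eth1 -o eth2 -j DROP
iptables -A FORWARD -i eth2 -o eth1 -j DROP

# Each DMZ may talk only to its own back-end database, on the one
# port that application actually needs, and nothing else
iptables -A FORWARD -i eth1 -o eth3 -d 10.0.3.10 -p tcp --dport 5432 -j ACCEPT
iptables -A FORWARD -i eth2 -o eth3 -d 10.0.3.11 -p tcp --dport 3306 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth3 -j DROP
iptables -A FORWARD -i eth2 -o eth3 -j DROP
```

With rules like these, compromising the app in DMZ1 doesn’t give the attacker a free path to the app in DMZ2 or to the other application’s database, which is the onward-compromise risk the segmentation is meant to contain.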
This approach will obviously increase management costs, for example by requiring more management servers and potentially allowing less automation of maintenance. So there is a trade-off between the desired level of security and the cost involved, but it’s something that should be considered rather than just going for the default one-firewall approach.