Bits and Chaos

Between bits and chaos, a sysadmin stands.

Certificate Patrol can really save your pocket

Certificate Patrol is a nice add-on for Firefox: it monitors all SSL connections and checks, every time a secure connection is established, whether the certificate presented by the site has changed since your last visit. This is extremely useful for spotting a man-in-the-middle attack.
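
If you ever want to do the same check by hand, openssl can show you the issuer and fingerprint of the certificate a site is currently serving (the hostname below is just an example):

client> openssl s_client -connect webmail.example.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -issuer -fingerprint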

To give you an idea: my university has a webmail service, which I use a lot. A couple of days ago I accessed this service from work, and Certificate Patrol popped up a warning message.

The message is a bit cryptic, but the meaning is clear if you know how to read it: the Certification Authority vouching for the authenticity of the site I was using had changed, and was no longer Cybertrust. So I ran to the operations office and told them we were under attack, only to discover that they were running a test, using some web proxy (I cannot tell you the name) to inspect all the SSL connections. Of course, it was just a test, but Certificate Patrol really did its job, alerting me that something strange was happening on the network.

It’s interesting to note that, just before the message appeared, I was temporarily unable to access the webmail: I thought they were experiencing problems, while in fact it was operations reconfiguring the web proxy. When I was finally able to access the webmail, Firefox told me (with its standard message) that the connection to the website used an untrusted certificate, and my first thought was that the webmail at the university had been restarted and the certificate had somehow changed, so I clicked, clicked and clicked again to tell Firefox that I was willing to accept the risks.

In fact, I did a stupid thing, because I should not accept, at least not so easily, that a website changes its certificate to something not issued by a known CA: without Certificate Patrol I would have been unaware of what was really happening.

And if you think you would never experience anything like this, because you always refuse certificates from an unknown CA, you’d better read Law Enforcement Appliance Subverts SSL and Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL, where another Firefox plugin addressing this kind of vulnerability is presented.

Filed under: Uncategorized

HTTP can no longer be used for authenticated web sites

If you are a user of a web site that requires authentication (which means, basically, every site), you usually access it from a network you don’t control: you don’t know, among many other things, which DNS server the infrastructure guy has chosen and which version it’s running. This means you can be exposed to the well-known Dan Kaminsky DNS hijacking attack (you can actually check for this).
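
For example, DNS-OARC publishes a test record you can query to get an idea of how random your resolver’s source ports are (assuming the service is still online and dig is installed):

client> dig +short porttest.dns-oarc.net TXT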

Leveraging this vulnerability (there are still plenty of DNS servers that haven’t been fixed), it’s possible to mount a man-in-the-middle attack at the application level and steal the cookies of your authenticated HTTP session: ladies and gentlemen, please welcome CookieMonster. You are exposed even if your login page is protected via HTTPS, because the auth cookie is passed in cleartext in every subsequent HTTP interaction.
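
On the application side, the mitigation is to mark the session cookie as Secure (and ideally HttpOnly), so the browser will never send it over plain HTTP; the response header would look something like this (cookie name and value are made up):

Set-Cookie: session_id=abc123; Secure; HttpOnly; Path=/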

This worst case scenario requires a flawed DNS implementation (or rather, a DNS implementation that follows the original, flawed DNS protocol), so you can be reasonably safe if you always control your DNS, or at least have some trust in the people operating it; but if you are a roaming user you are completely exposed.

So, being a competent Linux user, you can fix this in a very simple way: install a local caching DNS server and use, as your primary DNS, something you can trust.
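
A minimal sketch on a RHEL-style box, assuming you use the stock BIND caching-nameserver package (package names vary between distributions and releases, and other resolvers like dnsmasq work just as well):

client> yum install bind caching-nameserver
client> service named start
client> chkconfig named on
client> echo "nameserver 127.0.0.1" > /etc/resolv.conf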

If you cannot do this, you should ask your web application provider to fix the issue (some have already done so; for example, you can force all WordPress administration pages to be accessed only via HTTPS, and I’m writing this blog entry over HTTPS, so it works).
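
For WordPress this is a one-line change in wp-config.php (FORCE_SSL_ADMIN covers the whole admin area; older releases only had FORCE_SSL_LOGIN for the login page):

define('FORCE_SSL_ADMIN', true);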

If you are a system administrator, you should check and, if needed, fix your DNS implementation, and probably take a look at an SSL accelerator, because your connection peers (i.e. users accessing web sites under your control) may come from all sorts of insecure networks. My 2 cents: this man-in-the-middle attack will only be the first of a new kind based on the interaction of different levels of the TCP/IP stack.

Filed under: network, security

Squid over SSH

At work we need external access to some intranet sites that are not visible on the Internet, for both deployment architecture and security reasons.

I manage this with an SSH tunnel and a Squid proxy accessed over it. Security is my first requirement for this kind of remote access, and this configuration maximizes it.

Configuring SSH server on the gateway

First, I set up an externally accessible SSH server that will act as the gateway, configuring the sshd running on it to refuse root logins and to allow TCP port forwarding and gateway ports. sshd will listen on a non-standard port, to avoid automated attacks from bots:

from /etc/ssh/sshd_config on the gateway:

Port 12345   # pick a different, less obvious number for your configuration
PermitRootLogin no
AllowTcpForwarding yes
GatewayPorts yes
X11Forwarding yes

On this gateway I have only one user account, remoteaccess, with no password set. This means that access is only possible with SSH keys. After these modifications, restart the SSH server.
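
Creating such an account could look like this (a sketch: passwd -l locks the password, so only key-based logins remain possible):

gateway> useradd -m remoteaccess
gateway> passwd -l remoteaccess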

On each trusted client, I create a key pair:

client> ssh-keygen -t dsa

and the resulting public key, $HOME/.ssh/id_dsa.pub, has to be added to the authorized keys of the remoteaccess user on the gateway machine:

gateway> cat id_dsa.pub >> /home/remoteaccess/.ssh/authorized_keys
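
If you want to be stricter, OpenSSH lets you prefix the key in authorized_keys with options; a sketch that forbids PTYs and only permits forwarding towards the local Squid port (the destination host must match what the client asks for in its -L option):

no-pty,no-agent-forwarding,no-X11-forwarding,permitopen="localhost:3128",permitopen="127.0.0.1:3128" ssh-dss AAAA...rest-of-the-public-key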

If you are paranoid, you can limit the number of connection attempts to the gateway machine with some iptables rules. Add to /etc/sysconfig/iptables:

-A INPUT -s WHITELISTNETWORK1 -p tcp -m tcp --dport 12345 -j ACCEPT
-A INPUT -s WHITELISTNETWORK2 -p tcp -m tcp --dport 12345 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 12345 -m state --state NEW -m recent --update --seconds 180 --hitcount 3 --name DEFAULT --rsource -j DROP
-A INPUT -p tcp -m tcp --dport 12345 -m state --state NEW -m recent --set --name DEFAULT --rsource -j ACCEPT

If you add these rules, restart iptables for the new configuration to take effect. The proposed rules drop an IP when it originates more than 3 new connections in 180 seconds (note that the port must match the non-standard one sshd is listening on). The check happens at the TCP level, so even successful, legitimate connections are counted: this is why it’s good to whitelist some IPs (like localhost) or networks (like 10.0.0.0/8 or whatever you use).

In my experience this specific hardening is not required: having your SSH server listen on a non-standard port is enough to get rid of the ordinary bots, while more sophisticated attacks will be stopped by the key-only access. But if you are really paranoid, you can also put port knocking in front of your SSH server.
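
As a sketch of what that could look like with knockd (the knock sequence, timings and log path are made up for illustration), /etc/knockd.conf:

[options]
    logfile = /var/log/knockd.log

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 12345 -j ACCEPT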

Remember that this gateway must always receive critical security fixes promptly. It can be a good idea to run a different operating system from the rest of your infrastructure (e.g. a BSD flavour if you are a Linux shop) because, at the cost of some extra management and administration effort, you avoid having the same zero-day security flaw hit all of your systems (this is not guaranteed, because you may end up running the same software on different Unixes, so evaluate the pros and cons carefully).

Check that the SSH connection is working:

client> ssh -p 12345 remoteaccess@gateway

It works if you get a shell on the gateway without being asked for a password.

Configuring Squid

Squid will run on the gateway, accepting proxy requests only from clients running on the gateway machine itself. This can be accomplished in a few different ways:

  • configuring squid to accept only requests from 127.0.0.1;
  • defining an iptables rule that stops foreign requests;
  • configuring the external firewall to stop external requests for Squid.

As defense in depth is a good security practice, it’s better to do all of them: should one fall apart, the others will save the day.

The iptables rules:

gateway> iptables -A INPUT -p tcp -m tcp --source 127.0.0.1 --dport 3128 -j ACCEPT
gateway> iptables -A INPUT -p tcp -m tcp --dport 3128 -j DROP

After defining them, save the new iptables configuration (service iptables save) and restart the service (service iptables restart). As a result, only TCP connections originating from 127.0.0.1 to port 3128 (the standard Squid port; there is no need to use a non-default port here) are accepted.

Squid is configured in the (big) /etc/squid/squid.conf file. The critical parameters are:

acl INTRANET dstdomain .intranet.example.com critical.example.com
http_access allow INTRANET
http_access deny all

These three lines define an ACL named INTRANET that allows proxying of HTTP requests for the critical.example.com server and the *.intranet.example.com domain. These rules must be added (in Squid the order of configuration directives matters) after this point:

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks. Adapt
# to list your (internal) IP networks from where browsing should
# be allowed
#acl our_networks src 192.168.1.0/24 192.168.2.0/24
#http_access allow our_networks

Note that if you want defense in depth, uncomment the last two lines and define a suitable value for our_networks (e.g. acl our_networks src 127.0.0.1/32).

The configuration above is for RHEL 4.x, which ships Squid 2.4. If you have RHEL 5.x, you get Squid 2.5, which allows for password authentication. In that case, use these configuration directives:

auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/password.txt
auth_param digest realm Example.com Intranet Access
auth_param digest children 2
auth_param digest nonce_max_duration 8 hours
auth_param digest nonce_max_count 1000
acl AUTHUSER proxy_auth REQUIRED
acl INTRANET dstdomain .intranet.example.com critical.example.com

http_access allow INTRANET AUTHUSER
http_access deny all

The file /etc/squid/password.txt is simply one line:

username:password

Use more lines if you want to distinguish between users (more authentication schemes are available for Squid). Now (re)start Squid on the gateway.

Client configuration

First, create the SSH tunnel:

client> ssh -l remoteaccess -N -T -p 12345 -L 3128:localhost:3128 gateway-hostname-or-public-ip-address

so a connection to port 3128 on the localhost will be tunnelled over SSH to port 3128 on the gateway, where Squid is eagerly listening.
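
If the tunnel has to stay up for a whole working day, a tool like autossh can restart it automatically when it drops; a sketch, assuming autossh is installed on the client:

client> autossh -M 0 -o ServerAliveInterval=30 -N -T -p 12345 -L 3128:localhost:3128 remoteaccess@gateway-hostname-or-public-ip-address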

Then we need to instruct the client’s browser to send requests for the intranet domains to port 3128 on localhost, using a Proxy Auto-Configuration (PAC) file.

A PAC file example is:

function FindProxyForURL(url, host)
{
  if (shExpMatch(host, "*intranet.example.com"))
    return "PROXY 127.0.0.1:3128";

  if (shExpMatch(host, "critical.example.com"))
    return "PROXY 127.0.0.1:3128";

  return "DIRECT";

}

This file can be put on a web server where all the clients can fetch it, or distributed via e-mail; it’s not security sensitive.

Once all these steps are done, you can access your intranet over SSH, which means you can work even on Sundays.

Filed under: rhel