Bits and Chaos


Between bits and chaos, a sysadmin stands.

How to be dishonest and live happy

It’s simple: write something like this.

The bottom line is: Debian is far more secure than RHEL and Fedora, not for technical reasons but because of its development model. When Debian’s openssl was compromised, they immediately issued a warning and told their users what to do, whilst Red Hat and Fedora were obscure, pointless and corporate-minded.

Dude, you are forgetting that it’s entirely possible that Debian’s openssl security bug was patient zero, and that the actual compromise of Red Hat’s servers could have started from a stolen passkey. You are also forgetting that, Red Hat being a corporation with some billions in cash (of course, they have so much money because there are plenty of stupid people like me who pay them for their services), they were forced to work closely with law enforcement agencies once such an intrusion occurred, and when the FBI reaches the crime scene they are not primarily interested in sending an e-mail to the mailing lists to say “hey, we are here to save the day!”.

Filed under: oss, rhel

Bonding, aliasing and natting

Scenario: you want to connect a LAN to another one. The connection should be easy to enable and disable.

At work we have a training and examination classroom with its own IP addressing schema. This LAN must be disconnected from the rest of the infrastructure when exams take place (people should not be allowed to access the Internet to find answers to questions), but we still need to be able to update the clients’ operating systems when needed.

To do so, we have a classroom server that acts as NAT, DHCP and DNS server for the computers in the classroom. As availability is critical, we have grouped its two NICs into a bonding interface. We have defined a bond0 interface, with an address on the external LAN, and a bond0:0 alias, with an address on the classroom LAN.
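For reference, on RHEL the bonding-plus-alias setup boils down to a few ifcfg files. The addresses below are placeholders from the documentation ranges, adapt them to your two LANs:

```ini
# /etc/modprobe.conf -- load the bonding driver for bond0
alias bond0 bonding

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 is analogous)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- address on the external LAN
DEVICE=bond0
IPADDR=192.0.2.10
NETMASK=
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0:0 -- alias on the classroom LAN
DEVICE=bond0:0
IPADDR=
NETMASK=
ONBOOT=yes
```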

Then we have these rules for iptables:

iptables -t nat -A POSTROUTING -o bond0 -j SNAT --to-source EXTERNAL_IP

EXTERNAL_IP is the IP by which every client of the classroom appears outside of it.

Alternatively, if the external address is assigned dynamically, you can use masquerading instead:

iptables -t nat -A POSTROUTING -o bond0 -j MASQUERADE

To allow for IP Forwarding, we need to do this:

echo 1 > /proc/sys/net/ipv4/ip_forward

(this can be made persistent across reboots by adding net.ipv4.ip_forward = 1 to /etc/sysctl.conf). The connection can then be disabled with

service iptables stop

and enabled with

service iptables start

Note that iptables rules don’t deal with interface aliases: they just take the bare interface name. So here we are doing bonding and aliasing at the same time, and it works 🙂
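Since the whole on/off switch is just the iptables service, a tiny wrapper saves remembering which service to poke (the function name is my own invention, purely a convenience over the commands above):

```shell
#!/bin/sh
# classroom_net: enable/disable classroom connectivity by toggling iptables.
classroom_net() {
  case "$1" in
    on)  service iptables start ;;
    off) service iptables stop ;;
    *)   echo "usage: classroom_net on|off" >&2; return 2 ;;
  esac
}
```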

Of course, this configuration is in no way complex, but it has the property that I always forget it, so I’m writing it down on the blog to find it easily when needed.

Filed under: rhel

Squid over SSH

At work we need to externally access some intranet sites that are not visible on the Internet, for both architectural and security reasons.

I manage this with an SSH tunnel and Squid access over it. Security is my first requirement for this kind of remote access, and this configuration will maximize it.

Configuring SSH server on the gateway

First, I set up an externally accessible SSH server that will act as a gateway, configuring the sshd running on it to refuse root logins and to allow TCP port forwarding and gateway ports. sshd will listen on a non-standard port, to avoid automated attacks from bots:

from /etc/ssh/sshd_config on the gateway:

# pick a more unusual number for your own configuration
Port 12345
PermitRootLogin no
AllowTcpForwarding yes
GatewayPorts yes
X11Forwarding yes

On this gateway I have only one user account, remoteaccess, with no password set. This means access is only possible with SSH keys. After these modifications, restart the SSH server.

On each trustworthy client, I create a key pair:

client> ssh-keygen -t dsa

and the resulting public key, $HOME/.ssh/, has to be added to the authorized keys for the remoteaccess user on the gateway machine:

gateway> cat >> /home/remoteaccess/.ssh/authorized_keys

If you are paranoid, you can limit the number of tentative connections made on the gateway machine with some iptables rules. Add to /etc/sysconfig/iptables:

-A INPUT -s WHITELISTNETWORK1 -p tcp -m tcp --dport 12345 -j ACCEPT
-A INPUT -s WHITELISTNETWORK2 -p tcp -m tcp --dport 12345 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 12345 -m state --state NEW -m recent --update --seconds 180 --hitcount 3 --name DEFAULT --rsource -j DROP
-A INPUT -p tcp -m tcp --dport 12345 -m state --state NEW -m recent --set --name DEFAULT --rsource -j ACCEPT

If you add these rules, restart iptables for the new configuration to take effect. The proposed rules drop an IP address when it originates more than 3 connections in 180 seconds. Note that this counting happens at TCP level, so even successful and legitimate connections are counted. This is why it’s good to whitelist some IPs (like localhost) or trusted networks (like your office LAN).

In my experience, this specific hardening is not required: having your SSH server listen on a non-standard port is enough to shake off the standard bots, while more sophisticated attacks will be stopped by the key-only access. But if you are really paranoid, you can also add port-knocking to your SSH server.

Remember that this gateway must always be kept up to date with critical security fixes. It can be a good idea to run a different operating system on it than on the rest of your infrastructure (e.g. a BSD flavour if you are a Linux shop) because, at the cost of some extra management and administration effort, you avoid having the same zero-day security flaw hit all of your systems (this is not necessarily true, since you may end up running the same software on different Unixes, so evaluate pros and cons carefully).

Check that SSH connection is working:

client> ssh -p 12345 remoteaccess@gateway

It works if you get access to the gateway without being asked for a password.

Configuring Squid

Squid will run on the gateway, accepting proxy requests only from clients running on the gateway machine itself. This can be accomplished in a few different ways:

  • configuring Squid to accept only requests from localhost;
  • defining an iptables rule that stops foreign requests;
  • configuring the external firewall to stop external requests for Squid.

As defense in depth is a good security practice, it’s better if you do all of these: should one of them fall apart, the others will save the day.

The iptables rules:

gateway> iptables -A INPUT -p tcp -m tcp --source 127.0.0.1 --dport 3128 -j ACCEPT
gateway> iptables -A INPUT -p tcp -m tcp --dport 3128 -j DROP

After defining them, save the new iptables configuration (service iptables save) and restart the service (service iptables restart). As a result, only TCP connections originated from localhost to port 3128 (the standard Squid port; there is no need for a non-default port here) are accepted.

Configuration for Squid is made on the (big) /etc/squid/squid.conf file. Critical parameters are:

acl INTRANET dstdomain .intranet.example.com
http_access allow INTRANET
http_access deny all

These three lines define an acl named INTRANET that allows proxying of HTTP requests for your intranet domain (.intranet.example.com is a placeholder, use your own). These rules must be added after this point (in Squid, the order of configuration directives is critical):


# Example rule allowing access from your local networks. Adapt
# to list your (internal) IP networks from where browsing should
# be allowed
#acl our_networks src
#http_access allow our_networks

Note that if you want defense in depth, uncomment those last two lines too, defining a suitable value for our_networks: here our_networks src is enough, because legitimate requests come out of the SSH tunnel and so appear to originate locally.

The configuration above is for RHEL 4.x, which ships Squid 2.4. If you have RHEL 5.x, you have Squid 2.5, which allows for password authentication. In that case, use these configuration rules:

auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/password.txt
auth_param digest realm Intranet Access
auth_param digest children 2
auth_param digest nonce_max_duration 8 hours
auth_param digest nonce_max_count 1000
acl AUTHUSER proxy_auth REQUIRED
acl INTRANET dstdomain .intranet.example.com
http_access allow INTRANET AUTHUSER
http_access deny all

The file /etc/squid/password.txt is made of simple username:password lines, one per user, such as:

alice:s3cret

Use more lines if you want to discriminate between users. (More authentication schemas are available for Squid.) Now (re)start Squid on the gateway.
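Adding users from the shell can be sketched with a small helper (the file path is parameterized so you can test it elsewhere; digest_pw_auth reads plain user:password lines):

```shell
#!/bin/sh
# add_proxy_user: append a "user:password" line to the digest password file.
# PWFILE defaults to the path used in squid.conf above.
PWFILE=${PWFILE:-/etc/squid/password.txt}

add_proxy_user() {
  user="$1"; pass="$2"
  printf '%s:%s\n' "$user" "$pass" >> "$PWFILE"
  chmod 600 "$PWFILE"   # keep the passwords away from prying eyes
}
```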

Client configuration

First, create the SSH tunnel:

client> ssh -l remoteaccess -N -T -p 12345 -L 3128:localhost:3128 gateway-hostname-or-public-ip-address

so a connection to port 3128 of the localhost will be SSH-tunnelled to port 3128 of the gateway, where Squid is eagerly listening.
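Since I never remember the exact flags, a sketch that keeps host and port in one place can help (GATEWAY and SSH_PORT are my own parameter names; the function only prints the command, nothing runs until you eval it):

```shell
#!/bin/sh
# tunnel_cmd: print the SSH tunnel command with the parameters used in this post.
GATEWAY=${GATEWAY:-gateway-hostname-or-public-ip-address}
SSH_PORT=${SSH_PORT:-12345}

tunnel_cmd() {
  echo "ssh -l remoteaccess -N -T -p $SSH_PORT -L 3128:localhost:3128 $GATEWAY"
}
# open the tunnel with:  eval "$(tunnel_cmd)"
```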

Then we need to instruct the client’s browser to proxy requests for the intranet domain to port 3128 on localhost, with a Proxy Auto-Configuration (PAC) file.

A PAC file example (host names are placeholders, use your own intranet domain) is:

function FindProxyForURL(url, host) {
  if (shExpMatch(host, "*.intranet.example.com"))
    return "PROXY localhost:3128";

  if (shExpMatch(host, "intranet"))
    return "PROXY localhost:3128";

  return "DIRECT";
}

This file can be put on a web server where all the clients can fetch it, or distributed via e-mail: it’s not security sensitive.

After all these steps are done, you will access your intranet via SSH, which means you could work even on Sundays.

Filed under: rhel

Mongrel integration for RHEL, Fedora and derivatives: new release

I have made a new release of my script for controlling mongrel instances in RHEL, Fedora and derivatives.

The most important change is that now you can selectively choose which instances to start or stop. After the start or stop directive you can add a prefix: each filename starting with that prefix (and ending with .conf, as usual) will be processed.

As an example, suppose you have these instance description files in your /etc/mongrel: testapp.conf, newapp.conf and newportal.conf.

A command line such as service mongrel start test will start each instance described by a filename matching test*.conf, so in the above example you’ll start the instance(s) described in testapp.conf. In the same way:

  • service mongrel start new will start the instances described in newapp.conf and newportal.conf;
  • service mongrel stop newapp will stop the instance(s) described in newapp.conf (newportal.conf won’t be touched);

If you don’t specify any prefix, the command will be applied to all files. Note that service mongrel status ignores any prefix.
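The prefix matching described above amounts to a glob over the configuration directory; a minimal sketch (CONFDIR is parameterized for testing, and the real script does more than just listing files):

```shell
#!/bin/sh
# match_instances: list the configuration files a given prefix would select.
CONFDIR=${CONFDIR:-/etc/mongrel}

match_instances() {
  prefix="$1"
  # an empty prefix selects every *.conf file in the directory
  for f in "$CONFDIR/$prefix"*.conf; do
    [ -e "$f" ] && echo "$f"
  done
}
```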

Click on the link to download the mongrel service script.

EDITED: The script is available here, but you are better off using mod_passenger, which embeds a Ruby interpreter in an Apache or NGINX web server.

Filed under: fedora, rhel, ruby on rails

Integration of Mongrel with RHEL and derivatives

(EDITED: You probably live a lot better using mod_passenger)

Mongrel is the Ruby on Rails application server. It’s not included in RHEL 4.x nor 5.x, so I wrote this little script to have it managed by the service command. If you’re used to the Apache server, Mongrel is a bit different: it’s only an application server, and it’s single-threaded. So if you want more than one instance running, you need to explicitly decide how many of them you want and on which port each will listen (one instance per port). Mongrel works best as the back-end; on the front-end put Apache, which will do a much better job there. The script described in this blog entry takes care of automatically starting/stopping these instances; integration with Apache will be described in another entry.

Read the rest of this entry »

Filed under: rhel, ruby on rails