Bits and Chaos

Between bits and chaos, a sysadmin stands.

NYSE will move to Linux

The NY Times reports that the New York Stock Exchange is investing heavily in Linux. Technically speaking:

  • they prefer Linux over IBM AIX and HP-UX, and they have HP hardware (Opteron blades and Integrity);

  • they will use HP OpenView for monitoring;

  • they are already using Solaris on some systems, but prefer to move to a more open system;

  • they do not like virtualization, as it introduces non-negligible latency.

Using Linux (in the Red Hat or SuSE flavour, I guess; note that Red Hat is listed on the NYSE) will have a big impact on financial institutions world-wide, as the volume of data and money processed by the NYSE is simply enormous. NYSE is saying that Linux is mature enough, although it is not as polished as other Unix dialects. Some features are missing (as an example, backup of an entire system, a la mksysb in IBM AIX), but these things can usually be managed at the infrastructure level: if your system is completely redundant, you can take small portions of it offline and do the planned maintenance tasks. The redundancy will cost money (which you would possibly pay in any case), but it’s entirely possible that it would be cheaper than paying the high licenses for other Unixes, especially when, in the long term, you would find yourself locked in to a vendor.

Latency in virtualization is a big problem, better known as OS interference (see this good article from IBM), that severely impacts performance on HPC clusters and could be disruptive for financial trading systems, where you receive hundreds of different data feeds per second and must decide what to buy or sell, according to some heuristics, before your competitors do. It will be interesting to see what the Red Hat MRG platform will do in this field. As usual, we are at the beginning of the open source world.

Filed under: virtualization

Mongrel integration for RHEL, Fedora and derivatives: new release

I have made a new release of my script for controlling mongrel instances in RHEL, Fedora and derivatives.

The most important change is that now you can selectively choose which instances to start or stop. To do so, after the start or stop directive you can add a prefix: each filename starting with that prefix (and ending with .conf, as usual) will be processed.

As an example, if you have these instance description files in your /etc/mongrel:

testsite.internal.example.com.conf
newapp.internal.example.com.conf
newportal.internal.example.com.conf
fileserver.extranet.example.com.conf

a command line like service mongrel start test will start each instance described by a filename matching test*.conf, so for the above example you’ll start the instance(s) described in testsite.internal.example.com.conf. In the same way:

  • service mongrel start new will start instances described in newapp.internal.example.com.conf and newportal.internal.example.com.conf;
  • service mongrel stop newapp will stop the instance(s) described in newapp.internal.example.com.conf (newportal.internal.example.com.conf won’t be touched).

If you don’t specify any prefix, the command will be applied to all files. Note that service mongrel status will ignore any prefix.
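The matching logic boils down to something like this (a minimal sketch, not the actual script; the /etc/mongrel directory is from above, while driving instances through mongrel_rails cluster is an assumption about the internals):

CONFDIR=/etc/mongrel
ACTION=$1   # start or stop
PREFIX=$2   # optional filename prefix

for conf in "$CONFDIR/$PREFIX"*.conf; do
    [ -e "$conf" ] || continue                   # no file matches: skip
    mongrel_rails cluster::"$ACTION" -C "$conf"  # drive this instance
done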

Click on the link to download the mongrel service script.

EDITED: The script is available here, but you may be better off using mod_passenger, which embeds a Ruby interpreter in an Apache or NGINX web server.
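If you go the mod_passenger route, the setup is typically along these lines (a sketch, assuming you install Passenger from RubyGems; your distribution may package it differently):

# Install Phusion Passenger and compile its Apache module; the
# installer prints the LoadModule/PassengerRoot lines to add to
# your Apache configuration.
gem install passenger
passenger-install-apache2-module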

Filed under: fedora, rhel, ruby on rails

Upgrading from Fedora 6 to Fedora 8 via yum

I’ve successfully performed an upgrade from Fedora 6 64-bit to Fedora 8 64-bit; this wouldn’t be news except that, to make things spicier, I chose to perform this upgrade via yum.

I read that Anaconda has some trouble upgrading from previous releases to Fedora 8 (it hangs with packages from different repositories) and, more importantly, I am too lazy to download a whole DVD and prefer doing things the fancy way, so I decided to go the yum way.

First, I installed the Fedora release RPM (see here), then I eagerly launched yum to see if it was capable of handling such a massive upgrade (yes, I’m doing this in the spirit of testing). Yum found that 1350+ packages had to be downloaded, for a grand total of 1.4 GB of new software. After the download, it started the upgrade process, made of 2700+ steps.
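For the record, the whole procedure boils down to a few commands (a sketch; the fedora-release filename below is a placeholder for whatever Fedora 8 actually ships):

# Point the system at the Fedora 8 repositories, then upgrade everything.
rpm -Uvh fedora-release-8-*.noarch.rpm
yum clean all
yum -y upgrade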

This wasn’t without some harshness, because yum needed some help from me: I had to manually remove some packages that conflicted with the Fedora 8 packages yum was pulling in. These were mostly packages from third-party repositories that I installed once and then forgot about.

Unfortunately, yum got stuck at around step 1950, I guess because I submitted a simple rpm -qa statement from another console: it’s sad to say, but still today we – inhabitants of the RPM world – are experiencing deadlocks. So I chose to remove all the packages from Fedora 6, and I defined this set as “all the packages with .fc6 in their name”. Under this definition I removed keyutils-libs, and as a result, after the reboot, the SSH server, the X server and yum all failed to start.
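Reconstructed, the removal was something along these lines (a sketch, and a dangerous one: --nodeps skips dependency checks, which is exactly how an essential library like keyutils-libs gets removed by accident):

# List every package whose tag contains .fc6, then force-remove them.
rpm -qa | grep '\.fc6' | xargs rpm -e --nodeps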

Yum reported a cryptic message complaining about a missing SHA256 module; luckily for me, the SSH server startup failed with a more informative string. I installed the package, fixed the entry in /etc/fstab (no more /dev/hd<n> for us! Labels everywhere) and now I’m posting about it.

(Thinking about it, I figured that the best way to find no-longer-used packages is the package-cleanup --orphans command.)

First impressions: Firefox (which is now the 64-bit application shipped with Fedora, not the 32-bit application downloaded from the site as I used to have) is really faster, and I have the distinct feeling that the whole desktop is more responsive. Getting help for a 64-bit installation (which is somewhat less mainstream) is as easy as googling for it and going to the Fedora 64 web site.

Two years ago the landscape was completely different, but Red Hat is no longer the leader in the desktop market; Ubuntu holds the sceptre. This is a problem for Red Hat, because the boys that are using Linux on their desktops today are the men that will use Linux on their servers tomorrow (think of Microsoft). They need to regain popularity, and it seems to me that they are headed in the right direction.

Filed under: fedora

How to manipulate the files stored inside a Xen virtual machine

Assume you have created a Xen virtual machine, and its disk is stored in a file. You might want to access the files stored inside the machine (while the machine is not running) for one reason or another:

  • inspect them;
  • change some of them; as an example, you copy the virtual machine image file to another physical host and you need to change some settings (e.g. network-related) in order to fit the virtual machine to the new host;
  • do some post-mortem analysis and/or recovery.

If you search the net, you may find this document from IBM that will give you some hints, assuming that the virtual machine does not use LVM. I will move a step further, with the procedure for dealing with a virtual machine image file containing an LVM partition.

For the rest of the discussion, I assume you know what a loop-backed file system is and how LVM works. Instead of saying “virtual machine file”, i.e. the file on the host that is the disk of the virtual machine, I will say “virtual disk”.


1. Be sure the virtual machine is shut down

This is required to avoid metadata corruption.


2. Use losetup to associate the virtual disk to a block device

$ losetup /dev/loop0 virtual-disk

3. Use fdisk to get information about the block device

$ fdisk -l -u /dev/loop0

Disk /dev/loop0: 8388 MB, 8388608000 bytes
255 heads, 63 sectors/track, 1019 cylinders, total 16384000 sectors
Units = sectors of 1 * 512 = 512 bytes

      Device Boot      Start         End      Blocks   Id  System
/dev/loop0p1   *          63      208844      104391   83  Linux
/dev/loop0p2          208845    16370234     8080695   8e  Linux LVM

This shows that the virtual machine has a tiny initial partition (probably /boot) and an LVM partition.

4. Use losetup, again, to associate a specific partition of the virtual disk to a specific block device.

If you want the /boot partition:

$ losetup -o $((512*63)) /dev/loop1 /dev/loop0

$ mount /dev/loop1 /mnt

the magic here is the -o parameter, which defines the offset from the beginning of /dev/loop0: 63 sectors of 512 bytes each, as seen above in the fdisk output. If you want the files in the LVM partition, see the next step.

5. Accessing LVM inside a loop device.

$ losetup -o $((208845*512)) /dev/loop2 /dev/loop0

To scan the volume groups and the logical volumes, you need to tell LVM that the /dev/loop<n> devices may contain LVM data. So, edit /etc/lvm/lvm.conf and find the line that defines the “types” parameter:

# types = [ "fd", 16 ]

it’s a comment, showing the default value. Immediately after, put this line:
types = [ "fd", 16, "loop", 1 ]

and see what happens:

$ lvscan

  ACTIVE            '/dev/system/root' [14.62 GB] inherit
  ACTIVE            '/dev/system/home' [97.66 GB] inherit
  ACTIVE            '/dev/system/tmp' [512.00 MB] inherit
  ACTIVE            '/dev/system/swap' [4.00 GB] inherit
  inactive          '/dev/VolGroup00/LogVol00' [5.75 GB] inherit
  inactive          '/dev/VolGroup00/LogVol01' [1.94 GB] inherit

So we can eventually access LogVol00, which contains the data (someone tells me that LogVol01 is a swap partition):

$ vgchange -a y

 4 logical volume(s) in volume group "system" now active
 2 logical volume(s) in volume group "VolGroup00" now active

$ mount /dev/VolGroup00/LogVol00 /mnt/

Now the files of the virtual machine are accessible from the /mnt mount point.

6. Unmount all

$ umount /mnt
$ vgchange -a n VolGroup00
$ losetup -d /dev/loop2
$ losetup -d /dev/loop1
$ losetup -d /dev/loop0

Remove the types parameter from /etc/lvm/lvm.conf if you feel you won’t need it anymore (if it’s left in place it does not hurt, though).

7. Some notes

  • Remember that the LVM namespace is unique across the system, so your virtual machine must use volume group and logical volume names different from those of the physical host;
  • If you need to change the UUID of the virtual disk, see the uuidgen.py from the Xen distribution for some hints.
  • A different approach, based on kpartx, is available in the Fedora Wiki; see the sketch below.
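For completeness, here is how the kpartx approach replaces steps 2 and 4 (a sketch, assuming kpartx is installed; it reads the partition table of the loop device and creates one device-mapper node per partition):

$ losetup /dev/loop0 virtual-disk
$ kpartx -av /dev/loop0            # creates /dev/mapper/loop0p1, loop0p2, ...
$ mount /dev/mapper/loop0p1 /mnt   # e.g. the /boot partition

and the cleanup:

$ umount /mnt
$ kpartx -d /dev/loop0
$ losetup -d /dev/loop0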

Filed under: virtualization

Remotize your desktop: NX

Sometimes you are lucky enough to encounter a piece of software so great that you realize the future is already here.

This is one of my all-time favourite pieces of software: the NX server from Nomachine. It’s basically a secure terminal server for Linux, with amazing speed: working remotely over an ADSL line, you barely notice the difference between the local desktop and the remote desktop.

From time to time, the experienced system administrator may find him/herself needing to use a graphical tool on a remote server (the junior administrator, instead, always wants to use a graphical tool for system administration). The first way to get this is the ssh -X command, but the speed of the remote desktop is quite low: the X Window System was designed for LAN environments, whilst in a WAN scenario round-trip times are no longer negligible.

The very experienced system administrator can bypass this problem by using VNC, tunnelling it over a secure connection, and this can give a speed gain. But it requires some work: installing the VNC server on the remote server, securing it, defining an X startup script, creating an SSH tunnel.
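For reference, the manual VNC recipe looks roughly like this (a sketch; display :1, the port numbers and the hostname are assumptions):

# On the remote server: start a VNC server on display :1 (TCP port 5901),
# bound to localhost only.
$ vncserver :1 -localhost

# On the client: forward local port 5901 to the server's 5901 over SSH,
# then point the viewer at the local end of the tunnel.
$ ssh -N -L 5901:localhost:5901 user@remote.example.com &
$ vncviewer localhost:5901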

This is where NX comes into play. You install it on the server (you need the NX client, NX server and NX node packages; they are available for all the major distributions), and then, by using the NX client on your client machine, you get:

  • near local speed;
  • encryption of all traffic;
  • session suspending and resuming.

the latter being a real advantage over the traditional ssh -X solution. NX is released free as in beer; the free server can accept a maximum of two users at the same time, and if you need more power and want advanced features like server statistics and node balancing you can buy the premium versions.

NX is in a market niche where the behemoth is Citrix Metaframe. To gain market share, they were smart enough to release the NX libraries under the GPL license. A completely GPL’ed server is being developed by the FreeNX project; I played with it some months ago and found that it lacked some functionality (as an example, there were problems with a Microsoft Terminal Server client running inside a remote desktop session), but things may have changed, so you can give it a try.

There’s only one thing to check after installing the NX server: if you have an iptables firewall on the remote server with rules that govern incoming SSH traffic, be sure to add a rule like this one:

iptables -I INPUT 1 -p tcp --source 127.0.0.1 --dport 22 -j ACCEPT

this is required because NX does a proxy authentication with the remote SSH server. If you omit this rule, the NX connection will stall at the “Downloading session information” phase.

With such a tool, I was able to administer a scientific cluster from home, installing software that requires a graphical screen (like Matlab). I’m planning to install a Network Management Server like Nagios or OpenNMS and to make it listen for incoming HTTP requests only from 127.0.0.1: I will connect to the remote machine via NX and then launch a remote Firefox browser to reach the local web server. With such a scheme, I can connect to the remote machine using a certificate or port knocking or both, and even if the Network Management Server had a security flaw, it wouldn’t be listening on the Internet.
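The idea, in a nutshell (a sketch; the Listen directive is Apache’s, and the /nagios/ path is an assumption depending on which NMS you install):

# On the management host, bind the web server to loopback only,
# e.g. in httpd.conf:
#     Listen 127.0.0.1:80
# Then, from inside an NX session on that same host:
$ firefox http://127.0.0.1/nagios/ &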

Congratulations Nomachine! And thanks for bringing us this great tool.

Filed under: virtualization

The day that Ruby was as fast as PHP

The main objection against Ruby on Rails is about its performance, especially when compared to PHP. Although it’s cheaper to add new computational power (or lease it from services like Amazon EC2) than to pay for more developers, this objection has some real basis. Ruby 1.8.x is not really fast, and this is a problem for every framework built on top of it, such as RoR, but things are changing, as these benchmarks from Antonio Cangiano show:

  • Ruby 1.9 is 2-3 times faster than Ruby 1.8.6;
  • JRuby is now as fast as Ruby 1.8.6, so you can get all the power of Ruby on a Java application server;
  • There’s a lot of room for improvement, as some other projects (XRuby, Rubinius) indicate.

Filed under: ruby on rails

Why we need to deploy IPv6 inter-networks, and why we need it now

As I have a research background, I have always found IPv6 to be an excellent protocol, because it solves so many problems that IPv4 has and allows us to build new infrastructures and services on top of it.

IPv6 deployment is minimal at best, and even in the research community there is resistance. I submitted an article to a workshop describing an architecture that benefits greatly from using IPv6 as the transport protocol: using IPv4 as the transport is affordable only for customers that can afford to pay a lot of money for dedicated circuits, whereas the adoption of IPv6 solves the specific problem (of which I won’t say more, sorry) for everyone, by doing the correct resource allocation at the proper level. The article was rejected because, among other things, it uses IPv6, which seems exotic rather than an urgent necessity.

I’m working on an expanded version of the article, which I’ll submit to a more network-aware conference, but I see this rejection as a clear indication that some people, even the supposedly tech-savvy, do not realize that we are going to hit a wall, and hit it badly.

Articles are beginning to appear in the specialized media suggesting that people start thinking about switching; this one reports the opinion of John Curran, on the ARIN Board over the last decade. He rates the problem as complex as the Y2K problem, and he shifts (part of) the burden of the solution from the service providers (who, I guess, could temporarily accept a balkanized Internet) to the content providers: social sites cannot accept being segregated to serve only a part of the Internet’s users, and as they are valued in the order of billions, they could give some help or, at least, push for the adoption of the IPv6 model. Suggest this article to your web content manager when you request some IPv6-related budget.

People more interested in the mathematical model can worry themselves by reading the details and begin the countdown: we have between two and four years.

Filed under: network