Bits and Chaos


Between bits and chaos, a sysadmin stands.

Too bad we didn’t think of it before

The NYT has the definitive solution to the very complex problem of digital signatures.

Wondering who’s gonna be the first one to patent it.


Filed under: Uncategorized

Thanks VMS, and thanks Dave, for everything

VAX/VMS was my first exposure to the world of real operating systems. Before it, the only operating system I had known was MS-DOS.

I remember the complexity, resilience and structure of this beautiful system, capable of extracting every single cycle of power from hardware roughly as powerful as an Intel 386 while sustaining two dozen terminals at a time. I remember the scheduler, with its two main classes of programs, batch and real time. I remember the extensive documentation: the joy of seeing a couple of shelves filled with manuals (even the help system had its own manual, both for the average user and for the developer). I also remember when, in 1994, I wrote a keylogger to steal the SYSTEM password (authorized, and in fact challenged, by my CS teacher; to tell the truth, the two passwords – yes, there were two of them – were part of a motto posted at the entrance of the computer lab). RUN AUTHORIZE was my first exposure to what, twenty years later, would be the Linux capabilities system.

So today is a sad day, because HP has announced that it will discontinue OpenVMS, the notable heir of VMS. There are many reasons for that, probably including some short-sightedness on the part of the DEC/Digital/Compaq executives, who didn’t believe in a multi-user, multi-program system quite different from Unix. If they had, the history of computing would probably have been different, because VMS on an Alpha processor was quite a number-crunching beast.

Filed under: Uncategorized

Import PST and DBX file into Thunderbird (or any other mail client)

Scenario: you have some .DBX or .PST files, and you want to import them into Thunderbird.

The easy way to accomplish this requires installing Outlook/Outlook Express, importing the files into that application, then firing up Thunderbird and using the Import menu (there’s also an extension; I’ve tested it and it doesn’t work). I don’t like this very much, as it requires installing another application; it would be better if Thunderbird could directly load a .DBX/.PST file, but this option is still missing as of Thunderbird 3.1.1.

So I have devised another approach:

  1. Convert the .DBX/.PST file into mbox format: readpst (available for Linux; on Fedora 12 it is in the libpst package) can handle the PST format, while DBXConv (available for Windows, AFAIK) can handle the DBX format; both of them will produce a standard MBOX file;
  2. Create a dedicated user on a Linux system, and copy the MBOX file produced in step 1 into /var/spool/mail/<username>;
  3. Install and configure dovecot to act as a POP3 server; starting from the standard configuration in Fedora 12, you have to change just a couple of parameters in /etc/dovecot.conf: the first one is protocols, which you’ll set to pop3; the second is mail_location, which must be set to /var/spool/mail/%u (%u will be expanded to the username, see the configuration file itself for more information); see the sketch after this list;
  4. Start dovecot, check that the firewall doesn’t block connections, etc.;
  5. Now in Thunderbird you can create a new account that connects to the POP3 server and downloads the content of the MBOX file; then you’ll select all the mails and copy them into a local folder.
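For reference, here is a minimal sketch of steps 1–4 on the Linux box. File names, the import user and the output folder are placeholders, and the dovecot lines simply restate the two parameters mentioned above for the Fedora 12 dovecot:

    # 1. Convert the PST archive to mbox format (readpst comes with libpst);
    #    it writes one mbox file per folder found in the PST
    readpst -o mbox-out/ old-mail.pst

    # 2. Create the dedicated user and put the mbox content into its system mailbox
    useradd import
    cat mbox-out/Inbox > /var/spool/mail/import
    chown import /var/spool/mail/import

    # 3. In /etc/dovecot.conf set the two parameters described above:
    #       protocols     = pop3
    #       mail_location = /var/spool/mail/%u

    # 4. Start dovecot (and check that the firewall allows port 110)
    service dovecot start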

This can be repeated for every file, when needed; also, the MBOX format is so good that you can concatenate multiple MBOX files into /var/spool/mail/<username> and then perform a single bigger import, maybe with some filters to dispatch the incoming mails.
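For example, appending two hypothetical archives before a single import is just:

    # mbox is plain text, so archives can simply be concatenated
    cat archive-1999 archive-2005 >> /var/spool/mail/import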

I’ve done this today for a friend of mine, and I’ve been able to import about 30,000 mails, from 1999 to today, coming from at least three different computers and multiple versions of Outlook and Outlook Express.

Of course, starting from now, he is kindly requested to use a mail client that natively supports an open format.

Filed under: Uncategorized

Certificate Patrol can really save your pocket

Certificate Patrol is a nice add-on for Firefox: it monitors all SSL connections and checks, each time a secure connection is set up, whether the certificate presented by the site has changed. This is extremely useful for determining whether you are under a man-in-the-middle attack.
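If you want to perform by hand the check that the add-on automates, you can dump the issuer and fingerprint of the certificate a site is currently presenting and compare them with what you saw last time; a quick sketch with openssl (the host name is just an example):

    # print issuer and SHA-1 fingerprint of the certificate presented by the server
    echo | openssl s_client -connect webmail.example.org:443 2>/dev/null |
        openssl x509 -noout -issuer -fingerprint -sha1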

To give you an idea: my university has a webmail service, which I use a lot. A couple of days ago I accessed this service from work, and Certificate Patrol popped up its warning screen.

The message is a bit cryptic, but the sense is clear if you know how to read it: the Certification Authority that guarantees the authenticity of the site I was using had changed, and was no longer Cybertrust. So I ran to the operations office and told them we were under attack, only to discover that they were doing a test, using some web proxy (I cannot tell you the name) to inspect all the SSL connections. Of course, it was just a test, but Certificate Patrol really did its job, alerting me that something strange was happening on the network.

It’s interesting to observe that, prior to the message, I was temporarily unable to access the webmail: I thought it was because they were experiencing problems, while it was actually due to operations reconfiguring the web proxy. When I was finally able to access the webmail, Firefox told me (using the standard message) that the connection to the website used an untrusted certificate, and my first idea was that the university had restarted the webmail and somehow changed the certificate, so I clicked, clicked and clicked again to tell Firefox that I was willing to accept the risks.

In fact, I did a stupid thing, because I should not accept, at least not so easily, that a website has changed its certificate to something not issued by a CA: without Certificate Patrol I would have been unaware of what was really happening.

And, if you think that you would never experience anything like this, because you always refuse certificates from an unknown CA, you’d better read Law Enforcement Appliance Subverts SSL and Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL, where another Firefox plugin addressing this kind of vulnerability is described.

Filed under: Uncategorized

Still waiting for a good e-book reader

For several years now, almost every year has started with the declaration that this will be the year of the Linux desktop. Although we are making some progress in developing a competing platform for the desktop PC (including the release of some malware via a screensaver application), we are not seeing such widespread adoption, and probably we won’t, as the hot spots today are cloud computing, virtual desktops, web-oriented operating systems and the like.
But another prediction could be made: that this year could be the year of the e-book reader. At least for me, as I have pondered a lot whether I should buy one of these during the holiday season. But I have reluctantly skipped this expensive self-gift, as I do not see a device that does all, and only, what it should do. The portable music market took off when Apple released the iPod, which does only one thing but does it very well, not from the technological point of view (it’s still an MP3 player, so relatively good sound quality) but in the interface itself, which lets people do what we want to do: choose and build our music library, arrange it and play it the way we want. This kind of extremely good design has not yet happened in the e-book reader market.
What should I expect to find in an e-book reader? I’ve come up with this sort of list:

  • a 9-10 inch display: a 6 inch display is too small to comfortably read a page, i.e. it will contain few words and, as a result, it will force you to turn pages more frequently; also, a 9-10 inch display allows zooming the text, so it makes the device adapt to me and not the contrary;
  • a touchscreen interface, as I already read books using my hands, and I do not want to use a stylus, which could easily get lost and would make things unnecessarily clumsy;
  • the ability to make notes, which would be especially useful for tech books and documentation;
  • the design principle, deeply rooted in the device, that I am the owner of my books, and I could do with them whatever I want to, including reading, taking notes, making summaries, lending and borrowing;
  • some kind of wireless connectivity, so I can move books to and from the device without setting up a physical connection (which, nevertheless, should be available);
  • the ability to read technical documentation, i.e. something available in PDF format but designed and developed with an A4 paper format in mind;
  • an integrated dictionary, something on the big and complex side (not a “First English Dictionary” but more like the Webster), letting me pinpoint a word and obtain its definition with a simple gesture;
  • a price not over 300 euros ($400), as otherwise the time it takes to repay the investment would be several years.

Even leaving aside the price limit, there are no devices on the market with all these features: most e-book readers have a 6 inch display, the Kindle is deeply integrated with Amazon’s DRM (which could lead to disasters like this), and some features are missing (the dictionary) or badly implemented (a stylus instead of a touchscreen).

It seems to me that designers are so satisfied with the e-Ink technology that they simply refuse to work more on the interface, emphasizing things like battery life (“you could read 10,000 pages before recharging”) rather than the most fundamental interactions with the device (“you can make notes, export them, share them with your friends”). It’s a pity, because as a result I’ll be forced to keep buying books and printing documentation, which pollutes a lot and requires a lot more trees to be sacrificed on the altar of knowledge.

Filed under: Uncategorized

Red Hat PV drivers made a huge difference

If you, like me, are using Xen with an HVM guest machine (i.e. with hardware-assisted virtualization), you may have experienced horrible network performance for the domU.

I tested the performance with the netperf tool. The first results were embarrassing, about 2.5 Mbps. The problem is very common, and it may be somehow related to the host NIC: I ran the tests on an HP DL380 G5, with two Broadcom 5708 NICs on board (it’s the standard configuration for an HP DL server; people have reported that with an Intel chipset based NIC everything works fine).
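For the record, the test itself is trivial: netserver runs on the receiving machine and netperf points at it (the address below is a placeholder); by default it runs a TCP_STREAM test and reports throughput in 10^6 bits/s:

    # on the receiving domU
    netserver

    # on the sending domU: a 30-second TCP stream test towards the receiver
    netperf -H 192.168.122.10 -l 30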

The problem was that there were a lot of dropped packets and errors, which force TCP into its congestion avoidance phase, so performance is almost zero.
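The drops and errors show up directly in the interface counters of the domU, for example:

    # per-interface error/drop counters (interface name may differ)
    netstat -i
    ifconfig eth0 | grep -E 'errors|dropped'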

I’ve unsuccessfully tried to do the following:

All of these, combined, gave me something, about 50 Mbps, but there were still dropped packets and transmission errors.

On the RHEL 5 mailing list, someone reported that para-virtualized drivers are now available for RHEL 5. I tried them, following the Red Hat guide, and performance jumped to an incredible 345 MBps of sustained throughput (between two domUs on the same dom0, i.e. memory-to-memory transfers).
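I won’t repeat the guide here, but after the switch it is worth checking that the guest really uses the para-virtualized NIC rather than the emulated Realtek one; module and interface names below are from my setup and may differ:

    # the Xen PV modules should be loaded in the domU...
    lsmod | grep xen

    # ...and the interface should report the PV network driver, not 8139cp/8139too
    ethtool -i eth0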

(Funnily enough, I’ve now found an Italian guide describing the same problem, although it seems to me they’re wrong in identifying the root cause: the problem is not the virtual NIC speed, it’s the errors.)

Filed under: Uncategorized

Review: Essential SNMP, 2nd edition

The last entry in this blog was one month ago; I’m unhappy with this, but as I need to write my Ph.D. dissertation within two or three months, I’m feeling the pressure and have very little time. But life goes on, so I’m still a Linux system administrator, and as such I feel that the most important skill I must develop is in the monitoring area. Only if you have a monitoring system that helps you track down software and, more importantly, hardware failures can you successfully administer a large cluster of machines and be productive and proactive; otherwise you’ll simply waste your time fixing today’s problem, and tomorrow will be another day with tomorrow’s problem.
This may sound very ordinary if you work in a corporate environment, but in Italy we have very few big customers, so the idea of monitoring is confined to a few magic gardens to which you are usually not invited. To learn the path to these gardens, I decided that this year I should focus on enterprise monitoring, and I started with the very basics of it, the SNMP protocol.
If you dig around the Internet for SNMP, you find some interesting tutorials; I greedily read them, but I realized that I needed something more robust and comprehensive. For my forma mentis (a Latin expression that roughly means the shape of one’s mind, and which has no Wikipedia entry) I cannot successfully use a layer 7 tool if I don’t have a good idea of the communication protocol it relies on. So I searched for an in-depth book and I finally landed on Essential SNMP, second edition, from O’Reilly.
I found it an excellent book for understanding what SNMP is and how it works, from the definitions to packet sniffing on the network to see real data exchanges. There are also some real programming examples if you want to write your own SNMP agent, so it’s a good starting point when you need to interact with a heavily customized environment.
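To give an idea of the kind of hands-on exploration the book encourages, this is the sort of query you end up running against any SNMP-enabled box with the net-snmp command line tools (host and community string are, of course, just examples):

    # read the system description and uptime via SNMP v2c
    snmpget -v 2c -c public 192.168.0.1 sysDescr.0 sysUpTime.0

    # walk the whole "system" subtree
    snmpwalk -v 2c -c public 192.168.0.1 system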
But it’s a bit outdated, as every example in the book is about configuring and using HP OpenView, whilst open source tools like Nagios, Zabbix, Zenoss and OpenNMS get no more than a few pages (if any) in the appendixes.
These tools, from what I understand so far, are usually hybrid, meaning that they cover both the hardware-level monitoring functions and the software ones. Some of them, like Nagios, come from application-level monitoring and have some SNMP extensions; others are natively in the application layer and go deep into the stack; others were designed with the idea of covering both areas. They are very different in installation requirements, required configuration effort and ease of maintenance. Some of them have a lot of plugins that make interaction with the hardware or the applications easy, some require more tweaking. Even support is completely different, ranging from a free consultancy market to a single company that writes the software, gives it to you for free, and tries to make revenue from the support service.
So, to go back to the long-term project, I think I should understand how these open source solutions work, compare them, and deploy one or more of them to have complete control over the infrastructure.

Filed under: Uncategorized