Bits and Chaos

Between bits and chaos, a sysadmin stands.

Security by stupidity

On behalf of the NSA, I highly recommend this awesome extension of IMAP.

Filed under: security

Too bad we didn’t think of it before

The NYT has the definitive solution to the very complex problem of digital signatures.

Wondering who’s gonna be the first one to patent it.

Filed under: Uncategorized

Thanks VMS, and thanks Dave, for everything

VAX/VMS was my first exposure to the real world of real operating systems. Before it, the only operating system I had used was MS-DOS.

I remember the complexity, resilience and structure of this beautiful system, capable of extracting every single cycle of power from hardware roughly as powerful as an Intel 386 while sustaining two dozen terminals at a time. I remember the scheduler, with its two main classes of programs, batch and real time. I remember the extensive documentation: the joy of seeing a couple of shelves filled with manuals (even the help system had its own manual, both for the average user and for the developer). I also remember when, in 1994, I wrote a keylogger to steal the SYSTEM password (authorized and in fact challenged by my CS teacher; to tell you the truth, the two passwords – yes, there were two of them – were part of a motto posted at the entrance of the computer lab). RUN AUTHORIZE was my first exposure to what, twenty years later, would become the Linux capabilities system.

So today is a sad day, because HP has announced that it will discontinue OpenVMS, the notable heir of VMS. There are many reasons for that, probably including some short-sightedness on the part of the DEC/Digital/Compaq executives, who didn’t believe in a multi-user, multi-program system quite different from Unix. Had they believed in it, the history of computing would probably have been different, because VMS on an Alpha processor was quite a number-crunching beast.

Filed under: Uncategorized

Import PST and DBX file into Thunderbird (or any other mail client)

Scenario: you have some .DBX or .PST files, and you want to import them into Thunderbird.

The easy way to accomplish this requires installing Outlook/Outlook Express, importing the files into that application, then firing up Thunderbird and using the Import menu (there’s also an extension, but I’ve tested it and it doesn’t work). I don’t like this very much, as it requires installing another application; it would be better if Thunderbird could directly load a .DBX/.PST file, but this option is still missing as of Thunderbird 3.1.1.

So I have devised another approach:

  1. Convert the .DBX/.PST file into mbox format: readpst (available for Linux; on Fedora 12 it’s in the libpst package) can handle the PST format, while DBXConv (available for Windows, AFAIK) can handle the DBX format; both of them will produce a standard MBOX file (a rough command sketch follows this list);
  2. Create a dedicated user on a Linux system, and copy the MBOX file produced in step 1 into /var/spool/mail/<username>;
  3. Install and configure dovecot to act as a POP3 server; starting from the standard configuration in Fedora 12, you have to change just a couple of parameters in /etc/dovecot.conf: the first one is protocols, which you’ll set to pop3, the second is mail_location, which must be set to /var/spool/mail/%u (%u will be expanded to the username, see the configuration file itself for more information);
  4. Start dovecot, check that the firewall doesn’t block connections, etc.;
  5. Now in Thunderbird you can create a new account that connects to the POP3 server and download the content of the MBOX file; then select all the mails and copy them into a local folder.
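
For reference, here is a minimal sketch of the Linux side of steps 1–4; the user name mailimport and the file name archive.pst are placeholders I picked for illustration, and the exact dovecot directives may vary slightly between versions:

# step 1: convert the PST archive to mbox format (one mbox file per mail folder)
readpst -o /tmp/converted archive.pst

# step 2: create a dedicated user and put the converted mail in its spool file
useradd mailimport
cat /tmp/converted/* > /var/spool/mail/mailimport
chown mailimport /var/spool/mail/mailimport

# step 3: in /etc/dovecot.conf set, as described above:
#   protocols = pop3
#   mail_location = /var/spool/mail/%u

# step 4: start dovecot (Fedora 12 init script)
service dovecot start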

This can be repeated for every file as needed; also, the MBOX format is so convenient that you can concatenate multiple MBOX files into /var/spool/mail/<username> and then perform a single, bigger import, maybe with some filters to dispatch the incoming mails.
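
For example (the file names and the user are placeholders from the sketch above), concatenating is just:

cat mail1999.mbox mail2005.mbox >> /var/spool/mail/mailimport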

I’ve done this today for a friend of mine, and I’ve been able to import about 30,000 mails, from 1999 to today, coming from at least three different computers and multiple versions of Outlook and Outlook Express.

Of course, starting from now, he is kindly requested to use a mail client that natively supports an open format.

Filed under: Uncategorized

Certificate Patrol can really save your skin

Certificate Patrol is a nice add-on for Firefox: it monitors all SSL connections and checks, on each connection, whether the certificate presented by the site has changed since your last visit. This is extremely useful for detecting whether you are under a man-in-the-middle attack.
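
You can do a rough version of the same check by hand: the following one-liner (the host name is only an example) prints the fingerprint of the certificate a server is presenting right now, which you can compare with the one you noted on a previous visit:

echo | openssl s_client -connect webmail.example.org:443 2>/dev/null | openssl x509 -noout -fingerprint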

To give you an idea: my university has a webmail service, which I use a lot. A couple of days ago I accessed this service from work, and Certificate Patrol showed this message screen:

The message is a bit cryptic, but the meaning is clear if you know how to read it: the Certification Authority that guarantees the authenticity of the site I was using had changed, and it was no longer Cybertrust. So I ran into the operations office and told them we were under attack, only to discover that they were running a test, using some (I cannot tell you the name) web proxy to inspect all the SSL connections. Of course it was just a test, but Certificate Patrol really did its job, alerting me that something strange was happening on the network.

It’s interesting to note that, prior to the message, I was temporarily unable to access the webmail: I thought it was because they were experiencing problems, while it was actually due to operations reconfiguring the web proxy. When I was finally able to access the webmail, Firefox told me (with its standard message) that the connection to the website was using an untrusted certificate, and my first thought was that they had rebooted the webmail server at the university and somehow changed the certificate, so I clicked, clicked and clicked again to tell Firefox that I was willing to accept the risks.

In fact, I did a stupid thing, because I should not accept, at least not so easily, that a website changes its certificate to something not issued by a CA: without Certificate Patrol I would have been unaware of what was really happening.

And if you think you would never experience anything like this, because you always refuse certificates from an unknown CA, you’d better read Law Enforcement Appliance Subverts SSL and Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL, where another Firefox plugin addressing this kind of vulnerability is presented.

Filed under: Uncategorized

Internet Traffic Consolidation

We have learned and taught the Internet as a hierarchy of ASes (autonomous systems), starting from local ISPs up to regional providers and Tier 1 carriers, with traffic gracefully moving between the levels.

This is no longer true, according to this presentation from NANOG47:

  • 150 ASes account for 50% of all Internet traffic;
  • Revenues from Internet transit are declining, while revenues from Internet advertisement compensate;
  • The new rule is 30/30: the top 30 destinations (Google, Yahoo, Facebook, …) account for 30% of all traffic, so if you are a provider you’d better make a deal with them: your customers would get a better Internet experience, which is a commercial advantage; as a result, YouTube’s bandwidth bill is a lot less than you might imagine.

It’s time to rewrite some courses material.

Filed under: network

LinkedIn and MTU settings for Linux systems

For reasons quite beyond my understanding, some Linux systems (including mine) are unable to access LinkedIn. Symptoms include hanging forever after the login page: you can reach the authentication page and read a few profiles, including yours, but cannot do much more.

This can be fixed by issuing, as root:

ifconfig eth0 mtu 1360

(assuming that you reach the Internet via eth0).
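
If you want to confirm it really is an MTU problem before touching the interface, a rough test (1332 here is just 1360 minus the 28 bytes of IP and ICMP headers; the host name is an example) is to send non-fragmentable pings and see at which size they start to fail:

ping -M do -s 1332 www.linkedin.com

On recent systems the same MTU change can also be made with ip link set dev eth0 mtu 1360; either way, it is lost at reboot unless you persist it in your interface configuration.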

It’s quite a strange situation, indeed. The only other time I had to do something like this was when I was trying to reach a Moodle server that we had put on a LAN connected to the Internet via a consumer ADSL connection; the server was reachable by every customer of the same ISP but hung for everyone else, every time the poor guy requested a page whose size reached or exceeded the TCP/IP maximum payload (I guess due to some NAT magic/MPLS black magic/peering sorcery that happens only for customers outside the AS).

I’m pretty sure that LinkedIn is not using a consumer ADSL line to connect itself to the Internet, and that they are seeing a constant loss of Linux users due to this issue, which is very difficult to spot.

Filed under: network

Still waiting for a good e-book reader

For several years now, almost every year has started with the declaration that it will be the year of the Linux desktop. Although we are making some progress in developing a competing platform for the desktop PC (including the release of some malware via a screensaver application), we are not seeing such widespread adoption, and probably we won’t, as the hot spots today are cloud computing, virtual desktops, web-oriented operating systems and whatnot.
But another prediction could be made: this could be the year of the e-book reader. At least for me, as I have pondered a lot whether I should buy one during this holiday season. In the end I passed on this expensive self-gift, as I am not seeing a device that does all, and only, what it should do. The portable music market took off when Apple released the iPod, which does only one thing but does it very well, not from the technological point of view (it’s still MP3, so relatively good sound quality) but in the interface itself, which lets people do what we want to do: choose and build our music library, arrange it and play it the way we want. This kind of extremely good design has not yet happened in the e-book reader market.
What should I expect to find in an e-book reader? I’ve come up with this sort of list:

  • a 9-10 inch display: a 6 inch display is too small to comfortably read a page, i.e. it will contain few words and, as a result, force you to turn pages more frequently; a 9-10 inch display also allows zooming the text, so the device adapts to me and not the other way around;
  • a touchscreen interface, as I already read books using my hands, and I do not want to use a stylus, which could easily get lost and makes things unnecessarily clumsy;
  • the ability to take notes, which would be especially useful for tech books and documentation;
  • the design principle, deeply rooted in the device, that I am the owner of my books, and I can do with them whatever I want, including reading, taking notes, making summaries, lending and borrowing;
  • some kind of wireless connectivity, so I can move books to and from the device without setting up a physical connection (which, nevertheless, should be available);
  • the ability to read technical documentation, i.e. material available in PDF format but designed and laid out with an A4 paper format in mind;
  • an integrated dictionary, something on the big and complex side (not a “First English Dictionary” but more like the Webster), letting me pinpoint a word and get its definition with a simple gesture;
  • a price not over 300 euros ($400), as otherwise it would take several years to repay the investment.

Even leaving aside the price limit, there is no device on the market with all these features: most e-book readers have a 6 inch display, the Kindle is deeply integrated with Amazon’s DRM (which can lead to disasters like this), and some features are either missing (the dictionary) or badly implemented (a stylus instead of a touchscreen).

It seems to me that designers are so satisfied with the e-Ink technology that they simply refuse to work more on the interface, emphasizing things like battery life (“you could read 10,000 pages before recharging”) rather than the most fundamental interactions with the device (“you can take notes, export them, share them with your friends”). It’s too bad, because as a result I’ll be forced to keep buying paper books and printing documentation, which pollutes a lot and requires many more trees to be sacrificed on the altar of knowledge.

Filed under: Uncategorized

Google’s namebench and your name server

Google has recently announced its own public DNS service, responding at the IP addresses 8.8.8.8 and 8.8.4.4 (how nice). They have also released namebench, a Python tool to compare DNS performance.
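
To try it out on a typical Linux box, it’s just a matter of listing those addresses in /etc/resolv.conf (assuming nothing else manages that file for you):

nameserver 8.8.8.8
nameserver 8.8.4.4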

namebench basically detects your current DNS setup, selects some DNS servers you could use according to your ISP and geographic region, and tests them all, together, of course, with Google Public DNS.

Each DNS server is tested on the resolution of the 10,000 most popular site names, according to the Alexa web survey. Each DNS test is run in parallel with the others, so network latency spikes are more evenly distributed.

I gave it a try, to measure how fast the Google DNS servers are, how well my ISP performs and how good the local DNS I’m using is.
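
As a quick manual cross-check, independent of namebench, dig reports how long a single query took (the resolver and the name here are only examples):

dig @8.8.8.8 www.example.com | grep "Query time"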

namebench produces a lot of data; for the sake of clarity I show here only the response time graphs, trimmed to the first 200 ms: any resolution taking more than 200 ms falls outside the graph.

In all the graphs you can see that almost every DNS server does a lot of caching: the cache reduces the response time to almost zero, and after a cache miss the response time increases almost linearly, as the DNS server must perform a recursive query to give the answer to the client.

I made three runs of namebench, to see how much the cache matters for my local DNS server, which is the standard BIND shipped with Fedora 11, configured as a caching nameserver, without chrooting.

In the first graph you can see that my local DNS resolves about 10% of the requests extremely fast: these requests are answered from the local cache or require little interaction with external (root) nameservers. All other requests require some network interaction, and the response time increases linearly. Keep in mind that all the graphs show responses taking up to 200 ms, so they do not include the unlucky interactions where my local DNS takes 1800 ms to give an answer: the local DNS has the worst performance in those (rare) cases.

The second graph is for a run made immediately after the first, to see the effect of the local DNS cache filling up: about 25% of the requests are now satisfied by the cache. In this run namebench replaced UltraDNS with the DNS of the University of Basilicata, Italy.

In the third graph, the cache of the local DNS behaves the same as in the second run, so there is a cache saturation effect. The local DNS is not suffering from memory pressure, so there is no point in increasing the local cache size via the max-cache-size directive.
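
For the record, if you ever did need to raise it, that directive goes in the options block of named.conf; a minimal sketch, with 64M as a purely illustrative value:

options {
        // upper bound for the memory used by the resolver cache
        max-cache-size 64M;
};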

There is something more in the graphs. The response curves start with a constant (near zero) time for some of the queries, which means the caches are big; then the response times grow linearly as the cached data expires and the queried name server must contact the authoritative name servers through a recursive query. Also:

  1. Google Public DNS has a cache hit for almost 50% of the requests; on a cache miss the response time is dominated by the network time (from the DNS server’s point of view, i.e. the time it takes to do a recursive query), but this time is almost constant;
  2. OpenDNS response curves are initially linear, which could mean that the network path to the OpenDNS servers is not as optimized as Google’s, but after that the cache does its job;
  3. My ISP’s DNS (labeled Wind2-IT) usually performs well, probably more because the network path works in its favor; it’s entirely possible that its cache is not that big;
  4. The local DNS suffers when, to fulfill a request, it has to make some recursive queries, as these are carried over UDP and the local router is not highly optimized for UDP NATting (an educated guess).

It is important to stress that the tests are made over the list of the 10,000 most popular websites: it’s probably the only way to benchmark general use, but if you only ever visit a handful of sites (as is usually the case) you must consider how far these results apply to your environment. Also, these websites are all treated as equal, while clearly popularity plays a role every time you deal with a cache.

These benchmarks have shown that my current setup (a local DNS) is the best, but when a cache miss occurs, and there are a lot of recursive queries to be made, the local router (and its UDP NATting function) is the bottleneck. Nothing to worry about, but an interesting insight to gain.

Generally speaking, it’s fair to say that Google Public DNS is quite a good infrastructure, a fierce competitor both to ISP-provided DNS (which has the big advantage of network proximity) and to OpenDNS (which has been around for several years).

Filed under: network

Convert a .NRG file into an .ISO file

An .nrg file can be easily converted into an ISO 9660 file by skipping its initial 150 2048-byte blocks:

dd if=image.nrg of=image.iso bs=2048 skip=150
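
To verify the result you can, for instance, let file identify it (it should report an ISO 9660 filesystem) or mount it loopback; the mount point below is just an example:

file image.iso
mkdir -p /mnt/iso
mount -o loop image.iso /mnt/iso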

Filed under: Desktop
