October 24, 2013 • 3:59 pm 0
On behalf of the NSA, I highly recommend this awesome extension of IMAP.
June 28, 2013 • 3:31 pm 0
The NYT has the definitive solution to the very complex problem of digital signatures.
Wondering who’s gonna be the first one to patent it.
June 18, 2013 • 9:38 am 0
VAX/VMS was my first exposure to the real world of real operating systems. Before it, I had only used MS-DOS.
I remember the complexity, resilience and structure of this beautiful system, capable of extracting every single cycle of power from hardware roughly as powerful as an Intel 386 while sustaining two dozen terminals at a time. I remember the scheduler, with its two main classes of processes, batch and real-time. I remember the extensive documentation: the joy of seeing a couple of shelves filled with manuals (even the help system had its own manual, both for the average user and for the developer). I also remember when, in 1994, I wrote a keylogger to steal the SYSTEM password (authorized, and in fact challenged, by my CS teacher; and to tell the truth, the two passwords – yes, there were two of them – were part of a motto posted at the entrance of the computer lab). RUN AUTHORIZE was my first exposure to what, twenty years later, would become the Linux capabilities system.
So today is a sad day, because HP has announced that it will discontinue OpenVMS, the notable heir of VMS. There are many reasons for that, probably including some short-sightedness on the part of the DEC/Digital/Compaq executives, who didn't believe in a multi-user, multi-program system quite different from Unix. If they had, the history of computing would probably have been different, because VMS on an Alpha processor was quite a number-crunching beast.
July 25, 2010 • 8:41 pm 0
Scenario: you have some .MBX or .PST files, and you want to import them into Thunderbird.
The easy way to accomplish this requires installing Outlook/Outlook Express, importing the files into that application, then firing up Thunderbird and using its Import menu (there's also an extension, but I've tested it and it doesn't work). I don't like this very much, as it requires installing another application; it would be better if Thunderbird could directly load an .MBX/.PST file, but this option is still missing as of Thunderbird 3.1.1.
So I have devised another approach:
This can be repeated for every file, when needed; also, the MBOX format is so simple that you can concatenate multiple MBOX files into /var/spool/mail/<username> and then perform a single, bigger import, maybe with some filters to dispatch the incoming mails.
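The concatenation trick works because MBOX is plain text where every message begins with a "From " envelope line. A minimal sketch, with made-up filenames standing in for the converted Outlook exports:

```shell
# Create two tiny MBOX files standing in for the converted exports
# (in practice these come from the .MBX/.PST conversion step).
printf 'From alice@example.com Sat Jan  1 00:00:00 2000\nSubject: one\n\nhi\n' > old-pc.mbox
printf 'From bob@example.com Sun Jan  2 00:00:00 2000\nSubject: two\n\nhello\n' > laptop.mbox
# Each message starts with a "From " envelope line, so plain
# concatenation yields a valid combined mailbox for a single import.
cat old-pc.mbox laptop.mbox > combined.mbox
# Count the messages in the combined mailbox
grep -c '^From ' combined.mbox
```

In the real scenario the output file would be /var/spool/mail/&lt;username&gt; rather than combined.mbox.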
I’ve done this today for a friend of mine, and I’ve been able to import about 30,000 mails, from 1999 to today, coming from at least three different computers and multiple versions of Outlook and Outlook Express.
Of course, from now on, he is kindly requested to use a mail client that natively supports an open format.
March 29, 2010 • 11:32 am 4
Certificate Patrol is a nice add-on for Firefox: it monitors all SSL connections and checks whether the certificate a site presents has changed since your last visit. This is extremely useful for detecting a man-in-the-middle attack.
To give you an idea: my university has a webmail service, which I use a lot. A couple of days ago, I accessed this service from work, and Certificate Patrol showed this message screen:
The message is a bit cryptic, but the meaning is clear if you know how to read it: the Certification Authority guaranteeing the authenticity of the site I was using had changed, and was no longer Cybertrust. So I ran into the operations office and told them we were under attack, only to discover that they were running a test, using some (I cannot tell you the name) web proxy to inspect all SSL connections. Of course, it was just a test, but Certificate Patrol really did its job, alerting me that something strange was happening in the network.
It's interesting to observe that, prior to the message, I was temporarily unable to access the webmail: I thought they were experiencing problems, while it was actually operations reconfiguring the web proxy. When I was finally able to access the webmail, Firefox told me (with its standard warning) that the connection used an untrusted certificate, and my first thought was that the webmail server had been rebooted and the certificate somehow changed, so I clicked, clicked and clicked again to tell Firefox I was willing to accept the risks.
In fact, I did a stupid thing, because I should not accept, at least not so easily, that a website has changed its certificate to one not issued by a known CA: without Certificate Patrol I would have been unaware of what was really happening.
And if you think you would never experience anything like this, because you always refuse certificates from an unknown CA, you'd better read Law Enforcement Appliance Subverts SSL and Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL, where another Firefox plugin addressing this kind of vulnerability is presented.
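The idea behind Certificate Patrol can also be sketched by hand with openssl: store the fingerprint of the certificate a site presents, and compare it on later visits. In this offline sketch a self-signed certificate (with a made-up hostname) stands in for the one a real server would send:

```shell
# Generate a throwaway self-signed certificate to play the server's role
# (the hostname is hypothetical)
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
        -subj "/CN=webmail.example.org" -days 1 2>/dev/null
# First visit: remember the certificate's fingerprint
openssl x509 -in cert.pem -noout -fingerprint -sha256 > stored.fp
# Later visit: compute the fingerprint again and compare
openssl x509 -in cert.pem -noout -fingerprint -sha256 > current.fp
if cmp -s stored.fp current.fp; then
    echo "certificate unchanged"
else
    echo "WARNING: certificate changed - possible man-in-the-middle"
fi
```

Against a live server you would obtain the certificate with `openssl s_client` instead of reading it from a file; the comparison step is the same.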
March 19, 2010 • 11:33 am 0
We learned, and taught, the Internet as a hierarchy of ASes, from local ISPs up through regional providers to Tier 1 carriers, with traffic gracefully moving between the levels.
This is no longer true, according to this presentation from NANOG47:
It’s time to rewrite some courses material.
January 7, 2010 • 2:32 pm 3
Almost every year, for several years now, starts with the declaration that this will be the year of the Linux desktop. Although we are making some progress in developing a competing desktop platform (including the release of some malware via a screensaver application), we are not seeing widespread adoption, and probably we won't, as the hot spots today are cloud computing, virtual desktops, web-oriented operating systems and the like.
But another prediction could be made: this could be the year of the e-book reader. At least for me, as I pondered a lot over whether to buy one this holiday season. In the end I passed on this expensive self-gift, as I don't see a device that does all, and only, what it should do. The portable audio market took off when Apple released the iPod, which did only one thing but did it very well: not from a technological point of view (it's still MP3, so relatively good sound quality) but in the interface itself, which lets people do what we want to do: choose and build our music library, arrange it and play it the way we want. That kind of extremely good design has not yet happened in the e-book reader market.
What should I expect to find in an e-book reader? I've come up with this sort of list:
Even ignoring the price limit, there is no device on the market with all these features: most e-book readers have a 6-inch display, the Kindle is deeply integrated with Amazon's DRM (which can lead to disasters like this), and some features are missing (the dictionary) or badly implemented (a stylus instead of a touchscreen).
It seems to me that designers are so satisfied with the e-Ink technology that they simply refuse to work more on the interface, emphasizing things like battery life (“you could read 10,000 pages before recharging”) rather than the most fundamental interactions with the device (“you can make notes, export them, share them with your friends”). It's too bad, because as a result I'll be forced to keep buying books and printing documentation, which pollutes a lot and requires many more trees to be sacrificed on the altar of knowledge.
December 8, 2009 • 1:30 pm 5
Namebench determines your current DNS setup, suggests some DNS servers you could use according to your ISP and geographic region, and tests them against, of course, Google Public DNS as well.
Each DNS server is tested on the resolution of the 10,000 most popular site names, according to the Alexa web survey. Each DNS test is run in parallel with the others, so network latency spikes are more evenly distributed.
I gave it a try, to measure how fast Google's DNS servers are, how well my ISP performs, and how good the local DNS I'm using is.
Namebench produces a lot of data; for the sake of clarity I show here only the graph of response times, trimmed at 200ms: any resolution taking longer than 200ms falls off the graph.
In all the graphs, you can see that almost every DNS server does a lot of caching: the cache reduces response times to almost zero, and after a cache miss the response time increases almost linearly, as the DNS server must perform a recursive query to answer the client.
I made three runs of Namebench, to see how much the cache matters for my local DNS server, which is the standard BIND shipped with Fedora 11, configured as a caching nameserver, without chrooting.
On the first graph, you can see that my local DNS resolves about 10% of the requests extremely fast: these requests are answered from the local cache or require little interaction with external (root) nameservers. All other requests require some network interaction, and the response time increases linearly. Keep in mind that all the graphs show responses of up to 200ms, so they exclude the unlucky interactions where my local DNS takes 1800ms to give an answer: the local DNS has the worst performance in these (rare) cases.
The second graph is for a run made immediately after the first, to see the effect of the local DNS cache filling up: about 25% of the requests are now satisfied by the cache. In this run, Namebench replaced UltraDNS with the DNS of the University of Basilicata, Italy.
On the third graph, the local DNS cache behaves the same as in the second run, so there is a cache saturation effect. The local DNS is not suffering from memory pressure, so there is no point in increasing the local cache size via the max-cache-size directive.
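For reference, a caching-only BIND setup like the one described above amounts to a handful of lines in /etc/named.conf; the values below are illustrative, not my actual configuration:

```
options {
        // answer recursive queries, but only for local clients
        listen-on port 53 { 127.0.0.1; };
        allow-query { localhost; };
        recursion yes;
        // upper bound on the cache; BIND accepts k/m/g suffixes
        // (32m is an illustrative value, not a recommendation)
        max-cache-size 32m;
};
```

As noted above, since the cache saturates well below any reasonable limit here, raising max-cache-size would change nothing in these benchmarks.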
There is something more in the graphs. The response curves show a constant (near-zero) time for part of the queries, which means the caches are effective; then response times grow linearly as cached data expires and the queried name server must contact the authoritative name servers through recursive queries. Also:
It is important to stress that the tests are made over the list of the 10,000 most popular websites: it's probably the only way to benchmark general use, but if you visit just a bunch of sites (as is usually the case) you must consider how well these results apply to your environment. Also, these websites are all treated equally, while clearly popularity plays a role whenever you deal with a cache.
These benchmarks have shown that my current setup (a local DNS) is the best, but when a cache miss occurs, and there are a lot of recursive queries to be made, the local router (and its UDP NATting) is the bottleneck. Nothing to worry about, but an interesting insight to gain.
Generally speaking, it's fair to say that Google Public DNS is quite a good infrastructure, a fierce competitor both to ISP-provided DNS (which has the big advantage of network proximity) and to OpenDNS (which has been around for several years now).
September 15, 2009 • 2:23 pm 1
An .nrg file can easily be converted into an ISO 9660 image by skipping its initial 150 blocks of 2048 bytes:
dd if=image.nrg of=image.iso bs=2048 skip=150
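To convince yourself of the skip arithmetic, you can replay the trick on synthetic data (the filenames are placeholders, not a real Nero image):

```shell
# Build a fake .nrg: a 150-block (307,200-byte) header followed by a payload
dd if=/dev/zero of=header.bin bs=2048 count=150 2>/dev/null
printf 'ISO payload' > payload.bin
cat header.bin payload.bin > image.nrg
# Strip the header exactly as above: skip 150 blocks of 2048 bytes
dd if=image.nrg of=image.iso bs=2048 skip=150 2>/dev/null
cat image.iso   # prints: ISO payload
```

With a real image, `file image.iso` should then report an ISO 9660 filesystem.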