Bits and Chaos


Between bits and chaos, a sysadmin stands.

Pinnacle 50i with Fedora 8 64bit

Problem statement: I have a Pinnacle 50i and I want it working on Fedora 8 64 bit.


  • The card works well with Windows XP, I have audio and video, so if audio is missing on Linux it’s not a cable problem;
  • On Linux, I got the video but not the audio. On Fedora 6 the audio was present but feeble, which suggests the problem is partly due to kernel module configuration and partly to PulseAudio.

Solution steps:

  • In /etc/modprobe.conf, tell the system that we want to use a saa7134 card, plus the module that feeds its audio output into the ALSA subsystem:

options saa7134 card=77 video_nr=1 vbi_nr=1 radio_nr=1
install saa7134 /sbin/modprobe --ignore-install saa7134; /sbin/modprobe saa7134-alsa

  • As root, issue modprobe saa7134-alsa, then check that the card shows up as an audio capture source:

$ arecord -l

**** List of CAPTURE Hardware Devices ****
card 0: CK8S [NVidia CK8S], device 0: Intel ICH [NVidia CK8S]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: CK8S [NVidia CK8S], device 1: Intel ICH - MIC ADC [NVidia CK8S - MIC ADC]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 2: SAA7134 [SAA7134], device 0: SAA7134 PCM [SAA7134 PCM]
Subdevices: 0/1
Subdevice #0: subdevice #0
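The ALSA card index the SAA7134 gets (2 in the listing above) can change between boots. A small hypothetical helper, sketched here, can pull it out of the arecord -l output so later commands don't hardcode it:

```shell
# Hypothetical helper: extract the ALSA card index of the SAA7134
# capture device from `arecord -l` output read on stdin.
saa7134_card_index() {
  awk '/^card .*SAA7134/ { sub(":", "", $2); print $2; exit }'
}

# e.g.: CARD=$(arecord -l | saa7134_card_index)
```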

  • You have to perform some kind of magic sound rerouting, instructing sox to pipe the card's capture device into the sound card's playback device:

$ ls -l /dev/dsp*

crw-rw----+ 1 root root 14, 3 2008-01-27 12:50 /dev/dsp
crw-rw----+ 1 root root 14, 35 2008-01-27 13:31 /dev/dsp2
$ sox -c 2 -s -w -r 32000 -t ossdsp /dev/dsp2 -t ossdsp -w -r 32000 /dev/dsp

  • Now you can start your TV viewer of choice (yum install tvtime kdetv to get them both). It's possible that tvtime looks for a /dev/video0 device whilst you have /dev/video1. If this happens, issue tvtime-configure -d /dev/video1 to fix the configuration file.
  • Enjoy watching TV, which is something impossible on a Sunday afternoon in Italy due to the very poor quality of what's aired.
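The steps above can be tied together in a small wrapper, sketched here: start the sox rerouting in the background, run the TV viewer, and stop sox when it exits. The device names are the ones found above and are assumptions; adjust them to your system.

```shell
# Sketch: route the card's audio with sox while tvtime runs, then clean up.
start_tv() {
  tv_dsp=${1:-/dev/dsp2}    # the SAA7134 capture device (assumed)
  out_dsp=${2:-/dev/dsp}    # the sound card playback device (assumed)
  [ -c "$tv_dsp" ] || { echo "missing capture device $tv_dsp" >&2; return 1; }
  sox -c 2 -s -w -r 32000 -t ossdsp "$tv_dsp" -t ossdsp -w -r 32000 "$out_dsp" &
  sox_pid=$!
  tvtime -d /dev/video1
  kill "$sox_pid" 2>/dev/null
}

# start_tv             # use the defaults
# start_tv /dev/dsp3   # if the card landed on another device
```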


Filed under: fedora

Multiple Rails applications proxied by Apache 2.0

At the $JOB we are moving from a traditional LAMP stack to a Ruby on Rails based one. Security has always been central, so we chose to have one operating system user for each application: if an application is hit by a security flaw, it cannot affect the others, as the operating system prevents it from accessing and manipulating files and data that belong to other users. Of course, an application flaw combined with a kernel privilege escalation could be a bit more problematic, but it's far less likely that two different security problems appear at the same time, so if you keep your kernel (and core libraries) updated you should be able to contain the problem.
Besides this, we want to have only one access point for the Rails applications, a kind of grown-up intranet/extranet portal where each application is developed with an agile methodology (at least, that's what the developers tell me) so it can be updated more easily: one specific application for each specific task.
If you search the Internet, you'll find plenty of examples of configuring Apache 2.2 to work with Mongrel, leveraging mod_proxy_balancer. But Red Hat 4.x ships with Apache 2.0, so we had to go another way.
(Note: some of what follows is based on other blogs I've read. I'm stupid enough to be unable to remember which, but if you feel it could be your blog and work I'm standing on, tell me and I'll credit you.)
The scenario we consider is this one:
  • we have an access site, which may or may not have some content of its own;
  • we have a first Rails application, called First App, that should be accessed under the /firstapp URL prefix;
  • we have a second Rails application, called Second App, that should be accessed under the /secondapp URL prefix;
  • (we need someone paid to find fancy names for our applications, by the way).
Configuration of First App and Second App on the Mongrel side is assumed already done, for example by using my Mongrel helper script. One way or the other, we end up with First App listening on ports 3000, 3001 and 3002, and Second App on 3003 and 3004.
So we first write a file that will tell Apache where each application is listening:
$ cat firstapp.ports
ports 3000|3001|3002
$ cat secondapp.ports
ports 3003|3004
then we define the VirtualHost section for our access site (if you don't have a virtual-host based setup, skip the <VirtualHost> and </VirtualHost> markup):
<VirtualHost *:80>
ProxyRequests Off
# First app
ProxyPassReverse /firstapp http://localhost:3000/firstapp
ProxyPassReverse /firstapp http://localhost:3001/firstapp
ProxyPassReverse /firstapp http://localhost:3002/firstapp

# Second App
ProxyPassReverse /secondapp http://localhost:3003/secondapp
ProxyPassReverse /secondapp http://localhost:3004/secondapp

ProxyPreserveHost On
RewriteEngine On

# Rewriting maps
RewriteMap mongrelfirstapp   rnd:/path-to/firstapp.ports
RewriteMap mongrelsecondapp   rnd:/path-to/secondapp.ports

# Rewrite for First App
RewriteCond %{REQUEST_URI} ^/firstapp.*
RewriteRule ^/(.*) http://localhost:${mongrelfirstapp:ports}/$1 [P,L]

# Rewrite for Second App
RewriteCond %{REQUEST_URI} ^/secondapp.*
RewriteRule ^/(.*) http://localhost:${mongrelsecondapp:ports}/$1 [P,L]

#RewriteLogLevel 9
#RewriteLog logs/
ErrorLog logs/
</VirtualHost>

The ProxyPassReverse directives instruct Apache on how to deal with the Mongrel back-end servers; you need one directive for every listening Mongrel. Then we define two RewriteMaps, one for each application, called mongrelfirstapp and mongrelsecondapp. Thanks to the rnd: prefix, each map picks a random value for its ports parameter. Then we tell Apache that if the URI starts with /firstapp, it should proxy [P] the request to one of the three Mongrels serving the first application, chosen at random (mongrelfirstapp:ports), and that this must be the last URL rewriting rule applied to this request [L]. If the URI starts with /secondapp, it does the same with the Mongrels serving Second App. Otherwise, it serves the URI through the standard Apache mechanisms. If the access site is itself served via Mongrel, you can add similar directives where the prefix is simply / instead of /firstapp.

Speaking of prefixes, the Mongrel instances must be started with a --prefix parameter equal to the path used in the RewriteCond (so --prefix=/firstapp, with no trailing slash). It's possible to add some syntactic sugar to avoid this, but I prefer that the application itself has an idea of where it is located.
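For illustration, here is a hypothetical helper that prints the mongrel_rails command line for each Mongrel of an application, given its URL prefix and ports; the production environment and pid-file paths are assumptions, not something my setup mandates:

```shell
# Hypothetical helper: print one mongrel_rails start command per port.
mongrel_cmds() {
  prefix=$1; shift
  for port in "$@"; do
    echo "mongrel_rails start -d -e production -p $port --prefix $prefix -P log/mongrel.$port.pid"
  done
}

# mongrel_cmds /firstapp 3000 3001 3002   # inspect the commands, then pipe to sh
```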

About performance: this configuration proxies every request, including the ones asking for static content. It's possible to define some rewrite conditions so that the Mongrel instances intervene only when there's actually some Rails code to run, relying on Apache otherwise, but I don't need this in my setup.
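For reference, a sketch (untested) of one way to do that, assuming each application's public/ directory is reachable under the Apache DocumentRoot: add a file-existence check before the proxy rule, so Apache serves files it finds on disk by itself.

```apache
# Proxy to a Mongrel only when the requested file does not exist on disk
RewriteCond %{REQUEST_URI} ^/firstapp.*
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule ^/(.*) http://localhost:${mongrelfirstapp:ports}/$1 [P,L]
```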

Filed under: ruby on rails

Why Sun acquired MySQL AB

ACM Queue, Vol. 5, No. 6, Sep/Oct 2007, "A Conversation with Jeff Bonwick and Bill Moore":

“If you have a database sitting on top of a transactional file system, the database is sitting up there being very careful about ordering its writes to the log and being clear about saying when it wants to know this stuff is flushed out. Then beneath it, you’ve got this transactional file system creating transaction groups, putting in a bunch of changes, and then committing those out to disk atomically.
Why not have a blending of the layers where basically the whole back end of the database – the part that isn’t parsing the SQL – participates directly in the transaction groups that ZFS provides, so that consistency is no longer a problem the database has to solve? It can be handled entirely by the storage software below”.
So, Sun has acquired MySQL AB to spread the adoption of its ZFS file system. Interestingly enough, MySQL has a plugin-based architecture, so it's possible to define more than one storage engine to deal with your data. Predicting that we'll soon see a MySQL engine that leverages ZFS is almost too easy.
Now that this acquisition is done, I can write some more. About one year ago I heard rumors that Red Hat was considering buying MySQL AB, and that they stopped at the very last moment because they preferred not to put their relationship with Oracle at risk. It's entirely possible (but this is my own speculation) that Oracle Enterprise Linux came out as a stop signal for Red Hat, which got the message.
Now some key pieces of a modern Linux installation (the Java stack, the MySQL database) that Red Hat supports and offers to its customers are owned by Sun, which has become a key player in the Linux market. Another speculation is that, in one or two years' time, Solaris will be the best choice for a "LAMP" architecture, renaming it "SAMP".
Yes, you can use Ruby on Rails if you want to depart from PHP and avoid Java, but recall who is behind JRuby, and you get the picture.

Filed under: mysql, oss

Review: Essential SNMP, 2nd edition

My last entry in this blog was one month ago. I'm unhappy with this, but as I need to write my Ph.D. dissertation within two or three months, I'm feeling the pressure and have very little time. But life goes on, so I'm still a Linux system administrator, and as such I feel that the most important skill I must develop is in the monitoring area. Only if you have a monitoring system that helps you track down software and, more importantly, hardware failures can you successfully administer a large cluster of machines and be productive and pro-active; otherwise you'll simply waste your time fixing today's problem, and tomorrow will be another day with tomorrow's problem.
This may sound very common if you work in a corporate environment, but in Italy we have very few big customers, so the idea of monitoring is confined to some magic gardens where you are usually not invited. To learn the path to these gardens, I decided that this year I would focus on enterprise monitoring, and I started with the very basics of it: the SNMP protocol.
If you dig around the Internet for SNMP you find some interesting tutorials; I greedily read them, but I realized I needed something more robust and comprehensive. For my forma mentis (a Latin expression meaning the shape or setup of one's mind, with no Wikipedia entry) I cannot successfully use a layer-7 tool if I don't have a good idea of the communication protocol it relies on. So I searched for an in-depth book and finally landed on Essential SNMP, second edition, from O'Reilly.
I found it an excellent book for understanding what SNMP is and how it works, from the definitions down to sniffing packets on the network to see real data exchanges. There are also some real programming examples if you want to write your own SNMP agent, so it's a good starting point when you need to interact with a heavily customized environment.
But it's a bit outdated, as every example in the book is about configuring and using HP OpenView, whilst open source tools like Nagios, Zabbix, Zenoss and OpenNMS get no more than a few pages (if any) in the appendixes.
These tools, from what I've understood so far, are usually hybrid, meaning they cover both the hardware-level monitoring functions and the software ones. Some of them, like Nagios, come from application-level monitoring and have some SNMP extensions; others are natively in the application layer and go deep down the stack; others were designed from the start to cover both areas. They are very different in installation requirements, configuration effort, and ease of maintenance. Some have a lot of plugins that make interacting with the hardware or the applications easy; some require more tweaking. Even support is completely different, ranging from a free consultancy market to a single company that writes the software, gives it to you for free, and tries to make revenue from the support service.
So, to go back to the long-term project: I think I should understand how these OSS solutions work, compare them, and deploy one or more of them to gain complete control over the infrastructure.

Filed under: Uncategorized