Saturday, December 4, 2010

Part II: Adding SSH Public Keys to DNS

A fairly quick one: you'll need sshfp (available in EPEL for CentOS) installed.

Just need to type:
sshfp -s crane.initd.net
which should give the response:

crane.initd.net IN SSHFP 1 1 c4d55ccbc4fdd2f4304586d6cdc4ad6fca5c743e
crane.initd.net IN SSHFP 2 1 7b504855c13277490992dea7091537d0f9bfdb1d
You just need to add these lines to your DNS zone for initd.net.

To get SSH to check these keys, you'll need to modify your /etc/ssh/ssh_config or ~/.ssh/config to include
VerifyHostKeyDNS yes
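
To confirm the records are live and that ssh picks them up, something along these lines works (the -o form is handy for a one-off test without touching your config):
dig +short crane.initd.net SSHFP
ssh -o VerifyHostKeyDNS=ask crane.initd.net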

Wednesday, December 1, 2010

Adding GPG public keys to your DNS

Recently I read a couple of interesting posts which outlined how to add GPG public keys to a zone file.

Reasons for wanting GPG key in DNS:

  • It's a nice way for people who wish to get your GPG key to obtain it.
  • It won't typically get blocked by firewalls; I've had major problems in the past getting hkp etc. working through firewalls.
  • I prefer hosting the key myself as opposed to using subkeys.pgp.net or similar (lack of control).

There are three methods to do this - I'll go into detail on two:

  1. Use the PKA record (a TXT-type record), which gives a URI pointing to the location of your key (http, finger, etc.)
  2. PGP Cert record.
  3. IPGP Cert Record (Won't go into much detail on this).

PKA Record:
This is the quickest/easiest method; however, it is just a pointer to the cert, which I'm not keen on.
It can co-exist with the PGP Cert record, so I do it mostly for completeness' sake.

Find your GPG key, export the public section & take note of the fingerprint.
gpg --list-keys
gpg --list-keys --fingerprint D4164694
gpg --export --armor D4164694 > D4164694.pub.asc
You need to make the .asc file publicly available (over http), then construct your DNS record:
seanos._pka.seanos.net.                             IN TXT  "v=pka1;fpr=B830A1A76A1A87C84C95B06C7476F7AFD4164694\;uri=http://elk.red-hat.eu/keys/D4164694.pub.asc"
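A quick sanity check that the key file is actually reachable at the URI given in the record (curl -I just fetches the headers):
curl -I http://elk.red-hat.eu/keys/D4164694.pub.asc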
That's it!

PGP Cert Records:
You need the program 'make-dns-cert', which is not packaged for Debian/Ubuntu or CentOS, but is a .c program distributed with gnupg. I have made it available here (it just requires you to do 'gcc make-dns-cert.c -o make-dns-cert').

Assuming you have a copy of your key from the previous step, you just need to type:
make-dns-cert -n seanos.seanos.net. -k D4164694.pub.asc
This will output a single (very long) line, which you can put directly into your zone file.
Note the address I used above, 'seanos.seanos.net': when referring to email addresses in DNS zones, the @ is replaced by a period. To avoid screwups with copy & paste, I'd suggest appending the above output directly into the zone file and then manually updating the serial.

Once you reload DNS, you can test everything is working using the following commands:
dig +short seanos._pka.seanos.net. TXT
dig +short seanos.seanos.net CERT
Note, if you wish your gpg installation to automatically search DNS for GPG keys, you must make the following modification to gpg.conf (typically ~/.gnupg/gpg.conf):
auto-key-locate cert pka
This will ensure it attempts to get the cert first, then falls back to pka; you can also add other methods (such as hkp://subkeys.pgp.net) for it to fall further back on.
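As a quick usage sketch - encrypting to an address that isn't already in your keyring should trigger the DNS lookup (the address here is just mine, taken from the records above):
echo test | gpg --auto-key-locate cert,pka -ea -r seanos@seanos.net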

Quick note about IPGP keys:
Since you can only have one CERT record for an email address, and IPGP is similar to the PKA record insofar as it's just a pointer, I prefer going with the easy (TXT record) PKA & the PGP Cert record.

I read various sites while setting this up, but I found http://www.gushi.org/make-dns-cert/HOWTO.html to be the most complete resource for the above.

Sunday, November 21, 2010

Hetzner, upgrading lenny to squeeze

Late last week I ended up buying a Hetzner VM (vServer VQ7), mainly out of curiosity.
I'd heard a fair few good things about Hetzner, and wanted to try them.
Fairly impressed so far: 30ms - 50ms latency (stable, though it varies from site to site), and the VMs seem to be sanely installed (no custom kernels etc).

I went for the Debian 64-bit VM, which turned out to be 5.0 (Lenny); again out of curiosity, I did the upgrade process to 6.0 (Squeeze - at that point still testing):
Modify sources.list to squeeze
apt-get update
apt-get remove linux-image-2.6-amd64
apt-get install linux-image-2.6-amd64
reboot
aptitude full-upgrade
reboot

(Note: The above must be done in that order, otherwise you will hit a udev/kernel bug).
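
For the first step, something like this does the trick, assuming a stock sources.list that only references lenny:
sed -i 's/lenny/squeeze/g' /etc/apt/sources.list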

Sure enough, it came back up, no problems.

I've spent the last couple of days moving all my stats and monitoring over to the Hetzner VM; pleased to take some load/work off my Atom servers, and to have a fourth location for DNS/backups.

Thursday, November 18, 2010

Yaketystats & Jart

I've now switched my stats collection for the third time in the past year, and I'm hoping this will be the last time.

I'm not going to cover configuration or installation here, as their wiki is excellent; I will, however, post a couple of things which caught me out, and a couple of changes I made.

Yaketystats Homepage
Five minute demo video (well worth watching)

I really liked collectd and its nice collection of plugins - there was a plugin for everything I wanted/cared about.
It's written in C, push-based, fast, and has regular polling intervals for HD stats (and who doesn't want HD these days :)

The catch is the frontends; they all seem to be severely lacking.

Various areas where CGP falls short were pointed out at my new job: mainly the inability to compare multiple graphs at a specific point in time, to combine values into one graph, and to search/combine graphs easily (regex or similar). After much searching for an alternative frontend, I finally came across Jart, which is the frontend for Yaketystats.

Yaketystats 2.1 has been around for over two years (Yaketystats itself has been around for approximately five altogether internally at USG), but it doesn't seem to have gotten the same community attention as munin or collectd, which is a shame.

Yaketystats is all bash/perl/php, and it's all quite simply put together.
That's not to say it's simple -- but it's very easy to modify and add onto.
They have put thought into the backend system: stats are collected on each host, stored as plain text, and a cron job posts them to a PHP page on the server. If it cannot contact the server, it just keeps queueing up the stats till it can.

Once the stats are submitted, a couple of jobs are run on them, eventually outputting them as RRDs.
I noticed less load from this system than collectd/rrdcached for the same number of hosts.

This is where the real magic starts: Jart is their frontend.
It can draw graphs in three different ways:
1) You draw 'all' the graphs for cpu for a host - this draws a graph containing idle, iowait, user, sys for a system (combined for all CPUs).
2) You select which stats you'd like, specifically, from any number of hosts, which it will then put in one graph
3) Regex!  You can build graphs using Perl's regex -- very, very cool.

You can modify any graph you create, on the fly -- change from lines to stacks to areas, use negative or positive for different values (e.g. received traffic is shown as negative, transmitted as positive on the graph), change colours etc - you have full control.
You can then save whichever graphs you have open as a playlist - meaning you can re-open these graphs easily, with all your settings.

So far I've been using this for two or three days, and I've converted a couple of other people to it too.


A couple of changes I made for my setups:
patch to add limits tab to graphs.

To remove the info at the end of each graph (generation time, most recent RRDs etc), modify index.php, changing:
$tmp  = array_merge($this->args,$this->defs,$this->lines,$this->events,$this->comments);
to read
$tmp  = array_merge($this->args,$this->defs,$this->lines,$this->events);

Also, we wanted to sort/break up our server names by the fourth character rather than the first; this again requires modifying index.php, changing:
var nl = me.charAt(0);
to
var nl = me.charAt(3);
Note: I found a bug where it breaks completely if you have a name with fewer characters than the position it looks for above (4 in this case).

The last change is to remove the capitalisation of the hostnames - comment out the two lines above the charAt line:
mefirst = me.replace(/(.).*/,'$1').toUpperCase();
me      = me.replace(/.(.*)/,mefirst + '$1');

Sunday, November 7, 2010

Collectd - Frontend & performance modifications

I've been using collectd for a fair while now; one of its largest drawbacks is the lack of a good frontend.

I recently found Collectd Graph Panel (CGP), which is by far the best frontend I've used for it yet (and oddly enough the handiest to set up too!)

It doesn't seem to be under overly active development, but a lot of it works, and support for more plugins is being added to the git repository (which I'd suggest using as opposed to the 0.3 release).

Another modification to collectd I made recently was moving from using the rrd plugin to the rrdcached plugin, which lets the rrdcached service handle the rrd data.
Caching seems to be done better using this service as opposed to within collectd.

By default the RRD storage location is different, so to migrate, just rsync (assuming defaults) from /var/lib/collectd/ to /var/lib/rrdcached/db/collectd/ and restart collectd (after disabling the rrd plugin & enabling the rrdcached plugin).
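
A minimal sketch of that migration, assuming the default Debian paths and stopping collectd around the copy:
/etc/init.d/collectd stop
rsync -a /var/lib/collectd/ /var/lib/rrdcached/db/collectd/
# disable the rrd plugin & enable the rrdcached plugin in /etc/collectd/collectd.conf, then:
/etc/init.d/collectd start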

Migrating to nginx from apache (with PHP + CGI support)

I was always aware of nginx, and several people had mentioned it; eventually it hit home (especially after reading some interesting posts on the LOPSA mailing list).

For this example, I'll detail my setup for PHP, CGI, the nginx status page, and restricting access by address (IPv4 & IPv6).

The following was done on Debian Squeeze; I have tested the config with nginx versions 0.7 & 0.8.

The following dependencies are required: php-fastcgi fcgiwrap spawn-fcgi php5-cgi

Although not in the Debian repository, I found php-fastcgi packaged, and a friend ended up packaging nginx 0.8 (0.7 is available in the repository).
php-fastcgi
nginx-0.8

I went with version 0.8 of nginx as it allows you to restrict access to sites using IPv6 addresses, as opposed to IPv4 only.

Firstly, the /etc/init.d/php-fastcgi wrapper must be modified - by default it listens on a port and only starts one process, whereas I want socket-based communication with nginx and multiple processes (for me, 3 is enough).


LOGDIR=/var/log/php-fastcgi
PIDFILE=/var/run/$NAME.pid
SOCKET=/var/run/$NAME.socket
CHILDREN=3
DODTIME=1
DAEMON_OPTS=" -s ${SOCKET} -u ${USER} -g ${GROUP} -f ${PHP_CGI} -P ${PIDFILE} -C ${CHILDREN}"
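
After restarting the wrapper, a quick check that the socket actually appeared where nginx will expect it (paths as configured above):
/etc/init.d/php-fastcgi restart
ls -l /var/run/php-fastcgi.socket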

The following are my nginx configs from /etc/nginx/sites-enabled/
localhost:

server {
        listen   127.0.0.1:80; ## listen for ipv4
        listen   [::1]:80;
        server_name  localhost;
        location /nginx_status {
                stub_status on;
                access_log   off;
                allow 127.0.0.1;
                allow ::1;
                deny all;
        }
}
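
Once nginx has been reloaded, the status page can be checked from the box itself (access is restricted to loopback above):
curl http://127.0.0.1/nginx_status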


stats.domain.com:
server {
        listen   80;
        listen   [::]:80;
        server_name  stats.domain.com;
        root   /var/www/stats;
        access_log  /var/log/nginx/stats.domain.com-access.log;
        location / {
                index  index.html index.php;
                allow 127.0.0.0/8;
                allow ::1;
                deny all;
        }
        location ~ \.php$ {
                gzip off;
                fastcgi_pass unix:/var/run/php-fastcgi.socket;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
        location /smokeping/ {
                index smokeping.cgi;
                gzip off;
                if ($uri ~ "/smokeping/smokeping.cgi") {
                        fastcgi_pass unix:/var/run/fcgiwrap.socket;
                }
        }
}
I symlinked /usr/share/smokeping/www to /var/www/smokeping & also symlinked /usr/share/smokeping/cgi-bin/smokeping.cgi to /usr/share/smokeping/www/.
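
For reference, those symlinks look something like this (assuming the Debian smokeping package layout):
ln -s /usr/share/smokeping/www /var/www/smokeping
ln -s /usr/share/smokeping/cgi-bin/smokeping.cgi /usr/share/smokeping/www/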

Thursday, October 21, 2010

New ION2 HTPC build using XBMC

Finally purchased & set up the new HTPC; there were a couple of delays along the way, but the setup is pretty close to what I had planned in my previous post.

Case - kustompcs.co.uk:
Antec ISK 300-150 Mini-ITX
£67

Motherboard - kustompcs.co.uk:
ASUS AT5IONT-I Atom D525
£135 + £24 Shipping

Fans - quietpc.com:
2x Noctua NF-R8
€30 + €4 Shipping

Memory:
elara.ie: Kingston 1GB PC8500 DDR3 SODIMM
€24 + €8 Shipping
misco.ie: Kingston 2GB PC8500 DDR3 SODIMM
€45 + €3 Shipping


The case is the same as the low-power always-on box I have, except I went from a 65W power brick to a 150W internal PSU with a fan (the motherboard needs 90W max, and the only power-brick option would have been a picoPSU, which I could still go for if I wished).
The PSU fan is thankfully as quiet as the Noctua fans, so there's no need to go for the picoPSU.

As previously mentioned, I found the Noctua fans excellent, so went for another pair of the NF-R8s.

The motherboard was the thing I'd been waiting for: as soon as I heard about ION2, I'd been waiting for a suitable board to come out.
The newer-generation Atom uses less power, so although I haven't taken any measurements yet, I can only assume the box uses 25W or less.

Using Fedora 13 again for XBMC (SVN) with MPlayer (SVN) for playback, which is the best player I've encountered for VDPAU playback.
Performance is excellent all round, smooth menu navigation and transitions, and no problem with playback.

I did test Debian Squeeze (with an ION1 box), and noticed massive problems.
Lirc and menu navigation were sluggish and juddery.

Outputting video & audio over HDMI from the HTPC to my amp.

The box is near-silent; faint fan noise can only be heard within about 1m of it, and as I generally sit 3-4m away, I don't hear a thing. It stays reasonably cool, typically at ~40°C.

I'll post up my configs & scripts used soon, and hopefully get some pics up by the weekend.

Saturday, September 25, 2010

Time to complete move to low-power

Finally, ASUS have released an ION2 version of their previous boards I've been so pleased with.
The AT5IONT-I: fanless, dual-core, 1.8GHz.

I reckon I'll be upgrading to this fairly soon, and finally putting down my ageing Dell Dimension C521 (current media centre).
A couple of problems with my current HTPC setup:

  • Highish power use, for what it's doing
  • Requires separate nVidia graphics card to do VDPAU
  • Both the system & GPU have a fan, the latter generating quite a bit of noise at times. 

As my HTPC uses XBMC only for displaying directories/files, and then launches mplayer (using VDPAU) for playback, there is minimal CPU use; XBMC uses OpenGL for a lot of things, and is quite heavy in that respect.
MPlayer will only use 2-5% CPU when using VDPAU, and ION2 means that VDPAU will also decode Xvid - I believe I'll be very happy with the above board.
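
For reference, a rough sketch of the sort of mplayer invocation I mean - the exact codec list depends on the build, so treat it as illustrative rather than definitive:
mplayer -vo vdpau -vc ffmpeg12vdpau,ffh264vdpau,ffodivxvdpau, /path/to/video.mkv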

I've not yet fully decided on what the new config will be, but I reckon it'll be very similar to the first mini-ITX box I built using Antec's ISK-300-65.

Monday, July 5, 2010

Pics of current setup

I decided I'd throw up some pictures of the current/new setup, since I've already documented power use etc.

[Pictures of the current and new setups]

Friday, June 25, 2010

New File Server

Finally bought it; it's now syncing over files from the previous beast.

In the end went with:
  • Lian Li PC-Q08 (Overclockers.co.uk)
  • 2TB Samsung F3 EcoGreen (Overclockers.co.uk)
  • 32GB SSD, OCZ Vertex (Overclockers.co.uk)
  • Nexus Value-430, 430W PSU (Quietpc.com)
  • Noctua NF-P14 & NF-S12B; 140 & 120mm case fans (Quietpc.com)
  • Asus AT3IONT-I Fanless & 2x2GB (Mini-itx.com)
  • Startech eSATA & USB 3.5/2.5" Dock (Amazon)
  • 2x2TB Samsung F3 EcoGreen (Dabs.ie)
Initially I went for only one 2TB disk, then got the other two later.
The only thing left to get is a PCIe SATA controller (non-RAID). I'm looking for one with 4 internal ports and one external, however I'll probably settle for 4 internal & buy a faceplate with eSATA <-> SATA.

So far very impressed overall; there was more room to work within the case than I thought there would have been - though getting the PSU in, and working with the mass of cables is hellish.
One nice thing Lian Li did though was include a bracket to attach to the PSU, which means you can aim the extractor fan of the PSU towards the fanless motherboard, and pull more hot air away from it.

The CPU seems to be running at about 48°C, and the mainboard at 32°C, but it's in a fairly small room with the large server beside it, and it has been transferring files for 19 hours straight, so I would expect some heat buildup.

I've not had a chance to take power readings yet; I'll update this & the previous thread once I get around to it; hoping for something fairly low though (so I can justify the cost to myself!)

Software:
I tested FreeNAS & Openfiler, and in the end just went for Debian Squeeze.
I couldn't get used to doing everything via web GUI.

WOL works too out of the box, so once I get all my stuff copied over, I'll be scripting shutdowns/wakeups & backups -- will post any particularly interesting scripts.
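
As a sketch of the wakeup side, the wakeonlan package does the job given the NIC's MAC address (the one below is obviously a placeholder):
wakeonlan 00:11:22:33:44:55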

Last note, noise -- extremely impressed.
The sound of the disks is the loudest thing, and I can barely hear them. I have the case in the mini-server room (a converted downstairs toilet) anyway, so unless I'm actually in there, I'll never hear a thing.