Linux apcupsd and Old APC Serial to USB

I recently acquired an old APC BackUPS 500 that has a ‘dumb’ serial connection. It is old enough that it doesn’t even support the ‘smart’ serial protocol, which gives much better output, so it doesn’t communicate very much with the computer at all. However, it is useful enough (I hope) to shut my computer down during an outage.

One of the first things I had to do was recompile my kernel with the ‘usbserial’ and ‘pl2303’ modules. The usbserial module provides the serial-over-USB framework, and the pl2303 module is the driver for the model of serial-to-USB cable that I bought.
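
If the drivers are built as loadable modules rather than compiled into the kernel, they can be loaded by hand without a reboot. A quick sketch (only the module names come from above):

# load the USB-serial core and the Prolific PL2303 driver
modprobe usbserial
modprobe pl2303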

After loading those modules, I found that I could not communicate with the UPS reliably, because there was no way to know which ‘/dev/ttyUSBx’ device it would appear as on each boot. Thermionix on GitHub posted a great and simple fix for this: a udev rule that maps this specific model of serial cable to the same device node every time it is plugged in. Here are the instructions:

# use lsusb to find the details of the serial adapter to create a udev rule

~$ lsusb | grep Serial
Bus 003 Device 002: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port

# based on the output from lsusb add a rule to /etc/udev/rules.d/local.rules

~$ cat /etc/udev/rules.d/local.rules
#Bus 004 Device 002: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port"
ENV{ID_BUS}=="usb", ENV{ID_VENDOR_ID}=="067b", ENV{ID_MODEL_ID}=="2303", SYMLINK+="apc-usb"

##########

# set /etc/apcupsd.conf to use the new device
~$ head /etc/apcupsd.conf
UPSCABLE smart
UPSTYPE apcsmart
DEVICE /dev/apc-usb

##########

# After a reboot check the status of the UPS using
~$ sudo apcaccess status

Source
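
One closing note that is not part of Thermionix’s instructions: apcupsd itself has to be running for any of this to matter. On an OpenRC system that should just be the usual routine (my assumption about the service setup):

/etc/init.d/apcupsd start
# have it come back up on every boot
rc-update add apcupsd default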

Gentoo Upgrading to Netatalk 3.x (AFP & Time Machine)

I currently run a Time Machine server on my custom-built NAS, which runs ZFS. This box hosts an AFP share for my Time Machine backups over the network.

Recently I updated my Gentoo packages and received Netatalk 3.x as an upgrade from Netatalk 2.x. This broke my shares, because shares are now configured in a new file, /etc/afp.conf.

After upgrading, you must edit /etc/afp.conf to add your shares, which is self-explanatory. After setting up my Time Machine share again, my /etc/afp.conf looks like this:

cleteNAS netatalk # cat /etc/afp.conf
;
; Netatalk 3.x configuration file
;

[Global]
; Global server settings

; [Homes]
; basedir regex = /xxxx

; [My AFP Volume]
; path = /path/to/volume
[TimeMachine]
path = /rpool/cleteTimeMachine

This is correct in and of itself, and you can access your shares with this configuration. However, Time Machine will give you the following error (see the Console application on your Mac for more details):

11/10/12 7:58:04.294 AM com.apple.backupd[10232]: Destination /Volumes/TimeMachine does not support TM Lock Stealing

If you Google the issue, you will see that you should edit /etc/netatalk/AppleVolumes.default. However this is not where the setting resides anymore. An analysis of the Netatalk documentation on setting up shares reveals an extremely simple solution:

 Mac OS X 10.5 (Leopard) added support for Time Machine backups over AFP. Two new functions ensure that backups are written to spinning disk, not just in the server’s cache. Different host operating systems honour this cache flushing differently. To make a volume a Time Machine target use the volume option “time machine = yes”.

Source: Netatalk Documentation

Modify your /etc/afp.conf to look like this:

cleteNAS netatalk # cat /etc/afp.conf
;
; Netatalk 3.x configuration file
;

[Global]
; Global server settings

; [Homes]
; basedir regex = /xxxx

; [My AFP Volume]
; path = /path/to/volume
[TimeMachine]
path = /rpool/cleteTimeMachine
time machine = yes

Restart /etc/init.d/avahi-daemon and you are good to go!
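
On Gentoo/OpenRC that amounts to the following (restarting netatalk as well is my own habit, in case afpd was already running with the old configuration):

/etc/init.d/avahi-daemon restart
# not strictly required by the step above, but it doesn't hurt:
/etc/init.d/netatalk restart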

iPhone GPS Information and Accelerometer with Sharing

Recently, I’ve been wanting to learn more about mobile development. I picked up an Objective-C book targeted at the Mac (iOS development is very similar), but the transition from Java and C++ to Objective-C’s methodologies was difficult for me. For a while, I put the book down because it wasn’t getting me anywhere. It did not do a good job of explaining MVC to someone who had never used it before, nor the delegate pattern that is so common in the Cocoa and Cocoa Touch frameworks; I simply did not understand how it worked.

My interest was piqued again when I saw Stanford University’s iTunes U series on iPhone Application Development. I decided to give the first two classes a whirl, figuring they might be a good introduction to the way Objective-C and the Cocoa frameworks work. It turns out I was right, and they were a great introduction. I didn’t even finish the series; I just jumped straight into creating some simple applications.

The first application I created is a GPS sensor information app with a minimal feature list. It lets you view a map and has two other tabs: one shows GPS information along with your current address, and the other shows accelerometer data. Each of those two tabs has buttons so that you can easily e-mail or message the information to someone.

This application turned out to be a good starting point for me with Cocoa Touch and I am now working on two more applications that follow along the GPS path. I will be creating a trip tracking application soon as well as a “live” online trip tracking application. Look out for these in the next six months, as they might actually be useful. ;)

Check out my GPS & Sensor Info app on the App Store.

Munin Monitoring – Temperature

I’ve written yet another Munin monitoring plugin recently. I am a little bit obsessed with checking the weather, so this latest plugin allows me to see exactly what the temperature is both outside and inside at any given point in time, as well as view graphs for the past week, month, and year. I haven’t figured out how to make Munin store more than 1 year of data (if you know, please post a comment).

Daily temperature:

Monthly temperature:

Yearly temperature:

I wrote this plugin in conjunction with an Arduino sketch, an Arduino, an Ethernet shield, and two DS18B20 Dallas OneWire sensors. The plugin sends a request to my Arduino’s URL to fetch the sensor data once per polling period (5 minutes by default). The Arduino responds with data like this:

Indoor.value 70.47
Outdoor.value 37.06

When running the script for testing, you should see:

cleteNAS ~ # munin-run arduinotemp
multigraph temperature
Indoor.value 69.69
Outdoor.value 36.84

multigraph temperature.Indoor
Indoor.value 69.69
multigraph temperature.Outdoor
Outdoor.value 36.84

The plugin parses this output (removing the blank line) and creates a unified graph as well as an individual graph for each sensor. The whole setup is easy to get going once you have the parts, and it is easily adaptable. I didn’t write the code for reusability, though, so each attached file will have to be modified to fit your build.
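
For anyone curious what such a plugin boils down to, here is a stripped-down shell sketch of the same idea. The real plugin linked below is written in Perl; the URL and graph settings here are placeholders of mine, the per-sensor sub-graphs are omitted, and it assumes curl is available:

#!/bin/sh
# Minimal sketch of a Munin multigraph plugin that polls the Arduino over HTTP.
# ARDUINO_URL is a placeholder address, not the real one.
ARDUINO_URL="http://192.168.1.50/"

if [ "$1" = "config" ]; then
    echo "multigraph temperature"
    echo "graph_title Temperature"
    echo "graph_vlabel degrees F"
    echo "graph_category sensors"
    echo "Indoor.label Indoor"
    echo "Outdoor.label Outdoor"
    exit 0
fi

echo "multigraph temperature"
# The Arduino answers with "Indoor.value 70.47" and "Outdoor.value 37.06";
# drop carriage returns and blank lines and hand the values straight to Munin.
curl -s "$ARDUINO_URL" | tr -d '\r' | grep '\.value '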

If you want to use this plugin but are having issues, leave a comment below.

arduinotemp (Munin code, Perl)

TempServer.zip (Arduino code, you must have the Dallas OneWire library first)

Munin Plugins – CrashPlan Monitoring and Hard Drive Spinning State

Ever since I installed Munin, I have been noticing system statistics that I would like to monitor but that the built-in plugins do not cover. I have written two Munin plugins and I’d like to share them, as simple as they are. Munin doesn’t ask much of its plugins, and everything you need to write one can be learned in just a few minutes. A guide to writing plugins can be found on Munin’s website.

The first plugin I have written graphs the spinning state of a specified hard drive. At the time I wanted to keep track of whether or not my drives were spinning; after realizing that it is best to keep them spinning, I have since turned off drive sleep. Nevertheless, the plugin’s graph is shown below.

Source code for hddsleep_.
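
The linked file is the actual plugin; the core idea, though, fits in a few lines of shell. Here is a rough sketch of my own that checks the drive’s power mode with hdparm, assuming the wildcard naming convention hddsleep_sdX:

#!/bin/sh
# Sketch of a wildcard Munin plugin: symlink it as, e.g., hddsleep_sda.
# Note: hdparm needs root, so the plugin must run with "user root" in Munin's plugin config.
DRIVE=${0##*_}

if [ "$1" = "config" ]; then
    echo "graph_title Spin state of /dev/$DRIVE"
    echo "graph_vlabel spinning (1) or standby (0)"
    echo "graph_category disk"
    echo "state.label $DRIVE"
    exit 0
fi

# hdparm -C reports the power mode without waking a sleeping drive:
# "active/idle" while spinning, "standby" once it has spun down.
if hdparm -C "/dev/$DRIVE" | grep -q standby; then
    echo "state.value 0"
else
    echo "state.value 1"
fi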

The second and more interesting plugin monitors the status of the incoming backups to my NAS from my family’s CrashPlan clients. It produces multiple graphs: one set for each computer that is backing up to the machine Munin is monitoring. Two graphs are created for each incoming backup: remaining backup size and backup state. Two overview graphs are linked below, but they can be hard to read; clicking on them will show a page detailing each individual computer.

Source code for crashplan.

Munin – Simple Monitoring at its Finest

On Thursday, I discovered Munin, a nifty daemon for monitoring just about anything on any type of system. I had previously tried Nagios, but found it cumbersome and simply too much for my personal needs. I want a monitoring solution that doesn’t have any bells or whistles and doesn’t alert me, but is there when I want to check up on my new system. Munin is perfect for just that. Munin’s installation is simple, and its configuration script will set up a majority of the plugins automatically based on the packages it discovers on your system. Munin is also easily customizable, and its scripting interface is very easy to use.
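
For reference, on Gentoo the whole thing amounts to roughly the following; the package name and the munin-node-configure step are from memory rather than from anything above:

emerge -av net-analyzer/munin
# suggest and symlink plugins based on what is installed, then start the node
munin-node-configure --shell | sh
/etc/init.d/munin-node start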

Check out some graphs for my system below:

See my Munin installation here.

Gentoo + ZFS on Linux + RAID-Z1 = Awesome

Now that I have had a few days to form an opinion on ZFS, I will provide a more in-depth analysis, while still only scratching the surface. Before I begin, let me explain my build.

  • AMD ASROCK E350M1-USB3 Mini-ITX Motherboard (4x SATA 3 ports, all on a single controller as it seems)
  • 4GB DDR3 1333 RAM (1x4GB)
  • 3x2TB HDDs (2 new and cheap, plus 1 six-month-old drive pulled out of its external enclosure)
  • 1x120GB 2.5″ HDD manufactured in June 2007, used as ext4 root

ZFS as a whole is amazing. Copy-on-write, clones, snapshots, compression, deduplication, NFS sharing, SMB sharing (not on Linux yet), and encryption (currently Solaris closed-source only) are some of its best features, to name a few. The feature set is simply astounding, which is what makes ZFS the volume manager of choice (I say volume manager because the package contains more than just a filesystem) for anyone interested in managing the way their data is stored. Personally, I have no real need for ZFS; I have plenty of space, copies of my data in two physical locations 700 miles apart, and infrequent hard copies of my data. The real motivator for trying ZFS is pure “sport,” if you will.

Without further ado, here are my notes on the whole setup. I began with Pendor’s guide over at GitHub. Pendor wrote a Gentoo Linux overlay, made a custom LiveCD, and put together a nice and easy guide for installing ZFS. His overlay really helped me get up and running much faster than I otherwise would have been able to. I was able to skip some of the steps because I was not aiming for a ZFS root. Regardless, I thank Pendor for his excellent guide.

The first trouble I ran into was that, even after switching to the HEAD revision of the ZFSOnLinux project on GitHub, I could not compile SPL (the Solaris Porting Layer) under any kernel in the 3.0 line; some of the kernel interfaces that SPL relies on seem to have changed. After moving to 2.6.39, the compile went as smooth as warm butter. From there, it has been smooth sailing. I have not encountered any bugs at all, which is to be semi-expected since the code isn’t a rewrite of ZFS but an adaptation of it for Linux.

Creating the initial zpool is, like every other ZFS command, simple. It is a one-liner that sets up three disks in a RAID-Z1 (RAID-5) layout:

zpool create rpool raidz sdb sdc sdd

Next, create the first filesystem and share it over NFS. This one will be used for my Time Machine backups over the network:

zfs create rpool/cleteTimeMachine
zfs set sharenfs=on rpool/cleteTimeMachine

There is one small problem with the previous commands: Time Machine cannot be limited through its own user interface and will consume as much space as it is allowed to grow into. To fix that, I set a quota:

zfs set quota=500g rpool/cleteTimeMachine

Time Machine doesn’t compress its backups, so I have ZFS do that for me:

zfs set compression=gzip rpool/cleteTimeMachine

After setting up Time Machine, I created a few more filesystems: one to back up the NAS itself (the root drive isn’t part of the RAID), one to back up my family’s computers (CrashPlan), and one for my files.

zfs create rpool/cleteFiles
zfs create rpool/crashPlan
zfs create rpool/cleteNASBackup

Properties are inherited, so you can set deduplication and compression at the pool level and override them on filesystems that already handle their own deduplication and/or compression:

zfs set dedup=on rpool
zfs set compression=gzip rpool
zfs set dedup=off rpool/cleteTimeMachine
zfs set dedup=off rpool/crashPlan
zfs set compression=off rpool/crashPlan

Creating snapshots is easy, and so is recovering after destroying data:

zfs snapshot rpool/cleteTimeMachine@12345abc
rm -rf /rpool/cleteTimeMachine/*
zfs rollback rpool/cleteTimeMachine@12345abc

I really cannot stress enough how simple ZFS/ZVOL has been to use; it has been a painless experience so far. Keeping tabs on your filesystems is just as easy. I put together an alias that prints out information about the pool: its health, space usage, and compression and deduplication ratios.

cleteNAS ~ # zstatus
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                    559G  3.02T  38.6K  /rpool
rpool/cleteFiles         199G  3.02T  93.4G  /rpool/cleteFiles
rpool/cleteNASBackup    1.18G  98.8G   689M  /rpool/cleteNASBackup
rpool/cleteTimeMachine   253G   247G   252G  /rpool/cleteTimeMachine
rpool/crashPlan          103G   297G   103G  /rpool/crashPlan
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  5.44T   831G  4.63T    14%  1.01x  ONLINE  -
NAME                    PROPERTY       VALUE  SOURCE
rpool/cleteFiles        compressratio  1.00x  -
rpool/cleteNASBackup    compressratio  3.14x  -
rpool/cleteTimeMachine  compressratio  1.21x  -
rpool/crashPlan         compressratio  1.00x  -
all pools are healthy
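
The alias itself is nothing special; it is essentially a chain of the standard reporting commands, and something like the following will produce similar output (an approximation, not necessarily the exact alias):

# dataset usage, pool usage, compression ratios, and overall pool health
alias zstatus='zfs list; zpool list; zfs get compressratio rpool/cleteFiles rpool/cleteNASBackup rpool/cleteTimeMachine rpool/crashPlan; zpool status -x'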

ZFS on Linux performance is lacking. I do not use ZFS-Fuse, but the native kernel port of ZFS, which is called ZFSOnLinux. While the kernel port is much faster than ZFS-Fuse, it is not yet optimized for speed. I have not really tested read performance, but write performance comes out to roughly 8-10 MB/s with deduplication and compression on; without either of those, it soars to 20+ MB/s. One thing to note is that I was testing over 802.11n, so these numbers are probably quite inaccurate. My testing consisted of writing to the pool from two local computers and one remote computer at a time.

One other performance-related observation: deduplication and gzip compression together lower the write speed to 8-10 MB/s, versus well over 25 MB/s with both disabled, and they use 60% or so of each processor core. All of this is expected with software RAID and the low processing power of an 18W TDP processor.

NOTE: I have verified the speeds using local dd testing. These tests above are an accurate representation of performance.

I have recently turned off atimes (access times) in hopes that it will give a small bump in performance. Since this machine will be used for backups and file storage, I am not very concerned with speed. What I am concerned about is drives being able to sleep. So far, they haven’t spun down a single time, despite the fact that I have all drives set to a 5-minute spin-down. I believe this to be mostly due to the CrashPlan engine. There seems to be a bug with CrashPlan where it will keep the drives running, but I have yet to confirm this. If anyone has ideas about how to stop CrashPlan from keeping the drives up, I would greatly appreciate the help.
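
The atime change mentioned above is just a one-liner (same pool name as before):

zfs set atime=off rpool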

Edit: I am pleased to report that my write performance has improved greatly, most likely due to ZFS or SPL updates. I have been keeping ZFS on Linux and SPL on the master branch in order to get fixes as soon as they are released. The data on this box is of little importance and can be rebuilt easily, so I don’t mind the risk so much.

I ran some more tests last night and got the following results with compression=off and dedup=off. For a RAID-Z this is about the expected performance, since ZFS on Linux is still in its early stages (performance is still a low priority):

cleteNAS media # dd if=/dev/zero of=/rpool/media/output.img bs=8k count=1024k
 1048576+0 records in
 1048576+0 records out
 8589934592 bytes (8.6 GB) copied, 150.951 s, 56.9 MB/s

With compression=gzip-1 and dedup=off (note that any compression at all increases speed dramatically when writing from /dev/zero, since the zeros compress away to almost nothing):

cleteNAS media # dd if=/dev/zero of=/rpool/media/output.img bs=8k count=1024k
 1048576+0 records in
 1048576+0 records out
 8589934592 bytes (8.6 GB) copied, 37.8502 s, 227 MB/s