I hate backups.

Ah, backups. I hate backups. Everyone hates backups. But the only thing worse than taking backups is not taking backups.

I have a couple of scripts that I’ve written to take backups of my life and now I’ve gotten them to a point where I think other people might find them useful.

I used to use dar. dar is pretty cool because it lets you perform incremental, encrypted backups. It’s pretty not-cool because it’s hard as hell to use. In fact, I’ve never been able to get incremental backups working with encryption. I’ve gotten tired of trying.

So I decided to go back to the one true archiving utility, tar. Few people know that tar actually also supports incremental backups via the [cci]-g[/cci] (listed-incremental) option. So, for example, you can do:

[cc]$ tar -czp -g incrlist.snar -f first-backup.tgz /some/directory[/cc]

Edit files and stuff, and when you want to do the next backup, do

[cc]$ tar -czp -g incrlist.snar -f second-backup.tgz /some/directory[/cc]

You can then restore these backups by first restoring first-backup.tgz and then second-backup.tgz, using the [cci]-g[/cci] switch as before. This will even delete files that you deleted between taking first-backup and second-backup.

Okay, cool, so let’s make that into a script. I wrote a script for backing up my home directory and other important stuff on my hard drive to an external drive. Unfortunately, I was running out of space on my RAID array, so I took 640GB of the 1TB drive I had been using as a backup “buffer zone” and turned it into another member of the array. So now I write my backups directly to my external drive.

The system generates one full backup each month and then incremental backups for the rest of that month. It’s configured by sourcing a config file that sets the appropriate variables. An example config file might look like this:

[cc]backupdir=/mnt/backup
targetdir=/home/me
prefix=home
excludes="--exclude=.VirtualBox --exclude=*.pkg.tar*"[/cc]

Line 1 specifies where to store the backup (and where to look for an existing full backup from the same month to diff against). Line 2 says what to back up. Line 3 specifies a prefix name to apply to the archive. Finally, line 4 offers the ability to exclude files. Config files can use the variable [cci]$scriptdir[/cci] to reference the directory that the backup script is stored in and [cci]$configdir[/cci] to reference the directory the config file is stored in.

You can even specify commands to run before and after the backup by filling in the functions [cci]precmd()[/cci] and [cci]postcmd()[/cci]:

[cc]precmd() {
echo "This is run before tar."
}

postcmd() {
echo "This is run after tar."
}[/cc]

So what about encryption? Well, since I only take backups to one specific drive, I’ve decided to opt for drive-level encryption with LUKS/dm-crypt. This is a far more robust solution that has also proven to be secure.

I’ll talk about how I back up my server at some other point.

Making Xilinx Suck Less: A VHDL Preprocessor

Oh VHDL, you make my life so wonderful. I just want to be able to enable or disable different debugging features at compile time. But no, there’s no real preprocessor support in the VHDL world.

So my solution was to write a wrapper script that tricks GCC into running its preprocessor over each VHDL file and then generates a new VHDL file from the output. It takes two files as input: the [cci]prj[/cci] file you would use for XST and a [cci]config[/cci] file that has a bunch of key=value pairs that get passed as defines to the GCC preprocessor.

For each file, it then invokes the following magic:

[cc lang="bash"]gcc -Dwhatever=whatever -traditional-cpp -E -x c -P -C file.vhd[/cc]

This causes GCC to treat the VHDL file as a generic C file and process only the standard preprocessor directives like [cci lang="c"]#if[/cci].

It then saves the result to a new file. So if you were processing [cci]file.vhd[/cci], it would generate (by default) [cci]file.gen.vhd[/cci]. It stores these files in the same directory as their pre-preprocessed versions.

Finally it creates a new [cci]prj[/cci] file that you should pass into XST.
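
Conceptually, the wrapper is just a loop over the prj file. Here’s a minimal Python sketch of that idea. It is not the actual vhd-preproc.py: it assumes the prj file names one VHDL source per line (with the file path as the last token), that the config file holds one key=value pair per line, and the [cci].gen.prj[/cci] output name is just a placeholder.

[cc lang="python"]
# Rough sketch of a GCC-based VHDL "preprocessor" wrapper -- not the real
# vhd-preproc.py, just an illustration of the idea.
import os
import subprocess
import sys

def read_defines(config_path):
    """Turn key=value lines into -Dkey=value flags for the preprocessor."""
    defines = []
    with open(config_path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#'):
                defines.append('-D' + line)
    return defines

def preprocess(prj_path, config_path):
    defines = read_defines(config_path)
    new_prj_lines = []
    with open(prj_path) as f:
        for line in f:
            line = line.rstrip('\n')
            if not line.strip():
                continue
            vhd = line.split()[-1].strip('"')            # assumed prj layout
            gen = os.path.splitext(vhd)[0] + '.gen.vhd'
            # Run only the C preprocessor over the VHDL source.
            out = subprocess.run(
                ['gcc'] + defines +
                ['-traditional-cpp', '-E', '-x', 'c', '-P', '-C', vhd],
                check=True, capture_output=True, text=True).stdout
            with open(gen, 'w') as g:
                g.write(out)
            new_prj_lines.append(line.replace(vhd, gen))
    new_prj = os.path.splitext(prj_path)[0] + '.gen.prj'  # placeholder name
    with open(new_prj, 'w') as f:
        f.write('\n'.join(new_prj_lines) + '\n')
    return new_prj

if __name__ == '__main__':
    # The real script takes -p/-c flags; this sketch just uses positional args.
    print(preprocess(sys.argv[1], sys.argv[2]))
[/cc]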

So, it’s super easy to use. All you need to do is run [cci lang="bash"]vhd-preproc.py -p /path/to/project.prj -c /path/to/config[/cci] and all your files will be preprocessed.

ACRIS 2.0 Firmware

So close and yet so far… Long story short: after assembling everything and fixing a whole host of firmware problems, ACRIS boards are about 0.002″ away from being done. Yup, one of the hole sizes is wrong, but other than that, the boards are perfect!

Buuuuut ignoring that, I’m very happy with the results. Take a look:

So what you have there is, among other things, 12-bit resolution! That’s right; the new firmware finally supports the full 4096 levels of brightness of the LED drivers instead of 256.

The new firmware supports a couple of commands: low res setting (8-bit like before) and high res setting (12-bit). For each of these, there is a subcommand to set either all LEDs on the board to be the same color or each LED separately.

The protocol is not very complicated: [cci]SYNC CMD ARG0 ARG1 … ARGn[/cci]

[cci]SYNC[/cci] is now [cci]0x55[/cci] instead of [cci]0xAA[/cci]. I made this revision backwards compatible by reusing the old [cci]SYNC[/cci] byte as the command byte. That way, as long as you send the new [cci]SYNC[/cci] byte first (which just gets ignored by the old firmware), you can still control both firmwares at the same time. This is probably entirely unnecessary, but whatever. I thought it was cute.

I describe how the commands work in detail in the firmware README.

The high res commands pack 12-bit values into a series of bytes and then those values are unpacked in the firmware. The code is not super fast, but it’s not horrible either. I wish I could do some kind of hashing thing to speed up the unpacking process, but the memory requirements there would be ginormous.
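
To make the idea concrete, here’s a toy Python sketch of one way to pack pairs of 12-bit values into three bytes. The actual byte layout lives in the firmware README; the layout and function names below are mine, purely for illustration.

[cc lang="python"]
def pack12(values):
    """Pack 12-bit values into bytes, two values per 3 bytes.

    Illustrative packing only; the real layout is documented in the
    firmware README. Odd-length input gets an implicit zero pad.
    """
    if len(values) % 2:
        values = list(values) + [0]
    out = bytearray()
    for a, b in zip(values[0::2], values[1::2]):
        a &= 0xFFF
        b &= 0xFFF
        out.append(a >> 4)                       # high 8 bits of a
        out.append(((a & 0xF) << 4) | (b >> 8))  # low nibble of a, high nibble of b
        out.append(b & 0xFF)                     # low 8 bits of b
    return bytes(out)

def unpack12(data):
    """Inverse of pack12: recover 12-bit values from packed bytes."""
    values = []
    for i in range(0, len(data) - 2, 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        values.append((b0 << 4) | (b1 >> 4))
        values.append(((b1 & 0xF) << 8) | b2)
    return values

# Round-trip check: two 12-bit brightness values <-> 3 packed bytes.
assert unpack12(pack12([4095, 2048])) == [4095, 2048]
[/cc]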

The new command system isn’t the only thing I added. One common failure mode in the past was that the voltage would sag, the micro would stop running, and then the LED drivers would go to full brightness. The problem was that when you plugged the board back in, they would retain that state and it would happen all over again. I thought the fix was to just add a pull-up to the blank pin, which is normally controlled only by the micro. This would keep the pin from floating while the micro was browning out and would therefore shut the drivers off. However, after I tested this fix out (BTW, I forgot to solder one of the resistor pads down, so I was very confused for a while about why it wasn’t pulling up at all), the failure changed to the micro continually resetting, followed by a brief draw of high current and the voltage sagging.

The problem turned out to be that I completely forgot to clear out the LED driver shift registers BEFORE starting them… just a totally stupid error on my part that’s been since fixed.

After I finished writing this new firmware, I used a little preprocessor magic to handle the hardware differences between the two board revisions. So, building with [cci]BRDREV=1[/cci] makes the project for the first revision and [cci]BRDREV=2[/cci] makes it for the second. This allows me to have unified firmware.

I want to do a few more things with the firmware. For example, I want to add some commands that would allow me to read the status of a board back over the bus. The hardware is all there to do this now, and it would be good to know when a board is overheating. I could also add firmware-side brightness limiting when overheating occurs. I also need to finish the bootloader at some point — it still lacks a mechanism for verification.

After getting everything running, I was finally able to answer the question that had been bothering me the most: would my board be able to sustain 5.4A, the maximum current for all channels?

For this test, I hooked up a separate 5V supply for the logic and started the LED power voltage out low. I then told the LED controllers to output full brightness and started ramping up the voltage. Small problem: the resistive loss in the wire became sizable as the current draw grew. I had to get up to around 11V at the supply to hit the correct voltage drop for the LEDs on the other side. Things got a little melty, but I sustained this for almost a minute before turning it back down, since I wasn’t heatsinking the drivers at all. The point is that the board itself could handle that kind of current… I feel like I actually designed something correctly for once. 🙂

Here is a list of the things that the new board revision fixes:

  • screw terminal option for LED outputs
  • if you don’t want to do that, then you can twist and solder the wires to the board securely
  • data direction pins on the RS485 chips have pull-ups on them to prevent multiple devices from trying to drive the bus on system startup
  • blank pin on the LED drivers has a pull-up on it to prevent the drivers from trying to drive the LEDs before the system has started sending data to them
  • power connectors have the right pin ordering
  • power planes reduce noise and can handle the full current rating of the drivers
  • error pins of the TLCs are brought back to the micro in order to read thermal errors
  • cleaner routing
  • board fits within iteadstudio’s 10cm x 10cm $35 color soldermask option

I’m planning on doing one minor revision to the board to address these problems:

  • power connector + slot is too small
  • soldermask accidentally applied over the back terminals
  • some silkscreening is illegible

I probably won’t ever order that revision unless I somehow need more LED controllers for something. Maybe an externally-funded project? Hopefully?

How to Not Suck at Transcoding Music

Sometimes I get really interested in something and can’t sleep and end up spending all night working on it. This was one of those nights.

I wrote flacsync a while ago in order to make an MP3 with suitable tags for every FLAC I own. I do this so that I can actually play my favorite music on my iPhone and also so that I can keep a lightweight copy of my music database on my laptop, which has far less space than my desktop.

The idea was to keep a database containing an MD5 hash for every FLAC I have and whenever I run the script, check that hash to see if the FLAC has changed and needs to be re-transcoded. If so, transcode it and store the new hash to the database.

I made some remarkably stupid initial design choices. I knew that I wanted to thread it in order to maximize throughput, but I had some ridiculous bottlenecks. For example, for some reason, I thought it would be a good idea to use a SQLite database to store the hashes and then have a complicated DBWorker thread that would interface with all the processing threads. Although I got this original design working, it was slow. It took maybe 30 minutes to run through all my FLACs.

I later redesigned the script to just load a Python dictionary containing all the hashes from a file into memory. Then, I could update the table freely and wouldn’t even really need to worry about locking since only one thread acted on a track.

But this was still slow for a few reasons:

  • I wasn’t ordering the list of files intelligently at all. It would make sense to try to process the most recently changed files first, wouldn’t it?
  • Python’s [cci]threading[/cci] module isn’t actually capable of running tasks on multiple processors in parallel; because of the GIL, it effectively uses at most one core.
  • Running [cci]md5sum[/cci] on an entire FLAC is slow and therefore dumb.

So, I redesigned the whole script to use the [cci]multiprocessing[/cci] module’s [cci]Pool[/cci] abstraction, where you apply a function to a list with a pool of workers and then gather the results. Now, each worker returns either an indication that the file didn’t change or a new hash for the file. The system tallies up all the new hashes at the end, updates the database, saves it to disk, and exits. Oh, and when it first starts up and finds all the FLACs in my music directory, it sorts them so that the most recently changed files come first.

Moreover, I had been lazily calling the [cci]md5sum[/cci] program on the entire FLAC, so I switched to Python’s [cci]hashlib[/cci] module and now take the MD5 of only the first 4096 bytes of the FLAC. This is pretty okay because the header information is almost always entirely contained there.
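
Put together, the whole thing is structured roughly like the sketch below. This is not the real flacsync; the JSON database format, the [cci]transcode()[/cci] stub, and all the names are placeholders, but it shows the mtime sort, the [cci]Pool[/cci] layout, and the partial hashing.

[cc lang="python"]
# Sketch of the flacsync structure described above -- not the real script.
import hashlib
import json
import multiprocessing
import os

def find_flacs(music_dir):
    """All FLACs under music_dir, most recently modified first."""
    paths = [os.path.join(root, name)
             for root, _, files in os.walk(music_dir)
             for name in files if name.endswith('.flac')]
    return sorted(paths, key=os.path.getmtime, reverse=True)

def header_md5(path, nbytes=4096):
    """Hash only the first few KB, where the FLAC header/tags usually live."""
    with open(path, 'rb') as f:
        return hashlib.md5(f.read(nbytes)).hexdigest()

def transcode(path):
    """Placeholder for the flac -> lame pipe plus tag copying."""
    pass

def check_file(job):
    """Worker: return (path, new_hash) if the file changed, else (path, None)."""
    path, old_hash = job
    new_hash = header_md5(path)
    if new_hash == old_hash:
        return path, None
    transcode(path)
    return path, new_hash

def main(music_dir, db_path):
    try:
        with open(db_path) as f:
            db = json.load(f)        # {flac_path: md5_of_header}
    except FileNotFoundError:
        db = {}
    jobs = [(p, db.get(p)) for p in find_flacs(music_dir)]
    with multiprocessing.Pool() as pool:
        results = pool.map(check_file, jobs)
    db.update({p: h for p, h in results if h is not None})
    with open(db_path, 'w') as f:
        json.dump(db, f)

if __name__ == '__main__':
    main(os.path.expanduser('~/music'), os.path.expanduser('~/.flacsync.json'))
[/cc]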

The result is that I can fly through my entire music library in like a second (okay some of that is coming from disk cache — I haven’t tried it on a cold boot yet). Transcoding on my computer (piping [cci]flac[/cci] to [cci]lame[/cci] and then copying tags over) takes about 15 seconds on average.

So: from 30 minutes just to check an already-synced database down to a few seconds. Pretty good speedup.

I guess now I should go to work.

ACRIS boards are here!

Take a look:

Overall the quality looks pretty excellent, though there are a few spots that I want to test — I think some vias may have spilled over. Also, it seems like the bottom solder mask was screwed up — the LED connector pads are supposed to go to the board edge.

I’m probably going to put in a big Digi-Key order soon. I will probably go for slightly smaller micros because the program I built only takes about 2.5k of flash. I think I’ll also buy the expensive components on Avnet instead of Digi-Key because it works out to be cheaper that way, even accounting for shipping. If I were to buy everything on Digi-Key, it would be about $246 for 10 boards’ worth of parts. That makes the total per-board cost about $30.

At larger quantities, that price goes down significantly. I think I may finally be closing in on a finished design.

ACRIS 2.0 Ordered

After a lot of re-routing and a bit of resizing, I’ve finished version 2.0 of the ACRIS boards. Hopefully this version will feature things like no more goddamn reworks. I think I’ve fixed everything.

I’ve added a few new features. For example, I brought all the XERR pins of the LED controllers back to the microcontroller so that the micro can perform analysis to see if there are any thermal errors from the chips.

The LED controllers have enough room around them such that it’s possible to put cute little DIP heatsinks on them. Screw terminal banks can be used, or you can just use the provided pads, which extend to the edge of the board, and solder to them.

I’ve also started a new version of the firmware that features improved commands and support for setting LED values with full 12-bit precision.

ACRIS Boards Rev 2.0 Routed!

I’ve finally redesigned the ACRIS boards… It’s only been like half a year. This new design has a ton of fixes, such as adding a pull-up to the blank signal on the LED drivers to prevent the LEDs from flashing when the power supply starts up. The power subsystem is roughly the same, though I removed the jumper to bypass the voltage regulator (it’s easy to just jumper the regulator instead). All of the ports are on one side, as are the status LEDs.

And, most importantly, there’s now a set of screw terminals on the other side for the LED outputs. This makes wiring LEDs up a lot simpler. I’m also planning to redo the footprint a bit so that they can also be easily used as board edge connectors.

My plan is to make another Myro order fairly soon. Now that my current LED drivers are all up and running in our living room, I want quite a few more for new projects!

ubbcom gets an upgrade

So a while back, I made a USB-to-serial converter board using FTDI’s most beloved chip: the FT232RL. This is the same chip that Sparkfun uses in their breakout boards. I just didn’t want to pay Sparkfun prices for something I could etch myself.

Well, I’ve given ubbcom a little upgrade.

First, I added the REN and RTS pins. The former is good for hooking the board up to an RS-485 level converter chip, since it can control the tri-state direction. The latter I use along with the other handshake signals to bitbang-program the ATtinys for bcard.

Additionally, I decided to actually bite the bullet and buy USB connectors. PCB approximations of USB connectors just don’t work. Really. They’re useless. So, I bought some connectors off of Digi-Key and made a footprint. I managed to actually orient the footprint the wrong way around the first time I tried this. That’s like the second time I’ve done this to USB connectors now. Nothing like epoxy to save the day there:

I chose to go with USB A connectors because I don’t trust mini connectors, because they’re cheaper than micro connectors, and because I have USB extension cables, which work great. But yeah, now that I have a couple of these guys, it’s easy to communicate with chips, bitbang-program them, etc.

How I learned to stop worrying and love crazy, stupid kludges

So, I’m in some of the final development phases of bcard. I’m working on developing an optimal touch sensor. As you can see, I’ve tried a few:

Total attempts so far: 12. That’s right, I’m on the 12th revision.

One of the things I’ve been struggling with involves power consumption. I don’t have much power to work with so I want the micro to use as little power as possible when not running the LEDs.

So I came up with a semi-batshit-insane idea. If I could somehow trigger a level change on the INT0 pin, I could design the firmware such that I shut basically the entire micro off until that level change fires an external interrupt and wakes the micro back up. This would reduce the power usage of the micro to like nothing.

So my approach would be a kind of hybrid between resistive and capacitive sensing:

  1. Enable pin interrupt, set INT0 pin to be floating. Then power down most of the micro’s functionality.
  2. When the interrupt fires, the micro will wake up and start running the capacitive sense system.
  3. If the capacitive sense system doesn’t see a swipe, then go back to sleep.

This plan at first seems kind of stupid because in general you should never just try to detect something useful from a floating pin. They’re extremely volatile. But I figure that by combining resistive and capacitive touch sensing, I will get the power savings of the former and the reliability of the latter.

So I tried implementing this idea in a few ways. First of all, I built a new board like this:

So now there are three pads and a “ground guard” around the whole thing (this isn’t really necessary). I’ll call the big pads “left” and “right” and the center strip pad “middle.” I’ve built an even newer board since then, but it’s easier to explain what I’m doing with this older one, so I’m going to stick with that (they’re functionally identical; the newer one just looks nicer).

There are basically two options. When I’m in this resistive sensing mode, I can either drive left and right high, shut the micro off, and tie middle to INT0 so that it detects a float. Or I could do the reverse: assume that the user will always start his finger on left, tie that to INT0 as an input, and tie middle high.

I chose the former so that I can do a bit of analog magic with the floating pin. I want to be able to add a pull-down resistor of some value such that it prevents spurious interrupts, but still is very sensitive to the press of a finger. The pull-down resistor kind of makes the floating pin more likely to detect “0”, which means that you need a bit more of a “push” to make it detect a “1”. Tuning this is a matter of adjusting that resistor. So, the layout went something like this:

So, I wired it all up like the above diagram (I started out with no resistor on the floating pin) and then had a small test program that basically went as follows:

[cc lang="c"]
#include <avr/io.h>
#include <avr/interrupt.h>

// uart_init() and uart_tx() come from my own UART helper code (not shown)

int main(void) {
    // initialize UART
    uart_init();

    // LED output
    DDRD |= _BV(PD3);

    // start resistive component
    // enable INT0 pin
    EIMSK = _BV(INT0);
    EICRA = _BV(ISC01);
    sei();

    while (1);

    return 0;
}

ISR(INT0_vect) {
    uart_tx('*');
    PIND |= _BV(PD3); // toggle LED (writing 1 to PINx toggles the pin)
}
[/cc]

All it does is enable the INT0 pin interrupt vector and then spin in circles. When the interrupt fires, it sends a character over serial.

Results of testing? Pretty dismal, actually. The interrupt barely fired… It was more likely to fire when I pressed with a large surface area, but my goal was to make the resistive pads as small as possible and solder mask the rest.

But I had one more trick up my sleeve. If I couldn’t get a reliable level change with just a single press, what if I could simulate a whole bunch of presses and hope like hell that one hit? Enter my next idea.

The left and right pads are just being driven high right now. But what if we could instead make them oscillate between Vcc and ground? Then, since the pad is slamming between the two voltage rails, it’s more likely that, when we’re pressing it, there will be enough chatter on the INT0 input pin to kick the interrupt awake.

But this uses more power. After all, I now have to get the micro to continually toggle the pin, right?

Welllllll yes and no. There is going to be more power usage because the pin has to switch. But I can still keep most of the micro off by using the timer to generate a waveform.

At this point, I had to look at the capabilities of the chip I’m using. I’ve been testing with an ATmega168 because it’s easy to debug when you have stuff like serial communication. However, as I’ve mentioned before, the final target will be the ATtiny10, which is basically the cheapest class of processor that Atmel makes. This chip has a single 16-bit timer that can drive two output pins. Perfect. One can be used to PWM the LEDs and the other can be used to drive the floating-pin input. Moreover, given that the timer is 16-bit, even if the system clock is kind of fast, I can make the pin oscillate slowly to prevent too much switching loss.

So, I used the 16-bit timer on the ATmega to emulate what I would do on the ATtiny. I wrote a test program that looked like this:
[cc lang="c"]
#include <avr/io.h>
#include <avr/interrupt.h>

// uart_init() and uart_tx() come from my own UART helper code (not shown)

#define FLTPORT PORTD
#define FLTDDR DDRD
#define FLT PD2

#define FLTDRVPORT PORTB
#define FLTDRVPIN PINB
#define FLTDRVDDR DDRB
#define FLTDRV PB1

void enable_float_driver(void) {
    // Set up floating pins
    FLTDDR &= ~_BV(FLT); // float sense input
    FLTPORT &= ~_BV(FLT); // don't you dare pull that shit up
    FLTDRVDDR |= _BV(FLTDRV);

    // toggle OC1A (the float driver pin) off the 16-bit timer:
    // fast PWM with TOP = OCR1A, /1024 prescaler
    OCR1A = 0x0800;
    TCCR1A = _BV(COM1A0) | _BV(WGM11) | _BV(WGM10);
    TCCR1B = _BV(WGM13) | _BV(WGM12) | _BV(CS12) | _BV(CS10);

    // interrupt mask
    // enable INT0 pin
    EIMSK = _BV(INT0);
    EICRA = _BV(ISC01);

    return;
}

void disable_float_driver(void) {
    // interrupt mask
    EIMSK = 0;
    EICRA = 0;

    // shut off float driver
    TCCR1A = 0;
    TCCR1B = 0;
    FLTDRVDDR &= ~_BV(FLTDRV);

    return;
}

int main(void) {
    // initialize UART
    uart_init();

    // LED output
    DDRD |= _BV(PD3);

    enable_float_driver();
    sei(); // enable global interrupts

    while (1);

    return 0;
}

ISR(INT0_vect) {
    disable_float_driver();

    uart_tx('*');
    PIND |= _BV(PD3); // toggle LED

    enable_float_driver();
}
[/cc]

The results were almost too good to be true. The interrupt fired unbelievably reliably and there were relatively few spurious interrupts, too. I managed to get it even more stable by adding a 1Mohm pull-down resistor to the floating sensor pin.

So now the final step was to integrate the resistive phase with the capacitive phase, which I was developing earlier. The capacitive phase is basically a state machine. When the interrupt fires and kicks the capacitive phase into gear, the algorithm checks whether the user is actually pressing the pad by measuring the time constant. If so, it enters the state machine, which expects the following sequence of events to occur:

  1. User presses left pad
  2. User presses both left and right pads
  3. User presses right pad

In order for there to be a swipe, the user has to be pressing both pads at some point. I’ve designed the sensor with enough overlap between the two that this is guaranteed to happen (unless I just have super fat fingers, which is not unlikely given how shitty the emails I send from my phone are).

If at any point this sequence is violated, the micro drops out of the capacitive sensing phase, fires up the floating-pin driver, and re-enables the interrupt pin.

And that’s all there is to it. The ATmega implementation works quite nicely. So it’s time to start developing for the ATtiny, which should be more fun. I (unfortunately) have a lot of experience with microcontroller assembly, so I’m hoping it will go somewhat smoothly.

After I finish the implementation, I’m going to do as much power optimization and tweaking as I can. I’m basically pushing this processor to the limit of its capabilities, which is cool because I’ve never had to do something like that before. I’ve definitely learned a lot.

Here’s a preview of the current sensor design:

It’s double sided this time! The islands in the middle are the floating pin inputs. A little more tweaking to this new design and I think I can finally be happy. But I’ll get to how the double-sided business card is going to work out in another blog post.

Fixing a Seiko watch not resetting to 12

I was using my chronograph on my Seiko watch that I’ve had for a few years (7T92-0FX0) and noticed that when I reset it, the seconds hand was returning to the 57 second mark and not to 0. I Googled for a while and found this guide for another watch. This method didn’t work, so I tried a few other things.

Turns out the correct thing to do was to pull the stem out to the second click (for setting the time) and then hold the top button for about 5 seconds. This causes one of the dials to spin around fully. But I was confused, because that just put the dial back at the 57 second mark. Then I tried hitting the bottom button, and it advanced. So, after two more presses, it was back at 0, and when I pushed the stem back in, the chronograph was resetting correctly again.

Pressing and holding the top button selects which dial you want to reset. So if you press and hold the top button twice, it will make a different dial spin around, and then you can adjust that dial by hitting the bottom button.

Magical.