todo.vim and vdb.vim: making reading stuff easier

Sometimes I get tired of hardware work, so I screw with vim for a while. I decided to write a few syntax highlighters for two of my common use cases: storing key:value pairs and making todo lists. Right now both bundles are in my .vim repo but I’ll probably split them out into their own repositories soon.

The vdb format is just a way to easily visualize hierarchical key:value pairs.

todo.vim provides some highlighting and key macros for making and editing todo lists.

Enabling the NRF24L01+ for ACRIS

Okay, it’s finally time to make a wireless version of ACRIS. I’m damn tired of stringing up CAT-5 cable everywhere.

So, I’m exploring possible wireless communication systems. My first choice is modules based on the Nordic NRF24L01+ chip, which operates in the 2.4GHz band. You can get a variety of turn-key modules from DealExtreme, either with PCB antennas or with more powerful on-board amplifiers and external antennas, for practically nothing.

I started by buying three PCB-antenna modules so that I could characterize them. Three weeks later, they finally showed up.

Designing a Reasonable Firmware Architecture

The next step was to write a library for ACRIS. Ultimately, the goal is to convert the whole system, bootloader and all, to be fully wireless. Of course, instead of just starting by writing some tests, I decided to redo the entire ACRIS firmware architecture. Originally, I was going to work on a separate branch, but then when I saw that GitHub doesn’t count non-master commits on your dashboard status, I got angry and merged it all into master. I also wrote a shitload of READMEs to describe what everything does (or rather, should do).

I finally learned how to make Makefiles from scratch instead of just going off of random other ones I found on the internet. I swear I’m going to start using make for everything ever now. But anyways, I wrote a framework for building and programming all projects in the avr/prj directory. The new firmware architecture is described in this README.

Writing the NRF24L01+ Driver

After porting all the existing code to the new architecture, it was time to start writing a driver for the NRF24L01+. The NRF communicates with an MCU over SPI. Unfortunately, the ATmega168 only has one SPI module and I wanted to dedicate it only to the TLC5940 LED drivers since they use a non-standard way of latching the data (it’s possible to work around this and support everything on the one SPI bus but this would generate a lot of extra traffic on a bus I’m trying to keep quiet).

So, I decided to use the USART in SPI mode, as Atmel documents in their datasheet. In this mode, a third pin (XCK) is used for the clock, and TXD and RXD serve as MOSI and MISO. I was lucky enough to have left the XCK pin unused in the LED controller design, so I can rework all of my existing boards really easily.
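For reference, here’s roughly what the MSPIM setup looks like on the ATmega168, adapted from the datasheet’s example code (the function names and the clock divider are placeholders, not necessarily what ACRIS uses):

```c
// Minimal sketch of "USART in SPI master mode" (MSPIM) on the ATmega168.
#include <avr/io.h>

static void uspi_init(void)
{
    UBRR0 = 0;                                // must be zero while enabling
    DDRD |= (1 << PD4);                       // XCK (PD4) as output = master
    UCSR0C = (1 << UMSEL01) | (1 << UMSEL00); // MSPIM, SPI mode 0
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);     // enable receiver/transmitter
    UBRR0 = 4;                                // fclk = F_CPU / (2*(UBRR0+1))
}

static uint8_t uspi_transfer(uint8_t byte)
{
    while (!(UCSR0A & (1 << UDRE0))) { }      // wait for empty transmit buffer
    UDR0 = byte;                              // clock the byte out on TXD/MOSI
    while (!(UCSR0A & (1 << RXC0))) { }       // wait for the byte clocked back in
    return UDR0;                              // what came back on RXD/MISO
}
```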

At the same time, though, I need to make a device that translates commands from UART or USB to packets for the NRF. This means that I must use the USART for regular serial communication and SPI to communicate with the NRF.

So, I wrote a communication layer: a common set of commands for sending and receiving bytes and streams of data. The underlying driver can be selected as either SPI or USART depending on which file is compiled, but the API is the same.
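To give the flavor of it, the layer boils down to a tiny header like this (the names here are illustrative, not necessarily the actual ACRIS API), with a comm_spi.c and a comm_usart.c each implementing it:

```c
// comm.h -- the shared byte-transfer interface; the Makefile picks which
// implementation (hardware SPI or USART-in-SPI-mode) gets compiled in.
#ifndef COMM_H
#define COMM_H
#include <stdint.h>

void    comm_init(void);
uint8_t comm_transfer(uint8_t byte);  // send one byte, return the reply byte
void    comm_stream(const uint8_t *out, uint8_t *in, uint16_t len);

#endif
```

So the LED controllers, whose hardware SPI is tied up by the TLC5940s, build against the USART flavor, while the UART/USB bridge uses the real SPI module.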

On top of this layer, I wrote the actual driver for the NRF24L01+. Unfortunately, the code so far is pretty rigid; it doesn’t gracefully handle things like reconfiguring the chip on the fly. But it does expose functions for reading and writing registers, flushing buffers, transmitting a packet, setting up reception, etc. I still have plenty of work to do on it, such as enabling double-buffering for received payloads and setting up ACK payloads (e.g. for communicating LED controller status information). Maybe I’ll add support for Nordic’s MultiCeiver architecture (automatic handling of multiple transmitters). Depending on how I intend to use it in the future, I may also make the SPI layer interrupt/event-based.
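The register-access primitives are thin wrappers around the SPI command words from the NRF24L01+ datasheet. A sketch (the csn_* chip-select helpers are hypothetical names for whatever the board support provides):

```c
#include <stdint.h>

extern uint8_t comm_transfer(uint8_t byte); // from the comm layer above
extern void csn_low(void);                  // NRF chip-select helpers
extern void csn_high(void);                 // (hypothetical names)

#define NRF_R_REGISTER 0x00  // command words from the NRF24L01+ datasheet
#define NRF_W_REGISTER 0x20
#define NRF_FLUSH_TX   0xE1
#define NRF_FLUSH_RX   0xE2

uint8_t nrf_read_reg(uint8_t reg)
{
    csn_low();
    comm_transfer(NRF_R_REGISTER | (reg & 0x1F)); // chip replies with STATUS
    uint8_t val = comm_transfer(0xFF);            // dummy byte clocks data out
    csn_high();
    return val;
}

void nrf_write_reg(uint8_t reg, uint8_t val)
{
    csn_low();
    comm_transfer(NRF_W_REGISTER | (reg & 0x1F));
    comm_transfer(val);
    csn_high();
}
```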

After a bunch of debugging, I finally got some test firmware working. Building the project test-nrf with MODE=tx will set the project up to be a transmitter and MODE=rx will set it up as a receiver. When one of each is programmed onto two different boards, they’ll talk to each other and the receiver will spit data out via serial. The result?
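Here’s roughly how the compile-time split can work, assuming the Makefile turns MODE into a -DMODE_TX / -DMODE_RX define (those flag names, and the helper functions, are hypothetical stand-ins for what test-nrf actually does):

```c
#include <stdint.h>

extern void nrf_init(void);
extern void nrf_tx_packet(const uint8_t *buf, uint8_t len);
extern uint8_t nrf_rx_packet(uint8_t *buf);      // returns payload length
extern void serial_write(const uint8_t *buf, uint8_t len);

int main(void)
{
    nrf_init();
#if defined(MODE_TX)
    static const uint8_t packet[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
    for (;;)
        nrf_tx_packet(packet, sizeof(packet));   // blast test packets
#elif defined(MODE_RX)
    uint8_t buf[32];
    for (;;) {
        uint8_t n = nrf_rx_packet(buf);
        if (n)
            serial_write(buf, n);                // spit it out over serial
    }
#endif
    return 0;
}
```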

The range of the modules is pretty decent. I can reach all the way to the other side of my apartment without any issues. The caveat is that the two must be in line of sight: pretty much any wall will completely stop all communication. Now, this is using the 1Mbps data rate with Enhanced ShockBurst enabled. If I disable Enhanced ShockBurst and switch to 250Kbps, I might get better results.

OTOH, I also just bought a few of these modules. They use an external antenna and have onboard amplifiers which boost the gain by a whopping 30dB. So, I’m hoping that I’ll only need one of these and can then use the PCB-antenna modules in all of the lighting controllers for reception. The only possible problem with this is that the receivers may not be able to send back ACKs that the transmitter will hear. In that case, I’ll have to disable Enhanced ShockBurst and come up with some kind of isochronous transfer mode where the transmitter won’t care about ACKs. But then I won’t be able to enable auto-retransmission or reprogram the controllers wirelessly, which would really suck.

I’m waiting for some 3.3V LDO regulators to come in from Digikey so that I can rework the LED controllers. Right now, I’m using LM317s with a bunch of trimming resistors to make the supply voltage around 3V3.

Hard Drive Scroll Wheel Revisited

While I’m still thinking about the best algorithm for proximity-based scrolling, I decided to revisit my somewhat failed attempt at using a hard drive motor as a scroll wheel. Last time, I had the scrolling algorithm working, but failed to get V-USB to work reliably with the ATmega I was using. However, since I was already working with the Stellaris boards for proximity scrolling, I decided to add hard drive scrolling functionality at the same time.

The schematic is roughly the same as before, except the microcontroller has been replaced with the Stellaris Launchpad board.

I have a separate branch for this work and the code is really rough right now. I had to make some changes to the HID mouse implementation because it doesn’t support horizontal (or vertical) scrolling.
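The gist of the change is standard HID stuff: the stock mouse report descriptor only declares X/Y and buttons, so scroll axes have to be added as a Generic Desktop “Wheel” usage plus the Consumer-page “AC Pan” usage for horizontal. Something like this fragment (these are standard HID descriptor bytes; splicing them into the usblib mouse descriptor and report format is the fiddly part):

```c
#include <stdint.h>

// Report-descriptor entries that give a HID mouse two relative scroll axes.
static const uint8_t scroll_descriptor_fragment[] = {
    0x05, 0x01,        //   Usage Page (Generic Desktop)
    0x09, 0x38,        //   Usage (Wheel)            -- vertical scroll
    0x15, 0x81,        //   Logical Minimum (-127)
    0x25, 0x7F,        //   Logical Maximum (127)
    0x75, 0x08,        //   Report Size (8 bits)
    0x95, 0x01,        //   Report Count (1)
    0x81, 0x06,        //   Input (Data, Variable, Relative)
    0x05, 0x0C,        //   Usage Page (Consumer)
    0x0A, 0x38, 0x02,  //   Usage (AC Pan)           -- horizontal scroll
    0x95, 0x01,        //   Report Count (1)
    0x81, 0x06,        //   Input (Data, Variable, Relative)
};
```

The mouse report itself then grows by two bytes, one per axis.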

It works!

Scrolling with a Proximity Sensor

A long, long time ago (i.e. 6 months), a representative from Newark contacted me asking if I would like to review one of their products. I was interested in a few projects at the time that would require a proximity sensor, so I ordered this GP2D12 distance sensor from Sharp. Of course, I never actually got around to using it until… now. But finally this weekend I had a chance to try it out.

I decided to just go with something simple, with minimal parts, to try things out. I’ve had a Stellaris Launchpad lying around for a while that I hadn’t had a chance to use for anything yet, so I decided to go through the process of installing the toolchain on Linux.

After that, I started coding. The first step was to just modify an existing USB CDC example to spit out ADC samples at a pretty fast clip so that I could do algorithmic development on my computer.
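The sampling side is nothing fancy. A minimal sketch of the ADC loop using the StellarisWare driverlib calls, with clock and pin-mux setup elided (cdc_send() is a hypothetical stand-in for whatever write routine the CDC example provides):

```c
#include <stdint.h>
#include <stdbool.h>
#include "inc/hw_memmap.h"
#include "driverlib/sysctl.h"
#include "driverlib/adc.h"

extern void cdc_send(const uint8_t *buf, uint32_t len); // hypothetical

int main(void)
{
    SysCtlPeripheralEnable(SYSCTL_PERIPH_ADC0);

    // One-step sequence, software-triggered, sampling channel 0.
    ADCSequenceConfigure(ADC0_BASE, 3, ADC_TRIGGER_PROCESSOR, 0);
    ADCSequenceStepConfigure(ADC0_BASE, 3, 0,
                             ADC_CTL_CH0 | ADC_CTL_IE | ADC_CTL_END);
    ADCSequenceEnable(ADC0_BASE, 3);

    uint32_t sample;
    for (;;) {
        ADCProcessorTrigger(ADC0_BASE, 3);          // kick off a conversion
        while (!ADCIntStatus(ADC0_BASE, 3, false)) { }
        ADCIntClear(ADC0_BASE, 3);
        ADCSequenceDataGet(ADC0_BASE, 3, &sample);
        cdc_send((const uint8_t *)&sample, sizeof(sample)); // raw samples out
    }
}
```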

Right now, I’ve got a fairly good prototype of an algorithm that does the following (there’s a rough code sketch after the list):

  1. Average a bunch of samples as a background-noise baseline (i.e. without any hand over the sensor).
  2. Move into a state machine that waits until the readings are significantly higher than the baseline.
  3. Average a bunch of these samples as the “resting” hand position.
  4. Thereafter, average a bunch of samples and compare the difference between them and the resting position. Generate scroll events based on the magnitude and sign of the difference (with some deadzone in the center).
  5. If at any point the value falls back close to the background-noise baseline, assume the user has taken their hand away and go back to waiting to calculate a new resting hand position.
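Here’s a compressed sketch of that state machine in C. All the thresholds, sample counts, and scaling factors are made-up placeholders; the real numbers come from staring at the sensor data:

```c
#include <stdint.h>
#include <stdlib.h>

#define N_AVG       64     // samples per running average
#define HAND_THRESH 200    // "significantly above baseline" (ADC counts)
#define DEADZONE    30     // no scrolling this close to the resting position

enum state { CALIBRATE, WAIT_FOR_HAND, FIND_REST, TRACK };

extern uint16_t adc_read(void);           // one proximity sample
extern void scroll_event(int8_t amount);  // inject a scroll of +/- N clicks

static uint16_t average(void)
{
    uint32_t sum = 0;
    for (uint8_t i = 0; i < N_AVG; i++)
        sum += adc_read();
    return sum / N_AVG;
}

void scroll_task(void)
{
    enum state s = CALIBRATE;
    uint16_t baseline = 0, rest = 0;

    for (;;) {
        uint16_t avg = average();
        switch (s) {
        case CALIBRATE:                     // 1. background-noise baseline
            baseline = avg;
            s = WAIT_FOR_HAND;
            break;
        case WAIT_FOR_HAND:                 // 2. wait for a hand to show up
            if (avg > baseline + HAND_THRESH)
                s = FIND_REST;
            break;
        case FIND_REST:                     // 3. capture the resting position
            rest = avg;
            s = TRACK;
            break;
        case TRACK:                         // 4./5. scroll, or start over
            if (avg < baseline + HAND_THRESH / 2) {
                s = WAIT_FOR_HAND;          // hand went away; find a new rest
            } else {
                int16_t diff = (int16_t)avg - (int16_t)rest;
                if (abs(diff) > DEADZONE)
                    scroll_event((int8_t)(diff / 64)); // magnitude and sign
            }
            break;
        }
    }
}
```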

Once I get the idea finalized, I’ll draw out a state diagram and implement it directly on the Launchpad so that the device just shows up as a generic USB mouse.

In the meantime, I’ll talk about the proximity sensor. It’s more than just a photoresistor with an IR LED; it does some signal processing onboard to help eliminate noise from a lot of the usual background effects like ambient light or temperature.

I took some samples through the ADC on the Launchpad and got some interesting results.

  • (A) Baseline noise from the sensor staring at the ceiling.
  • (B) I place my hand 10cm or so from the sensor.
  • (C) I move closer to the sensor.
  • (D) Eventually I move so close that I hit the other side of the curve where the voltage decreases.
  • (E) I start moving my hand away from the sensor.
  • (F) The sensor is staring at the ceiling again.

There are a few interesting things to note. First, the periodic noise comes from the fact that I’m powering the sensor from the 5V VBUS, which carries the same noise. Unfortunately, the sensor must be powered with somewhere between 4.5V and 5.5V, so I can’t run it from the regulated 3.3V bus. When powered from a regulated 5V supply, it works beautifully. I’m really impressed at how good it is at filtering out noise from environmental effects.

One thing that this graph does not show is that the device is designed to be extremely accurate with respect to position; you could easily make a digital ruler with it. I think I might try that out too at some point. And at the cost of like two burritos, this part is a pretty good deal.

ACRISifying an IKEA FADO Light

So we bought this FADO accent light from IKEA and frankly, it’s kind of… well… boring. I thought that maybe I could breathe new life into it by converting it into an ACRIS lighting instrument. For bonus points, I tried to do it in the least destructive way possible. Here’s what I did:

The first step was to rip out the old guts. I used a screwdriver to carefully open the tabs connecting the lightbulb assembly to the base.

Then I pulled out the wires that run through the base and into the lightbulb assembly.

Next, I used a Dremel to grind down the side supports on the tabs to make a little shelf. I’m still letting this count as non-destructive because you can still assemble the original parts easily. Okay, the assembly is ready to go.

Now came the fun part. Since it’s impossible to differentiate multiple LEDs in the light (I tried — looks like crap), it’s easiest to just solder all of the LED terminals together, and bring 4 wires out through the base.

It started with this:

And ended up like this (ugly, I know):

Then, I modified the controller a bit by wiring 3 pairs of channels together to handle the current from 3 LEDs. I screwed the board into the light base and carefully put the globe on.

The result isn’t half bad:

For hipster points, I took an instavine.

Unfortunately, there are a few problems. First, I don’t have useful tools, so I had to tape the damn thing together. Not good. Second, there’s no heatsink on the LEDs, so I’m not running them at full power. Third, I need to extend the feet on the base out a bit more so that the board is protected. Fourth, if I wanted to take it apart, I’d have to unplug all the LEDs and unscrew the board before I could reach the weird little spring-loaded thing that keeps the light bulb on the base.

But overall, it actually works quite nicely. I think when I bought them here in the Bay Area, they were like $15. I might buy some more!

Creating Terminal Color Palettes with Inkscape

So, I’m moving. I’m saying goodbye to my native land of Boston and pitching my tent in San Francisco for the foreseeable future. Exciting! More on that later. The reason why I’m writing this now is that I get really, really bored on planes. So, I try to occupy myself with stupid things until I land or my battery runs out.

It started with the fact that I wanted to modify my terminal color scheme a bit. I opened up Inkscape, created a bunch of squares, one for each color, and then carefully copied my current scheme over by editing the RGB values of the squares. Then, I messed with the colors and copied all the RGB values back to my ~/.Xdefaults. This was pretty tedious. I wanted a better way to come up with new schemes.

So, I decided to write a simple script that would extract colors from an SVG file and generate lines for Xresources, TTY escape codes, etc. The result is color-control.

Using the provided example.svg, you can edit the colors to your liking with your favorite vector graphics editor and then run the script to generate configuration lines.
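The extraction step itself is simple. Here’s a toy version in C (the real color-control is more general): scan the SVG for fill:#rrggbb patterns and emit Xresources lines. It assumes the fills appear literally in that form, which holds for simple Inkscape files, and that the squares appear in color order:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.svg\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    char line[4096];
    int n = 0;
    while (fgets(line, sizeof(line), f)) {
        // Each "fill:#rrggbb" becomes one Xresources color line.
        for (char *p = line; (p = strstr(p, "fill:#")) != NULL; p += 6) {
            int ok = 1;
            for (int i = 0; i < 6; i++)
                if (!isxdigit((unsigned char)p[6 + i])) { ok = 0; break; }
            if (ok)
                printf("*color%d: #%.6s\n", n++, p + 6);
        }
    }
    fclose(f);
    return 0;
}
```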

It’s not great, but whatever, it kept my mind occupied for a little while on the plane.

My super awesome vim config

So, I finally got around to figuring out how to come up with an easily transferable vim configuration. You can find it here on GitHub.

I had one major issue that I was trying to figure out how to solve for quite some time. I wanted my ~/.vim to be a Git repository. But I also use pathogen to manage my plugin bundles, many of which themselves are Git repositories. Not all of them are, though. Some of them are Mercurial repositories and others are just straight up directories. As you know, putting repositories inside repositories is a Bad Idea. WHAT DO?!

One option is to make all of the bundles that are Git repos submodules. I think that’s a stupid idea for two reasons:

  1. I disagree with tying a VCS to the stuff it’s trying to keep track of. Not everyone wants to make their Vim plugin a Git repo, and converting all non-Git-repo plugins to Git repos also just seems like the Wrong Solution.
  2. This is not what submodules in Git were designed for. Seriously. There is no well-defined functionality in the Git user interface for removing a submodule. Using submodules for this purpose is just a bad idea.

So, perhaps I should just go with Vundle? Well that still doesn’t solve the first problem.

Of course, some stupid part of my brain told me to write my own solution. So, I created vim-pandemic, a program that lets you easily add, update, and remove bundles from lots of different types of sources. Now, all I have to do is keep a database in ~/.vim/pandemic-bundles and every time I copy my configuration over to a new computer or whatever, I can easily just run pandemic update to grab all my bundles. Yay!

I should also mention that there is a similar program written in Ruby called epidemic. I actually discovered it when I was trying to figure out how to name pandemic. It doesn’t handle anything other than Git, though.

First Results from Beat Tracker!

So I finished a preliminary version of my live beat tracker, bt. I fed in a 126 BPM song and got:

The software reads a data stream from the FPGA and prints out a cute little UTF-8 table and keeps track of the best average tempo. The FPGA handles all of the hard work: running a lot of metronomes (I can fit upwards of like 80-100 on there), and classifying beats.

That’s the good news. The not so good news is that the beat classifier, the thing that says “I think I found a beat, does it match any of the metronomes?” needs some serious improvement and tweaking before it can actually classify most songs. K-Pop works really well right now because the differential between beat energy and not-beat energy is very large — a metronome is basically built right into the track. On the other hand, it doesn’t get dubstep at all.

bt – Live Hardware Beat Tracker

This hasn’t been a great semester for my blogging… I’ll probably post a massive update on stuff that’s happened so far in 2013 in a few weeks, but for now, I thought I’d talk a bit about my final project for 6.375 — a digital design class taught in Bluespec.

So first of all, Bluespec aims to provide a high-level approach to hardware design, which is cool and all, but frankly, I could write volumes on the poor design decisions it makes. In some ways, I regret taking this class (and I hope my instructors don’t come across this post until after the class is over, because they’re responsible for the initial design of Bluespec). In fact, I think I’m going to compile my thoughts on hardware design in a separate post at some point, but let’s leave it at this: even if we ignore the fact that the language itself is incredibly poorly designed (it really looks like it was pieced together at random), Bluespec has a lot of fundamental design flaws that make it difficult, if not impossible, to predict how a given design will look as it’s implemented in hardware. There are a lot of good philosophies that Bluespec brings to the table, and I think that many of them can be implemented in VHDL, but this fundamental overarching limitation is really, really frustrating. I believe the analogy “Bluespec is to Verilog as Java is to assembly” is simply not accurate.

Anyways, whatever, the class has a final project and I like absurd final projects. I have to write (most of) this project in Bluespec. So, I came up with the idea of a piece of hardware that, in real time, estimates the tempo of a live stream of audio.

I’ve seen a few methods for solving this problem before. It turns out that two of my friends independently wrote their own software versions. But I wanted to do a hardware one because I thought I might be able to improve accuracy and convergence time by using more complicated signal processing algorithms. I’m also going to use a slightly novel architecture that takes advantage of the parallelism of hardware. The way it works is that audio data is fed into a beat classifier, whose job is simply to identify potential “beats” in the stream. Ideally, it’d be able to assign a probability to each beat it thinks it saw. At the same time, there are a bunch of metronomes that are just running freely at different tempos, e.g. one at 140 BPM, another at 120 BPM, etc. The number of metronomes is probably going to be limited only by how many I can physically fit on the FPGA. I implemented a basic form of this system purely in Python, which you can find in the sw/sim/ directory of my GitHub repository.

Each time the beat classifier thinks it sees a beat, it sends a signal to all of the metronomes, which then compute the phase error between a subdivision of the received beat and their internal sense of beat. The metronomes then adjust their phase to be closer to the current time based on how likely the beat classifier thought it saw a beat. They also send their phase error to a master metronome controller, which keeps track of this data and sends out essentially a probability distribution of potential tempos to the user.
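To pin down the math, here’s the metronome update rule in C rather than Bluespec (the variable names are mine, not from bt). Wrapping the error to the nearest tick is what handles the beat landing on a subdivision:

```c
typedef struct {
    double period;     // samples per beat at this metronome's tempo
    double next_tick;  // absolute time of the next expected tick
    double abs_error;  // last |phase error|, reported to the controller
} metronome_t;

// Called when the classifier reports a beat at time t with confidence c in
// [0, 1]: compute the phase error to the nearest tick and nudge the phase.
void metronome_beat(metronome_t *m, double t, double c)
{
    // Distance from t to the nearest tick, wrapped into [-period/2, period/2).
    double err = t - m->next_tick;
    while (err >  m->period / 2) err -= m->period;
    while (err < -m->period / 2) err += m->period;

    // A confident beat pulls the phase hard; a dubious one barely moves it.
    m->next_tick += c * err;
    m->abs_error  = err < 0 ? -err : err;
}
```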

The last thing is that the metronome controller will be able to adjust the target tempos of the metronomes, so that it can start out over a broad range of potential tempos and then zoom in on the most likely candidates to get a more accurate result.

Here’s a rough estimate of what the architecture will look like:

Okay, so I’ve got the algorithm idea, now what about the actual implementation? Instead of using the class hardware, I decided to use my own because I will probably be doing a lot more hardware testing than other groups (which will mainly just simulate their design and throw it on the FPGA at the end). I’m using the Spartan 3-AN Starter Kit, which has a bunch of useful features, such as an on-board ADC with pre-amp. The implementation architecture will look like this:

The first step was to set up the outer framework. I was hoping this was going to be easier than it actually was, but alas, apparently nobody has made an interface for the ADC and pre-amp. Well, one person did, but it was horribly written and didn’t work. So, I wrote my own controller from scratch. After startup, it waits for an instruction to initialize the pre-amp. When it receives that, it sets the pre-amp gain and then waits for conversion requests. Each time it receives one, it initiates a conversion on the ADC and returns samples for the two input channels. I’m also distributing the module separately (with interface instructions).

One slight issue with the ADC: it’s centered around 1.65V and audio is typically biased around 0V, so I had to add a little DC biaser to my board:

It simply divides VCC (3.3V) by 2 and mixes that with the two incoming channels.

So right now, the hardware framework is basically set up and ready to go.

knocker – Fast Port Knocking in Python

I use port knocking on my most sensitive systems. “Security through obscurity doesn’t work yadda yadda” aside, I think it’s a pretty effective way to keep ports from sitting open all the time.

Previously, I had written a shell script that invoked nmap to do the job for me, but I wanted to streamline it and make it faster and more generic. So, I wrote knocker, a configurable port knocking tool.

Basically, you give it the following info (either in a ~/.knocker config file or on the command line):

  • target host
  • target port to open
  • command to run after opening
  • open knock sequence
  • close knock sequence

It then checks to see if the target port is already open. If it isn’t, it knocks the open sequence. It runs whatever command you want and then knocks the close sequence.
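For reference, a knock boils down to a burst of short-lived connection attempts, one per port, where nobody cares whether the connection succeeds. A C illustration of the mechanism (knocker itself is Python, and handles the config file, the port check, and the command for you):

```c
#include <arpa/inet.h>
#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void knock(const char *host, const int *ports, int n)
{
    for (int i = 0; i < n; i++) {
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(ports[i]);
        inet_pton(AF_INET, host, &sa.sin_addr);

        int s = socket(AF_INET, SOCK_STREAM, 0);
        fcntl(s, F_SETFL, O_NONBLOCK);    // don't wait for the handshake
        connect(s, (struct sockaddr *)&sa, sizeof(sa)); // the SYN goes out
        usleep(100000);                   // give the SYN time to leave
        close(s);
    }
}

int main(void)
{
    int open_seq[] = { 7000, 8000, 9000 }; // made-up example sequence
    knock("192.0.2.1", open_seq, 3);
    return 0;
}
```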

There are plenty of ways it can be improved, but it’s working happily for me right now.

I’d like to write a knock server (right now, I use knockd) that uses one-time authentication to cycle through sequences of ports, either as a time-based or counter-based system. Then, replay attacks are next to impossible!
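One way the rolling-sequence idea could work: both ends derive the current knock sequence from an HMAC over a shared secret and a counter (or a time step), HOTP-style, and map digest bytes onto ports. This is purely a sketch of the idea, not anything knocker or knockd implements today:

```c
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stdint.h>
#include <stdio.h>

#define SEQ_LEN    4
#define PORT_BASE  10000
#define PORT_RANGE 20000

static void derive_sequence(const uint8_t *secret, int secret_len,
                            uint64_t counter, uint16_t *ports)
{
    uint8_t msg[8], digest[20];
    unsigned int digest_len;

    for (int i = 0; i < 8; i++)                // big-endian counter, HOTP-style
        msg[i] = (uint8_t)(counter >> (56 - 8 * i));

    HMAC(EVP_sha1(), secret, secret_len, msg, sizeof(msg), digest, &digest_len);

    for (int i = 0; i < SEQ_LEN; i++) {        // two digest bytes -> one port
        uint16_t v = (uint16_t)((digest[2 * i] << 8) | digest[2 * i + 1]);
        ports[i] = PORT_BASE + (v % PORT_RANGE);
    }
}

int main(void)
{
    const uint8_t secret[] = "correct horse battery staple";
    uint16_t ports[SEQ_LEN];
    derive_sequence(secret, sizeof(secret) - 1, 42, ports);
    for (int i = 0; i < SEQ_LEN; i++)
        printf("%u\n", ports[i]);
    return 0;
}
```

Keeping the two sides in sync is then the usual HOTP resynchronization problem, but a replayed sequence would be worthless by the time an attacker saw it.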