Plans for Home PCB Fab

One thing I really miss about Boston is MITERS. I only belonged to it for a few years, but I really enjoyed my time there. I spent a good summer perfecting DIY PCB fabbing, succeeding at both single-sided and double-sided boards with trace widths below 10 mils.

I haven’t joined a hackerspace since moving out to San Francisco, but I want to start producing PCBs at home to get moving on some projects I’ve been procrastinating on.

Here’s my rough plan for collecting the tools to fab PCBs.

Laser Printer

The process I use is toner-transfer and etching. So, I need to find a small laser printer that can print at 600 DPI. I was thinking of something cheap from Amazon, like this one from Canon.

Transfer

In the past, people have used irons to perform the actual transfer process. However, it’s hard to regulate the heat and pressure in this manner. So, recently, people have started using laminators, and that’s what we used at MITERS.

The GBC H220 is generally recommended for this purpose, although it’s hard to find. It requires some modification to pass PCBs through.

Etchant

I prefer ferric chloride to muriatic acid.

Shaker

A laboratory shaker is useful for speeding up the etching process. Without it, you have to sit and stir the tray constantly. I found a few fairly cheap ones on eBay.

Drill Press

I still work with a lot of through-hole components, so I plan to buy a mini drill press. We had this one at MITERS.

Paper

One last thing: choosing the right paper is tricky. I think we used a type of glossy paper from FedEx. I’ve seen people use glossy photo paper and magazine paper too. I plan to experiment with different types of paper.

todo.vim and vdb.vim: making reading stuff easier

Sometimes I get tired of hardware work, so I screw with vim for a while. I decided to write a few syntax highlighters for two of my common use cases: storing key:value pairs and making todo lists. Right now both bundles are in my .vim repo but I’ll probably split them out into their own repositories soon.

The vdb format is just a way to easily visualize hierarchical key:value pairs.

todo.vim provides some highlighting and key macros for making and editing todo lists.

Enabling the NRF24L01+ for ACRIS

Okay, it’s finally time for making a wireless version of ACRIS. I’m damn tired of stringing up CAT-5 cable everywhere.

So, I’m exploring possible wireless communications systems. My first choice is modules based on the Nordic NRF24L01+ chip, which operates in the 2.4GHz band. You can get a variety of turn-key modules from DealExtreme, either with PCB antennas or with on-board amplifiers and external antennas, for practically nothing.

I started by buying three PCB-antenna modules so that I could characterize them. Three weeks later, they finally showed up.

Designing a Reasonable Firmware Architecture

The next step was to write a library for ACRIS. Ultimately, the goal is to convert the whole system, bootloader and all, to be fully wireless. Of course, instead of just starting by writing some tests, I decided to redo the entire ACRIS firmware architecture. Originally, I was going to work on a separate branch, but when I saw that GitHub doesn’t count non-master commits on your dashboard status, I got angry and merged it all into master. I also wrote a shitload of READMEs to describe what everything does (or rather, should do).

I finally learned how to make Makefiles from scratch instead of just going off of random other ones I found on the internet. I swear I’m going to start using make for everything ever now. But anyways, I wrote a framework for building and programming all projects in the avr/prj directory. The new firmware architecture is described in this README.

Writing the NRF24L01+ Driver

After porting all the existing code to the new architecture, it was time to start writing a driver for the NRF24L01+. The NRF communicates with an MCU over SPI. Unfortunately, the ATmega168 only has one SPI module and I wanted to dedicate it only to the TLC5940 LED drivers since they use a non-standard way of latching the data (it’s possible to work around this and support everything on the one SPI bus but this would generate a lot of extra traffic on a bus I’m trying to keep quiet).

So, I decided to use the USART in SPI mode, as Atmel documents in their datasheet. In this mode, a third pin is used for the clock (XCK) and TXD and RXD are used for MOSI and MISO. I was lucky enough to leave the XCK pin unused in the LED controllers, so I can actually rework all of my existing LED controllers really easily.

At the same time, though, I need to make a device that translates commands from UART or USB to packets for the NRF. This means that I must use the USART for regular serial communication and SPI to communicate with the NRF.

So, I wrote a communication layer: a common set of commands for sending and receiving bytes and streams of data. The underlying driver can be selected as either SPI or USART depending on which file is compiled, but the API is the same.

On top of this layer, I wrote the actual driver for the NRF24L01+. Unfortunately, the code so far is pretty rigid. It doesn’t easily handle cases like being able to reconfigure the chip quickly. But it does expose functions for reading and writing registers, flushing buffers, transmitting a packet, setting up reception, etc. I still have plenty of work to do on it, such as enabling double-buffering for received payloads, setting up ACK payloads (e.g. for communicating LED controller status information), etc. Maybe I’ll add support for their MultiCeiver architecture (automatic handling of multiple transmitters). Depending on how I intend to use it in the future, I may also modify the SPI architecture to be interrupt/event-based.
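To give a flavor of what the driver does, here’s a Python-flavored sketch of the SPI command framing (the opcodes come from the Nordic datasheet; the FakeSPI shim and the function names are mine for illustration, not the actual ACRIS driver API):

```python
# Sketch of the NRF24L01+ SPI command framing. Opcodes are from the
# Nordic datasheet; FakeSPI is a stand-in for the real byte-exchange
# primitive in the communication layer and just records traffic.

R_REGISTER   = 0x00  # 000A AAAA: read register A
W_REGISTER   = 0x20  # 001A AAAA: write register A
R_RX_PAYLOAD = 0x61
W_TX_PAYLOAD = 0xA0
FLUSH_TX     = 0xE1
FLUSH_RX     = 0xE2
NOP          = 0xFF

class FakeSPI:
    """Records bytes shifted out; returns 0x00 for every byte shifted in."""
    def __init__(self):
        self.log = []
    def transfer(self, byte):
        self.log.append(byte)
        return 0x00

def write_register(spi, addr, value):
    spi.transfer(W_REGISTER | (addr & 0x1F))
    spi.transfer(value)

def read_register(spi, addr):
    spi.transfer(R_REGISTER | (addr & 0x1F))
    return spi.transfer(NOP)  # clock out a dummy byte to read the value

def transmit(spi, payload):
    spi.transfer(FLUSH_TX)       # drop any stale payload
    spi.transfer(W_TX_PAYLOAD)
    for b in payload:
        spi.transfer(b)
    # ...then pulse CE high for >10us to actually start the transmission

spi = FakeSPI()
write_register(spi, 0x00, 0x0A)  # CONFIG: EN_CRC | PWR_UP
transmit(spi, b"\x01\x02\x03")
```

Everything the driver does on top of this is just sequencing these commands (plus CE/IRQ handling, which doesn’t go over SPI at all).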

After a bunch of debugging, I finally got some test firmware working. Building the project test-nrf with MODE=tx will set the project up to be a transmitter and MODE=rx will set it up as a receiver. When one of each is programmed onto two different boards, they’ll talk to each other and the receiver will spit data out via serial. The result?

The range of the modules is pretty decent. I can reach all the way to the other side of my apartment without any issues. But, the caveat is that the two must be in line-of-sight. Pretty much any wall will completely stop all communication. Now, this is using the 1Mbps data rate with Enhanced ShockBurst enabled. If I disable Enhanced ShockBurst and switch to 250Kbps, I might get better results.

OTOH, I also just bought a few of these modules. They use an external antenna and have onboard amplifiers which boost the gain by a whopping 30dB. So, I’m hoping that I’ll only need one of these and then can use the PCB-antenna modules in all of the lighting controllers for reception. The only possible problem with this is that the receivers may not be able to send back ACKs that the transmitter will hear. In this case, I’ll have to disable Enhanced ShockBurst and come up with some kind of isochronous transfer mode where the transmitter won’t care about ACKs. But then I won’t be able to enable auto-retransmission and reprogram the controllers wirelessly, which would really suck.

I’m waiting for some 3.3V LDO regulators to come in from Digikey so that I can rework the LED controllers. Right now, I’m using LM317s with a bunch of trimming resistors to make the supply voltage around 3V3.

Hard Drive Scroll Wheel Revisited

While I’m still thinking about the best algorithm for making proximity-based scrolling, I decided to revisit my somewhat failed attempt at using a hard drive motor as a scroll wheel. Last time, I had the scrolling algorithm working, but failed to get V-USB to work reliably with the ATmega I was using. However, since I was already working with the Stellaris boards for proximity scrolling, I decided to add in hard drive scrolling functionality at the same time.

The schematic is roughly the same as before, except the microcontroller has been replaced with the Stellaris Launchpad board.

I have a separate branch for this work and the code is really rough right now. I had to make some changes to the HID mouse implementation because it doesn’t support horizontal (or vertical) scrolling.

It works!

Scrolling with a Proximity Sensor

A long, long time ago (i.e. 6 months), a representative from Newark contacted me asking if I would like to review one of their products. I was interested in a few projects at the time that would require a proximity sensor, so I ordered this GP2D12 distance sensor from Sharp. Of course, I never actually got around to using it until this weekend, when I finally had a chance to try it out.

I decided to just go with something simple, with minimal parts, to try things out. I’ve had a Stellaris Launchpad lying around for a while that I hadn’t had a chance to use for anything yet, so I decided to go through the process of installing the toolchain on Linux.

After that, I started coding. The first step was to just modify an existing USB CDC example to spit out ADC samples at a pretty fast clip so that I could do algorithmic development on my computer.

Right now, I’ve got a fairly good prototype for an algorithm that does the following:

  1. Average a bunch of samples as a background-noise baseline (i.e. without any hand over the sensor).
  2. Move into a state machine that waits until the readings are significantly higher than the baseline.
  3. Average a bunch of these samples as the “resting” hand position.
  4. Thereafter, average a bunch of samples and compare the difference between them and the resting position. Generate scroll events based on the magnitude and sign of the difference (with some deadzone in the center).
  5. If at any point the value falls back down close to the background-noise baseline, assume the user has taken their hand away and go back to calculating a new resting hand position.
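The steps above can be sketched as a little state machine. This is a toy Python version for playing with captured samples; the window size and thresholds are invented for illustration, and the real values will come out of experimenting with the sensor:

```python
# Toy version of the scrolling state machine. WINDOW, NEAR_BG, and
# DEADZONE are made-up constants, not the values used on the Launchpad.

WINDOW   = 8    # ADC samples averaged per update
NEAR_BG  = 50   # counts above baseline that mean "hand present"
DEADZONE = 30   # counts around the resting position with no scrolling

def scroll_events(samples):
    """Run the state machine over a list of ADC samples; return scroll deltas."""
    events, acc = [], []
    state, baseline, resting = "BASELINE", 0.0, 0.0
    for s in samples:
        acc.append(s)
        if len(acc) < WINDOW:
            continue
        avg, acc = sum(acc) / WINDOW, []
        if state == "BASELINE":            # 1. background-noise baseline
            baseline, state = avg, "WAIT"
        elif state == "WAIT":              # 2. wait for a hand to show up
            if avg > baseline + NEAR_BG:
                state = "SETTLE"
        elif state == "SETTLE":            # 3. resting hand position
            resting, state = avg, "TRACK"
        elif state == "TRACK":             # 4./5. scroll or recalibrate
            if avg < baseline + NEAR_BG:
                state = "WAIT"             # hand removed, wait for a new one
            else:
                diff = avg - resting
                if abs(diff) > DEADZONE:
                    events.append(int(diff / DEADZONE))
    return events
```

For example, `scroll_events([100]*8 + [400]*16 + [470]*8 + [320]*8 + [110]*8)` walks through all five states and returns `[2, -2]`.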

Once I finalize the idea, I’ll draw out a state diagram and implement it directly on the Launchpad so that the device just shows up as a generic USB mouse.

In the meantime, I’ll talk about the proximity sensor. It’s more than just a photoresistor with an IR LED; it does some signal processing onboard to help eliminate noise from a lot of the usual background effects like ambient light or temperature.

I took some samples through the ADC on the Launchpad and got some interesting results.

  • (A) Baseline noise from the sensor staring at the ceiling.
  • (B) I placed my hand 10cm or so from the sensor.
  • (C) I move closer to the sensor.
  • (D) Eventually I move so close that I hit the other side of the curve where the voltage decreases.
  • (E) I start moving my hand away from the sensor.
  • (F) The sensor is staring at the ceiling again.

There are a few interesting things to note. First, the periodic noise is coming from the fact that I’m powering the sensor with the 5V VBUS, which has the same noise. Unfortunately, the sensor must be powered with somewhere between 4.5V and 5.5V, so I can’t power it from the regulated 3.3V bus. When powered with a regulated 5V bus, it works beautifully. I’m really impressed at how good it is at filtering out noise from environmental effects. One thing that this graph does not show is that the device is designed to be extremely accurate with respect to position. That is, you can easily make a digital ruler with it. I think I might try that out too at some point. And at the cost of like two burritos, this part is a pretty good deal.

ACRISifying an IKEA FADO Light

So we bought this FADO accent light from IKEA and frankly, it’s kind of… well… boring. I thought that maybe I could breathe new life into it by converting it into an ACRIS lighting instrument. For bonus points, I tried to do it in the least destructive way possible. Here’s what I did:

The first step was to rip out the old guts. I used a screwdriver to carefully open the tabs connecting the lightbulb assembly to the base.

Then I pulled out the wires that run through the base and into the lightbulb assembly.

Next, I used a dremel to grind down the side supports on the tabs to make a little shelf. I’m still letting this count as non-destructive because you can still assemble the original parts easily. Okay the assembly is ready to go.

Now came the fun part. Since it’s impossible to differentiate multiple LEDs in the light (I tried — looks like crap), it’s easiest to just solder all of the LED terminals together, and bring 4 wires out through the base.

It started with this:

And ended up like this (ugly, I know):

Then, I modified the controller a bit by wiring 3 pairs of channels together to handle the current from 3 LEDs. I screwed the board into the light base and carefully put the globe on.

The result isn’t half bad:

For hipster points, I took an instavine.

Unfortunately, there are a few problems. First, I don’t have useful tools so I had to tape the damn thing together. Not good. Second, there’s no heatsink on the LEDs, so I’m not running them at full power. Third, I need to extend the feet on the base out a bit more so that the board is protected. Fourth, if I wanted to take it apart, I’d have to unplug all the LEDs and unscrew the board before I could reach the weird little springloaded thing that keeps the light bulb on the base.

But overall, it actually works quite nicely. I think when I bought them here in the Bay Area, they were like $15. I might buy some more!

Creating Terminal Color Palettes with Inkscape

So, I’m moving. I’m saying goodbye to my native land of Boston and pitching my tent in San Francisco for the foreseeable future. Exciting! More on that later. The reason I’m writing this now is that I get really, really bored on planes. So, I try to occupy myself with stupid things until I land or my battery runs out.

It started with the fact that I wanted to modify my terminal color scheme a bit. I opened up Inkscape, created a bunch of squares, one for each color, and then carefully copied my current scheme over by editing the RGB values of the squares. Then, I messed with the colors and copied all the RGB values back to my ~/.Xdefaults. This was pretty tedious. I wanted a better way to come up with new schemes.

So, I decided to write a simple script that would extract colors from an SVG file and generate lines for Xresources, TTY escape codes, etc. The result is color-control.

Using the provided example.svg, you can edit the colors to your liking with your favorite vector graphics editor and then run extract-colors.py to generate configuration lines.
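The core of a script like this is tiny. What follows isn’t the actual extract-colors.py, just a minimal regex-based sketch of the idea (a real version would parse the XML properly so the order and identity of the squares is reliable):

```python
# Minimal sketch of pulling fill colors out of an SVG and emitting
# Xresources lines. Not the real extract-colors.py; a proper version
# would use an XML parser instead of a regex.

import re

def extract_colors(svg_text):
    """Return the six-digit hex fill colors, lowercased, in document order."""
    return [c.lower() for c in
            re.findall(r"fill:\s*#([0-9a-fA-F]{6})", svg_text)]

def to_xresources(colors):
    """Format colors as *colorN lines for ~/.Xdefaults."""
    return ["*color%d: #%s" % (i, c) for i, c in enumerate(colors)]

svg = '<rect style="fill:#1A2B3C"/><rect style="fill:#ffffff"/>'
print("\n".join(to_xresources(extract_colors(svg))))
```

Generating TTY escape codes from the same color list is just another formatting function on top.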

It’s not great, but whatever, it kept my mind occupied for a little while on the plane.

My super awesome vim config

So, I finally got around to figuring out how to come up with an easily transferable vim configuration. You can find it here on GitHub.

I had one major issue that I was trying to figure out how to solve for quite some time. I wanted my ~/.vim to be a Git repository. But I also use pathogen to manage my plugin bundles, many of which themselves are Git repositories. Not all of them are, though. Some of them are Mercurial repositories and others are just straight up directories. As you know, putting repositories inside repositories is a Bad Idea. WHAT DO?!

One option is to make all of the bundles that are Git repos submodules. I think that’s a stupid idea for two reasons:

  1. I disagree with tying a VCS to the stuff it’s trying to keep track of. Not everyone wants to make their Vim plugin a Git repo, and converting all non-Git-repo plugins to Git repos also just seems like the Wrong Solution.
  2. This is not what submodules in Git were designed for. Seriously. There is no well-defined functionality in the Git user interface for removing a submodule. Using submodules for this purpose is just a bad idea.

So, perhaps I should just go with Vundle? Well that still doesn’t solve the first problem.

Of course, some stupid part of my brain told me to write my own solution. So, I created vim-pandemic, a program that lets you easily add, update, and remove bundles from lots of different types of sources. Now, all I have to do is keep a database in ~/.vim/pandemic-bundles and every time I copy my configuration over to a new computer or whatever, I can easily just run pandemic update to grab all my bundles. Yay!
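To show the general shape of the problem pandemic solves, here’s a hypothetical sketch in Python. The real ~/.vim/pandemic-bundles format isn’t reproduced here; the one-line-per-bundle "name kind url" format below is made up for illustration:

```python
# Hypothetical sketch of a pandemic-style bundle updater. The database
# format ("name kind url" per line) is invented for this example and is
# NOT the real pandemic-bundles format.

import subprocess

CLONE = {
    "git": lambda url, dest: ["git", "clone", url, dest],
    "hg":  lambda url, dest: ["hg", "clone", url, dest],
}

def parse_db(text):
    """Parse 'name kind url' lines, skipping blanks and # comments."""
    bundles = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            name, kind, url = line.split()
            bundles.append((name, kind, url))
    return bundles

def update(db_text, bundle_dir, run=subprocess.check_call):
    """Fetch every bundle in the database into bundle_dir."""
    for name, kind, url in parse_db(db_text):
        run(CLONE[kind](url, "%s/%s" % (bundle_dir, name)))
```

A real tool also has to handle "already cloned, just pull," plain-directory and tarball sources, and removal, which is exactly where this beats both submodules and a shell one-liner.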

I should also mention that there is a similar program written in Ruby called epidemic. I actually discovered this when I was trying to figure out how to name pandemic. It doesn’t do anything more than Git, though.

I’m not dead (yet)

It’s almost over. I have to get my thesis signed, take a final, and give a demo and then that’ll be it. Degree #2 will be in my hands and I can begin the process of… well I’ll tell that story later.

I’ve been able to get practically nothing done on personal projects this year. I did manage to do some work with ACRIS and lpctrl to build an interface that would let me control my lighting system with my Novation Launchpad. I used this for a performance for MIT’s annual Steer Roast. I’ll do a demo video for this at some point soon. It was pretty cool, although I ran into some latency and crashing issues.

But I don’t want to talk about that either. I want to talk about the beat tracker I built and what it taught me about Bluespec. As I had posted before, I got it working a while ago and have been tweaking it since then to make it work better. It figures out the tempo of most tracks pretty well, but there are some serious problems with the design, and they all stem from the beat classifier module, i.e. the module that reports when it thinks it detected a beat. First, it’s not very accurate: it should be using the variance of the energies to determine how much higher the most recent energy has to be than the average energy. Second, it doesn’t report how confident it is that it saw a beat. This is important because that confidence should dictate how much the metronomes adjust their own phase.
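To make the first point concrete, here’s a sketch (in Python, not the Bluespec that actually runs on the FPGA) of what a variance-aware classifier could look like; the history length and the constant k are placeholder values, not numbers from bt:

```python
# Sketch of a variance-aware beat classifier: flag a beat when the
# instant energy exceeds the recent mean by k standard deviations, and
# report how far past the threshold it got as a crude confidence.
# history=43 and k=2.0 are placeholders, not values from bt.

from collections import deque
from math import sqrt

class BeatClassifier:
    def __init__(self, history=43, k=2.0):
        self.energies = deque(maxlen=history)  # rolling energy window
        self.k = k

    def feed(self, energy):
        """Return a confidence in [0, 1]; 0.0 means 'no beat here'."""
        hist = list(self.energies)
        self.energies.append(energy)
        if len(hist) < 8:
            return 0.0                        # still warming up
        mean = sum(hist) / len(hist)
        std = sqrt(sum((e - mean) ** 2 for e in hist) / len(hist))
        threshold = mean + self.k * std
        if std == 0.0 or energy <= threshold:
            return 0.0
        # distance past the threshold, squashed into (0, 1]
        return min(1.0, (energy - threshold) / (self.k * std))
```

The nice property is that a track with wildly varying energy (dubstep) automatically gets a higher bar than one with a metronome built in (K-Pop).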

I’m going to post a demo video up soon.

I made this project for Arvind’s Bluespec class, 6.375. Arvind was the professor whose grad student originally came up with the idea for Bluespec. The class had a bunch of labs where you’d learn different advantages of Bluespec over traditional hardware design. Theoretically, you’d do these labs, discover how magical this new programming paradigm is, and use it forever after for all your hardware design needs.

Now, don’t get me wrong. There’s some stuff that Bluespec does right. Or at least, there’s some stuff that the Bluespec developers have thought about that I think was good for them to think about. For example, Bluespec makes it easy to build complex state machines and circular pipelines using properly crafted rules. It features some nice libraries for manipulating numbers. But it doesn’t take these far enough. I had some thoughts about developing a new language that would make it easy for developers to create their own number systems by specifying different attributes or capabilities (saturating vs. non-saturating arithmetic; unsigned vs. two’s complement vs. one’s complement vs. Gray-coded number representation; etc.). One of these days, I’ll try to expand on that underlying idea to see if it’s actually useful… I think it will be.

Bluespec is great for prototyping and simulating your ideas: it allows you to produce a lot of hardware very fast and play around with subtle configuration changes. It makes producing a gold-standard, bit-accurate model really easy. This does not, however, mean that Bluespec is good for producing an actually implementable design. Something occurred to me while I was watching the other students in the group present their project results: Bluespec adds a lot of overhead because it deceives people about how much (or how little) hardware a given portion of their design will produce.

For example, it’s pretty easy to make a FIFO in Bluespec. Do you want a FIFO? Easy, just mkFIFO(). But what does that actually produce? Well, it draws upon a 150-line Verilog file corresponding to a FIFO of depth 1 (larger FIFOs are chained together, I think), and that file produces an enormous amount of hardware. But it does make producing a nice pipeline much easier.

Most students in the class took an existing algorithm that solved some problem (like image compression, flow analysis, etc.), re-implemented it in Bluespec, and tested it on an FPGA to see if their results were actually accelerated. The interface most students used to push data onto and pull data off of the FPGA was called SCE-MI; we had used this framework a few times during the class. But it’s not really suitable for a lot of applications due to bandwidth and latency limitations.

It’s just… ugh, SO MUCH STUFF when the designs don’t have to be so large.

And it showed in the students’ results. Most students found that while they achieved correctness, they got little to no actual speed-up. I also realized that most students in the class were algorithms/comp-sci people and I was more or less the only hardware person in the group. It seemed like a lot of them drew incorrect conclusions about hardware design being inefficient and not worth it. I just wanted to keep yelling “no, it’s Bluespec, I swear!”

I was the only one who had a live demo at the presentations. I was even lucky in a sense; I ran out of time to do a lot of the more complicated things that I wanted to do because it just took me forever to figure out how the hell Bluespec does things. For example, it turns out that their FixedPoint library makes some strange choices in its arithmetic that make it impossible to do what I want it to do. Since my project sat on a measly Spartan 3A FPGA, it was also hard to meet timing and resource constraints. In my talk, I kind of ripped on a few parts of Bluespec that I really didn’t like and then later realized that Arvind had invited Bluespec engineers to the presentations. Oops. :)

They were actually surprised that I managed to get Bluespec to compile to a Verilog module and include it in a VHDL project. Apparently that’s very rare. But to me, that’s probably the best way to use it: design optimized hardware in VHDL or Verilog and include the parts of your design that are complicated but not yet optimized (i.e. the Bluespec components). Then, as you optimize each of those modules, you can replace the Bluespec modules with low-level ones.

I think the summary here is this. At the beginning of the class, Arvind made the analogy that Verilog is to Bluespec as assembly is to Java. I agree with this. Bluespec abstracts away a lot of low-level language features just as Java does with assembly; it makes accessing some of those low-level features possible, but inconvenient. The second part of his argument is that nowadays, we don’t care how many registers our CPUs have when we write Java, but we can still produce code that runs in a reasonable amount of time because the tools that translate what we write into something machines understand are very good. I do not believe that this philosophy ports to hardware. Hardware design is very different from software design. The reason the argument doesn’t hold, in my opinion, is that in the vast majority of hardware design cases, you are pushing the very limits of what the hardware has available. In fact, given that microprocessors are getting better and encroaching on territories that FPGAs typically held, the remaining territories are exactly the areas where you’re trying to do super-optimized stuff. That doesn’t happen in software, where the majority of use cases make a high-level abstraction good enough. In hardware, the majority of use cases require a lot of low-level optimization. This is the fundamental fallacy of Bluespec, in my opinion: it does not properly expose the lower-level techniques needed for optimization.

Nevertheless, I had a lot of fun in this class and I learned a lot. My work with Bluespec really got me thinking about how I could design a hardware description language that could utilize some of the philosophies of cleanly defining state transitions and coming up with a better way of handling the translation between numeric representations.

First Results from Beat Tracker!

So I finished a preliminary version of my live beat tracker, bt. I fed in a 126 BPM song and got:

The software reads a data stream from the FPGA and prints out a cute little UTF-8 table and keeps track of the best average tempo. The FPGA handles all of the hard work: running a lot of metronomes (I can fit upwards of like 80-100 on there), and classifying beats.
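The host-side “best average tempo” bookkeeping amounts to something like the following naive sketch; the real system runs phase-adjusting metronomes in hardware, whereas this Python version assumes zero phase and just scores candidate BPMs against beat timestamps:

```python
# Naive sketch of picking the best tempo from beat timestamps: score
# each candidate BPM by how close the beats land to that metronome's
# ticks. The real system runs ~80-100 phase-adjusting metronomes on the
# FPGA instead of brute-forcing phase-zero candidates like this.

def best_tempo(beat_times, bpms=range(60, 181)):
    best_bpm, best_err = None, float("inf")
    for bpm in bpms:
        period = 60.0 / bpm
        # total distance from each beat to this metronome's nearest tick
        err = sum(min(t % period, period - t % period) for t in beat_times)
        if err < best_err:
            best_bpm, best_err = bpm, err
    return best_bpm
```

Feeding it 32 beats spaced 60/126 s apart picks 126 BPM out of the candidates.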

That’s the good news. The not so good news is that the beat classifier, the thing that says “I think I found a beat, does it match any of the metronomes?” needs some serious improvement and tweaking before it can actually classify most songs. K-Pop works really well right now because the differential between beat energy and not-beat energy is very large — a metronome is basically built right into the track. On the other hand, it doesn’t get dubstep at all.