The Sweet Spot
On software, engineering leadership, and anything shiny.

Arduino and python-firmata

I just spent five hours trying to figure out why none of the Firmata libraries for Python were working over my serial connection. I couldn't understand why the previous program remained on the board and why none of the signals I sent had any effect.

Hint: you need to load Firmata onto the board first, before it can understand the protocol. Oh. Duh.

[caption id="attachment_1032" align="alignnone" width="500" caption="Don't forget to load up OldStandardFirmata"][/caption]

Only “OldStandardFirmata” (Firmata 2.0) seems to work with my version of python-firmata. The newer Firmata PDEs can talk Firmata 2.1 and 2.2, but I’m too tired to figure them out.
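For anyone else bitten by this: python-firmata just writes protocol bytes down the serial line, and if the board isn't running a Firmata sketch, nothing is listening on the other end. Here's a minimal sketch of the Firmata 2.0 wire format itself (note this is the protocol, not python-firmata's actual API):

```python
# Firmata 2.0 wire format, the bytes a host library writes to the serial port.
# If the board isn't running a Firmata sketch (e.g. OldStandardFirmata),
# these bytes are simply ignored, which was exactly my bug.

SET_PIN_MODE = 0xF4      # followed by pin number and mode
DIGITAL_MESSAGE = 0x90   # OR'd with the port number (pins are grouped 8 per port)
OUTPUT = 1

def set_pin_mode(pin, mode):
    """Bytes to configure a pin, per Firmata 2.0."""
    return bytes([SET_PIN_MODE, pin, mode])

def digital_write(pin, value, port_state=0):
    """Bytes to drive one pin high or low. Firmata sends the whole 8-pin
    port's state, split into two 7-bit payload bytes."""
    port = pin // 8
    bit = 1 << (pin % 8)
    state = (port_state | bit) if value else (port_state & ~bit)
    return bytes([DIGITAL_MESSAGE | port, state & 0x7F, (state >> 7) & 0x7F])
```

To blink pin 13 you would write `set_pin_mode(13, OUTPUT)` and then `digital_write(13, 1)` to an open serial port.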

Currently: frustrated

So I got the poster printed, and the LEDs currently show through the board pretty well. This is good:

Printed poster, testing the light

But last night I spent a good chunk of my evening and early morning hitting a lot of walls:

  • I might have to throw out the idea of using the Twitter Streaming API. For the kind of searches I need to do, I just can’t get enough granularity to use live information. Plus, I can only open one connection to the API at a time, which is not good if I need to run four listeners at a time.

  • I couldn’t get the Classifier Ruby gem to work, which looked like the easiest implementation of an LSI/naive Bayes classifier out there. The next closest thing was NLTK, and there was no way in heck I had the time to figure that out. Plus, I realized that creating a training set was a LOT more work than I thought. So… scratch the machine intelligence. I’m just going to search for specific search terms manually.

  • New solution: Periodically use the Search API to grab results. This allows more exact search results and gives me the ability to tweak the search terms while the demo loop is running.

  • Event-driven programming is throwing me for a loop (ha, get it?). After perusing the node.js docs for the better part of an evening, I think I need to re-architect the code. I need to create a simple event-driven queue, which is confusing because it seems like something simple, yet support isn’t built in. node provides so little out of the box.

  • Setting up a socket connection to the Arduino could be more difficult than I thought. I may have to open a socket in Python and use python-firmata to interface with the Arduino, since other Arduino/serial proxies reportedly don’t work well with Snow Leopard.

  • I haven’t yet thought about the Web interface.

  • I haven’t thought about how I’m going to hang the piece. I have some scrap wood and fishing wire, but I haven’t thought about whether it’s possible to hang all those LED arrays w/o some weird gravity issues.

  • Woodwork help? I need to figure out how to use a rotary saw.
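The "periodically hit the Search API" plan boils down to polling with a high-water mark so each pass only processes new tweets. A sketch, where `fetch_search` stands in for whatever HTTP call actually hits Twitter (the `since_id` bookkeeping is the part that matters):

```python
# One polling pass against a search API. fetch_search(term, since_id) is a
# hypothetical stand-in for the real HTTP request; it returns tweet dicts,
# newest first. We track the highest id seen so the next pass skips old tweets.

def poll(fetch_search, term, since_id=0):
    """Fetch tweets newer than since_id; return (new_tweets, new_since_id)."""
    results = fetch_search(term, since_id)
    fresh = [t for t in results if t["id"] > since_id]
    new_max = max([t["id"] for t in fresh], default=since_id)
    return fresh, new_max
```

Run in a loop with a sleep between passes; because the term is just a parameter, the search terms can be tweaked while the demo loop is running.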

So uh, yeah, I’m getting nowhere but at least I know what I still need to do.

Update

A quick update on the art project:

  • I switched from my normal LEDs to “straw hat” wide-angle LEDs. The viewing angle is much better, which means I get a more diffuse glow at closer distances, so the light comes through the paper better.

  • I found out my deadline is Sunday so I’m a little stressed.

  • No progress on Arduino/Processing or the Twitter feed. Still have to figure out whether I should use a Bayesian classifier for better fuzzy matching on tweets or if I should just rig the feed with a simple text filter on “I feel ___”. I have a feeling the former is nice in theory but isn’t good enough and the latter is probably best given this short time span.

  • No idea how I’m going to solder the LEDs to the breadboards. Heck, I think my breadboards aren’t even wide enough, meaning I either need to find a larger breadboard or cut one up and paste the two pieces together lengthwise.

  • I still need to find wood to mount/suspend the project.

  • I called around and got some good quotes from print shops. I’m probably going to go with Alameda Copy: friendly staff, good Yelp reviews, and the best price I saw around. I’ll be printing the poster on their oversize printer on a roll of 70lb paper. Quote: $6-8 per sq. foot, color.

  • I’ve made some good progress on the design, though. I think I’m lengthening the poster but decreasing the width. I’ll post more screens if I get a chance.
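If I do go the simple-text-filter route instead of a classifier, the whole "rig the feed" idea is one regex. A sketch of what matching on "I feel ___" might look like:

```python
import re

# The low-tech alternative to a Bayesian classifier: a literal pattern match
# on "I feel <word>". Crude, but it can ship by Sunday.

I_FEEL = re.compile(r"\bi\s+feel\s+(\w+)", re.IGNORECASE)

def feeling(tweet):
    """Return the word after 'I feel', lowercased, or None if no match."""
    m = I_FEEL.search(tweet)
    return m.group(1).lower() if m else None
```

The classifier would generalize better ("heartbroken" with no "I feel" in sight), but this catches the canonical phrasing with zero training data.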

[caption id="" align="alignnone" width="500" caption="Left: conventional LEDs. Right: straw hat LEDs"]Compare straw hat LEDs with conventional LEDs[/caption]

[caption id="" align="alignnone" width="500" caption="Final design"](Final) Remix[/caption]

The making of SOS: Intro

Save Our Souls - Logo

I’m starting a project for my church’s art show that integrates Twitter, print design and light. I’m titling it “Save Our Souls”.

The theme of the art show is “Instead of Ashes”, a reference to Isaiah 61, which highlights God’s promise of redemption for the world’s suffering through Jesus.

What does it do?

Essentially, what I want it to do is parse the global Twitter firehose and display a live stream of tweets that highlight brokenness and pain: tweets about the world’s injustices, breakups, deaths, disappointments, and even ennui. This is inspired by twistori.

At the same time these tweets are floating across the screen, I want to illuminate a part of the printed verse in physical space that corresponds to the tweet.

Imagine:

Tweet: I just want everyone to understand something……I’m SEVERELY depressed right now……my (ex)girlfriend is one of the greatest things to happen to me and she’s gone.

At the same time, the LED array behind “he has sent me to bind up the brokenhearted” from Isaiah 61:1 would light up and pulse.

And so on, for each tweet that flies across the screen.
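The tweet-to-verse pairing is really just a lookup: each segment of the printed verse owns a set of trigger keywords (and, eventually, an LED array). A sketch with hypothetical, hand-tuned keyword lists:

```python
# Hypothetical mapping from trigger keywords to the verse segment (and its
# LED array) that should pulse. Phrases are from Isaiah 61; the keyword
# lists here are placeholders I'd tune by hand.

VERSE_SEGMENTS = {
    "bind up the brokenhearted": ["depressed", "heartbroken", "breakup"],
    "comfort all who mourn": ["died", "funeral", "grief"],
    "freedom for the captives": ["trapped", "stuck", "addiction"],
}

def segment_for(tweet):
    """Return the first verse segment whose keywords appear in the tweet."""
    text = tweet.lower()
    for segment, keywords in VERSE_SEGMENTS.items():
        if any(k in text for k in keywords):
            return segment
    return None
```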

Architecture

Isaiah 61 Architecture

node.js would process messages from the Twitter streaming API keyed off certain keywords, filter them by (negative) sentiment, and push them onto queues. The queues would be drained in a fair manner and their contents pushed out to a browser frontend, while a socket connection to an Arduino script would be responsible for pulsing the LED array.
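The plan calls for node.js, but the "drained in a fair manner" idea fits in a few lines of Python for illustration: one queue per search term, popped round-robin so a chatty term can't starve the others.

```python
from collections import deque
from itertools import cycle

# Fair (round-robin) draining of several tweet queues. Each queue holds
# tweets for one search term; we take one item per queue in rotation so no
# single high-volume term monopolizes the display.

def drain_fair(queues, n):
    """Pop up to n items total, visiting queues in rotation."""
    out = []
    order = cycle(range(len(queues)))
    empty_streak = 0
    while len(out) < n and empty_streak < len(queues):
        q = queues[next(order)]
        if q:
            out.append(q.popleft())
            empty_streak = 0
        else:
            empty_streak += 1
    return out
```

The `empty_streak` counter stops the loop once a full rotation finds every queue empty, so it never spins waiting for tweets that aren't there.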

Of course, there are a lot of questions left unanswered here: How do you make an LED array? How do you do (reliable) sentiment analysis? How big should the print graphic be? How far behind the print should the LED arrays be placed for the light to be sufficiently diffuse, yet bright enough to be visible? I’m not sure yet.

First steps

So I just bought an Arduino and a ton of LEDs off of eBay. I’m trying to relearn all the EE40 I tried so hard to forget way back in my undergrad days.
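The resistor head-scratching reduces to Ohm's law: the series resistor has to drop whatever voltage the LED doesn't, at the current you want through it. The LED forward voltages below are typical ballpark figures, not measured values:

```python
# Series resistor for an LED: R = (Vs - Vf) / I, where Vs is the supply
# voltage, Vf the LED's forward voltage drop, and I the desired current.

def led_resistor(supply_v, forward_v, current_a):
    """Resistance in ohms needed to limit LED current to current_a."""
    return (supply_v - forward_v) / current_a

# Arduino 5V pin, a typical red LED (~2.0V drop) at 20mA:
# led_resistor(5.0, 2.0, 0.020) -> 150 ohms; rounding up to the next
# common value (220) keeps the LED comfortably within spec.
```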

[caption id="" align="alignnone" width="500" caption="Arduinos are cool. I had to do my fair share of head-scratching to figure out resistor values."]Arduino is hooked up[/caption]

[caption id="" align="alignnone" width="500" caption="A conceptual design for the print graphic."]Print Concepts[/caption]

[caption id="" align="alignnone" width="500" caption="Basic node.js backend with a rudimentary frontend streams live tweets."]Testing backend.[/caption]

See the (rudimentary) node.js/client code on GitHub.

See the rest of the photos.

More to come!

Using 37signals shorthand for UX flows

http://37signals.com/svn/posts/1926-a-shorthand-for-designing-ui-flows

I’ve found this technique useful for illustrating workflows quickly, particularly when brainstorming in teams.