The Sweet Spot

Andrew Hao's thoughts about software engineering, design, and anything shiny.

Save our souls - a Twitter art installation


Here’s how the installation looked on the day of the art show.

[caption id="" align="alignnone" width="333" caption="We mounted the installation on the inside of the Regeneration cafe. The Arduino lies behind the Macbook behind the monitor."]Installation[/caption]

[caption id="" align="alignnone" width="333" caption="The LEDs are mounted on breadboards suspended on fishing wire, binder clips, rubber bands, chopsticks, and a prayer."]Mounted LED array[/caption]

[caption id="" align="alignnone" width="333" caption="Finished it just in time."]In action[/caption]

Save our souls – Twitter art installation from Andrew Hao on Vimeo.

What are people saying about the ashes in the world today? This installation visualizes a live stream of tweets about heartache, injustice, loss, and our city, and matches them with the redemptive promises of Isaiah 61.

Life is difficult, and redemption is something we all long for. What changes do you hope for in your life or in the world? Send a response from your Twitter account to @sos_61 and watch the installation react. If you’d like to be kept anonymous, send your response in a DM to @sos_61.

“I hope for ____” “I wish that ____” “I want to see ____”

A few notes

  • The Web interface is a fullscreen Google Chrome window. Socket.IO provides the WebSocket connection to the node.js backend. The slide transition is animated via a CSS3 animation, and the red overlay is a simple SVG shape plotted with the help of RaphaëlJS.

  • The Twitter backend is a collection of four self-updating Twitter searches, one each for heartache (“i feel lonely, sad, depressed”), injustice (“violence, war, oppression, justice”), death (“rest in peace, passed away”), and Oakland (“oakland”). A blacklist filters out tweets with undesirable keywords (“justin bieber”).

  • Additionally, the backend connects to Twitter via the Streaming API and displays a special animation for users who reply via tweet to the @sos_61 account.

  • The installation picks a tweet to display and pulses the LED array that corresponds to it.

  • Communication with the Arduino happens via a Python script speaking the Firmata protocol, using the python-firmata library. The node.js server signals the script over a socket connection, and the script runs the pulse animation on the correct pin (the receiving end is sketched after this list).

  • I printed the graphic on an oversize printer with the good folks at Alameda Copy. Friendly service, fast turnaround, very reasonable prices. Ask for Joe.
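To make the Arduino handoff concrete, here's a minimal sketch of the receiving end: a Python socket listener that takes a pin number from node and pulses it. The serial port, TCP port, and timing below are placeholders, and the pyFirmata-style API stands in for python-firmata's exact calls:

    import socket
    import time

    from pyfirmata import Arduino  # pyFirmata API as a stand-in for python-firmata

    board = Arduino('/dev/tty.usbserial-A6008h8a')  # placeholder port name

    def pulse(pin, times=10):
        """Blink the pin on and off; a true fade would need a PWM pin."""
        for _ in range(times):
            board.digital[pin].write(1)
            time.sleep(0.05)
            board.digital[pin].write(0)
            time.sleep(0.05)

    # node signals us with a pin number, one connection per pulse.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('127.0.0.1', 9999))  # arbitrary local port
    server.listen(1)

    while True:
        conn, _ = server.accept()
        data = conn.recv(16)
        if data:
            pulse(int(data.decode().strip()))
        conn.close()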


Arduino and python-firmata


I just spent five hours trying to figure out why none of the Firmata libraries for Python were working over my serial connection. The previous program stayed on the board, and none of the signals I sent had any effect.

Hint: you need to load Firmata onto the board first, before it can understand the protocol. Oh. Duh.

[caption id="attachment_1032" align="alignnone" width="500" caption="Don’t forget to load up ‘OldStandardFirmata’"][/caption]

Only “OldStandardFirmata” (Firmata 2.0) seems to work with my version of python-firmata. The newer Firmata PDEs can talk Firmata 2.1 and 2.2, but I’m too tired to figure them out.
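For anyone else stuck here: once the board is running OldStandardFirmata, a quick blink test from Python confirms the link is alive. A minimal sketch, shown with the similar pyFirmata library's API since I won't vouch for python-firmata's exact calls (your serial port name will differ):

    import time

    from pyfirmata import Arduino  # pyFirmata shown; python-firmata differs slightly

    board = Arduino('/dev/tty.usbserial-A6008h8a')  # placeholder port name

    # Pin 13 drives the onboard LED on most Arduinos, so no wiring is needed.
    for _ in range(5):
        board.digital[13].write(1)
        time.sleep(0.5)
        board.digital[13].write(0)
        time.sleep(0.5)

    board.exit()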

Currently: frustrated


So I got the poster printed, and the LEDs currently show through the board pretty well. This is good:

Printed poster, testing the light

But last night I spent a good chunk of my evening and early morning hitting a lot of walls:

  • I might have to throw out the idea of using the Twitter Streaming API. For the kind of searches I need to do, I just can’t get enough granularity from the live stream. Plus, I can only open one connection to the API at a time, which is no good when I need to run four listeners at once.

  • I couldn’t get the Classifier Ruby gem to work, which looked like the easiest implementation of an LSI/naive Bayes classifier out there. The next closest thing was NLTK, and there was no way in heck I had time to figure that out. Plus, I realized that creating a training set was a LOT more work than I’d thought. So… scratch the machine intelligence. I’m just going to search for specific terms manually.

  • New solution: periodically use the Search API to grab results. This gives more exact search results and lets me tweak the search terms while the demo loop is running (the polling loop is sketched after this list).

  • Event-driven programming is throwing me for a loop (ha, get it?). After perusing the node.js docs for the better part of an evening, I think I need to re-architect the code around a simple event-driven queue, which is confusing because it seems like such a simple thing, yet support isn’t built in. node provides very little out of the box.

  • Getting a connection to the Arduino could be more difficult than I thought. I may have to set up a socket connection in a Python script that uses python-firmata to interface with the Arduino, since other Arduino/serial proxies reportedly don’t work well with Snow Leopard.

  • I haven’t yet thought about the Web interface.

  • I haven’t thought about how I’m going to hang the piece. I have some scrap wood and fishing wire, but I haven’t worked out whether it’s possible to hang all those LED arrays without some weird gravity issues.

  • Woodwork help? I need to figure out how to use a rotary saw.
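Here's the shape of the polling loop I have in mind, sketched in Python for brevity even though the real thing will be node.js. The fetch() function is a hypothetical stub standing in for an actual Search API request, and the interval is a guess:

    import time

    # One tweakable dict of search terms; I can edit these mid-demo.
    SEARCHES = {
        'heartache': 'lonely OR sad OR depressed',
        'injustice': 'violence OR war OR oppression',
        'death': '"rest in peace" OR "passed away"',
        'oakland': 'oakland',
    }

    def fetch(query):
        """Hypothetical stub for one Twitter Search API request."""
        return []  # would return the new tweets matching `query`

    while True:
        for category, query in SEARCHES.items():
            for tweet in fetch(query):
                print(category, tweet)  # real version: push onto this category's queue
        time.sleep(30)  # poll periodically, well under the rate limit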

So uh, yeah, I’m getting nowhere but at least I know what I still need to do.

Update


A quick update on the art project:

  • I switched from my normal LEDs to “straw hat” wide-angle LEDs. The viewing angle is much better, which means I get a more diffuse glow at closer distances, so the light comes through the paper better.

  • I found out my deadline is Sunday so I’m a little stressed.

  • No progress on Arduino/Processing or the Twitter feed. I still have to figure out whether to use a Bayesian classifier for better fuzzy matching on tweets, or just rig the feed with a simple text filter on “I feel ___” (sketched after this list). I have a feeling the former is nice in theory but won’t be good enough, and the latter is probably best given the short time span.

  • No idea how I’m going to solder the LEDs to the breadboards. Heck, I think my breadboards aren’t even wide enough, meaning I either need to find a larger breadboard or cut one up and paste the two pieces together lengthwise.

  • I still need to find wood to mount/suspend the project.

  • I called around and got some good quotes from print shops. I’m probably going to go with Alameda Copy: friendly staff, good Yelp reviews, best price I saw around. I’ll be printing the poster on their oversize printer on a roll of 70 lb paper. Quote: $6–8 per square foot, color.

  • I’ve made some good progress on the design, though. I think I’m lengthening the poster but decreasing the width. I’ll post more screens if I get a chance.
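In case I do go the dumb-filter route, it's only a few lines. A sketch, with placeholder phrases and blacklist terms:

    PHRASES = ('i feel', 'i wish', 'i hope')   # placeholder trigger phrases
    BLACKLIST = ('justin bieber', 'giveaway')  # placeholder junk markers

    def keep(tweet_text):
        """Keep tweets containing a trigger phrase, unless they look like junk."""
        text = tweet_text.lower()
        if any(term in text for term in BLACKLIST):
            return False
        return any(phrase in text for phrase in PHRASES)

    print(keep('I feel so alone tonight'))             # True
    print(keep('RT to win a Justin Bieber giveaway'))  # False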

[caption id="" align="alignnone" width="500" caption="Left: conventional LEDs. Right: straw hat LEDs"]Compare straw hat LEDs with conventional LEDs[/caption]

[caption id="" align="alignnone" width="500" caption="Final design"](Final) Remix[/caption]

The making of SOS: Intro


Save Our Souls - Logo

I’m starting a project for my church’s art show that integrates Twitter, print design and light. I’m titling it “Save Our Souls”.

The theme of the art show is “Instead of Ashes”, a reference to Isaiah 61, which highlights God’s promises of redemption for the world’s suffering through Jesus.

What does it do?

Essentially, what I want it to do is parse the global Twitter firehose and display a live stream of tweets that highlight brokenness and pain: tweets about the world’s injustices, breakups, deaths, disappointments, and even ennui. This is inspired by twistori.

At the same time these tweets are floating across the screen, I want to illuminate a part of the printed verse in physical space that corresponds to the tweet.

Imagine:

Tweet: I just want everyone to understand something……I’m SEVERELY depressed right now……my (ex)girlfriend is one of the greatest things to happen to me and she’s gone.

At the same time, the LED array behind “he has sent me to bind up the brokenhearted” from Isaiah 61:1 would light up and pulse.

And so on, for each tweet that flies across the screen.

Architecture

Isaiah 61 Architecture

node.js would process messages from the Twitter streaming API, keyed off certain keywords, filter them by (negative) sentiment, and push them onto queues. The queues would be drained in a fair manner and pushed out to a browser frontend, while a socket connection to an Arduino-side script would be responsible for pulsing the LED arrays.
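By “fair” I just mean round-robin: pull at most one tweet per category per tick so a noisy search can't drown out the quiet ones. A minimal sketch of the idea, in Python for brevity since the node code isn't written yet:

    from collections import deque
    from itertools import cycle

    CATEGORIES = ['heartache', 'injustice', 'death', 'oakland']
    queues = {c: deque() for c in CATEGORIES}
    order = cycle(CATEGORIES)

    def next_tweet():
        """Round-robin across categories, skipping any empty queues."""
        for _ in CATEGORIES:
            category = next(order)
            if queues[category]:
                return category, queues[category].popleft()
        return None  # everything is drained for now

    # The display loop would call next_tweet() on a timer and push the
    # result to both the browser frontend and the LED socket.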

Of course, there are a lot of questions left unanswered here: How do you make an LED array? How do you do (reliable) sentiment analysis? How big should the print graphic be? How far behind the print should the LED arrays be placed for the light to be sufficiently diffuse, yet bright enough to be visible? I’m not sure yet.

First steps

So I just bought an Arduino and a ton of LEDs off eBay. I’m trying to relearn all the EE40 I tried so hard to forget back in my undergrad days.

[caption id="" align="alignnone" width="500" caption="Arduinos are cool. I had to do my fair share of head-scratching to figure out resistor values."]Arduino is hooked up[/caption]
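(The resistor math, for anyone else relearning it: the resistor has to drop the supply voltage minus the LED's forward voltage at the LED's rated current. Typical ballpark numbers below; check your LED's datasheet.)

    v_supply = 5.0     # an Arduino digital pin drives 5 V
    v_forward = 2.0    # typical forward drop for a red LED
    i_forward = 0.020  # 20 mA target current

    r = (v_supply - v_forward) / i_forward
    print(r)  # 150.0 ohms -> round up to the next standard value, e.g. 220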

[caption id="" align="alignnone" width="500" caption="A conceptual design for the print graphic."]Print Concepts[/caption]

[caption id="" align="alignnone" width="500" caption="Basic node.js backend with a rudimentary frontend streams live tweets."]Testing backend.[/caption]

See the (rudimentary) node.js/client code on GitHub.

See the rest of the photos.

More to come!

On selling my soul and switching to Mac


A month ago I bought an ‘07 MacBook on Craigslist. No, it’s not one of those sexy aluminum hot rods, but it sure is pretty.

I was full of mixed emotions. I had finally joined the Tribe! I had sold my soul. I can now develop iPhone apps! I am property of SJ. Yes, I had interned at Apple in the summer of ’08, but I hadn’t really paid much attention to OS X until I owned one of these suckers myself.

It’s not easy making the switch. Here are some of my gripes with OS X and the stopgap replacements I’ve found:

  • No keyboard shortcut to maximize windows. (Solved with RightZoom)

  • MacPorts really sucks (that is, having to compile everything from source is really slow). (Still outstanding)

  • Switching windows between spaces is really clunky. (Sorta solved with Hyperspaces)

  • No keyboard shortcut to “move active window to space”. (Still outstanding)

  • No way to force windows to a grid. (Solved with Divvy)

  • No right Ctrl key makes things really painful in Emacs. (Remapped right “Enter” with DoubleCommand)

  • I miss Aero Snap. (Check out Cinch)

  • The MacBook keyboard keys feel cheap compared to a Thinkpad. (Still outstanding)

  • Really annoyed with Command-Tab: it switches applications, not windows. I just want to switch to a different window! (Still outstanding)

  • Exposé feels limited compared to Compiz’s Scale plugin: it can’t show all windows across all spaces. (Still outstanding)

  • Closing a window doesn’t kill the process. I suspect this is just a design philosophy I’ll have to live with. (Still outstanding)

  • Not sure why, but my wrists generally hurt more after using the MacBook than after my work ThinkPad.

Okay, enough complaining. Some things I really enjoy:

  • Drag-and-drop installation is totally elegant.

  • Time Machine just works, and the zoomable interface is totally slick.

  • Helvetica Neue, I love you.

  • Something about the font rendering is just amazing.

  • I get to look cool (and slightly cliché…) at coffee shops.

  • Desktop graphics & animations render smoothly with Core Animation. This stuff just looks slick.

I guess it’s not as bad as I’ve made it out to be. Well here I am. Swore I’d never do it, but you got me with all that shiny, Mr. Jobs. You got me.

post-review, git-svn and Review Board


Here’s how to set up the excellent VMware-developed open-source Review Board and its [post-review](http://www.reviewboard.org/docs/releasenotes/dev/rbtools/0.2/) command-line review-creation utility to work with git and git-svn on your computer.

My assumption is that you’re working with a local Git repository that is remotely linked to an SVN repository via the git-svn bridge. Let’s assume that your master branch is synced with the SVN repository, and you’re working on bug_12345_branch.

  1. Make sure you have RBTools installed (sudo easy_install rbtools for me on Ubuntu Linux), and Review Board set up elsewhere.

  2. Add a link to your Review Board URL in .git/config:

[reviewboard]
    url = https://(url_for_review_board)/
  3. Make sure all your changes in bug_12345_branch have been locally committed.

  4. Run post-review -o, and…

  5. Voila! You should have a new review up on your Review Board instance.

  6. (After you get reviews, you can modify bug_12345_branch, pull the changes into master, and then git svn dcommit, blah, blah, blah.)