The Sweet Spot

Andrew Hao's thoughts about software engineering, design, and anything shiny.

mmtss, a collaborative loop station


mmtss is a loop station built for live performances.

Let’s make music together! This project strips a traditional loop station down to its essentials and is designed for interactive, collaborative music performances.

The idea: everybody adds or modifies one “part” of a 32-bar loop. The user gets to play an instrument over the existing mix and record the 32-bar phrase when they’re ready. Once they’re finished, mmtss selects another instrument at random for the next participant to record.

It’s an Ableton Live controller serving a WebKit view: node.js on the backend, socket.io + RaphaelJS on the frontend. Communication with Live happens through the LiveOSC plugin over sockets.
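
For the curious, the node side of that conversation boils down to sending OSC packets over UDP. Here’s a minimal sketch that hand-rolls the OSC encoding; it assumes LiveOSC listens on UDP port 9000 and that /live/play/clip is the address you want (check the LiveOSC docs for the full address list):

    // Sketch: fire a clip in Ableton Live from node.js via LiveOSC.
    // Port 9000 and the /live/play/clip address are assumptions here.
    var dgram = require('dgram');

    // Pad an OSC string to a multiple of four bytes, per the OSC spec.
    function oscString(str) {
      var len = Math.ceil((str.length + 1) / 4) * 4;
      var buf = new Buffer(len);
      buf.fill(0);
      buf.write(str, 0, 'ascii');
      return buf;
    }

    // Encode an OSC message whose arguments are all int32s.
    function oscMessage(address, args) {
      var typeTags = ',' + args.map(function() { return 'i'; }).join('');
      var argBuf = new Buffer(4 * args.length);
      args.forEach(function(arg, i) { argBuf.writeInt32BE(arg, 4 * i); });
      return Buffer.concat([oscString(address), oscString(typeTags), argBuf]);
    }

    var socket = dgram.createSocket('udp4');
    var msg = oscMessage('/live/play/clip', [0, 0]); // track 0, clip 0
    socket.send(msg, 0, msg.length, 9000, '127.0.0.1', function() {
      socket.close();
    });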

Displayed at the Regeneration “We Collaborate” art show in Oakland, CA. 9/24/2011.

Screenshots

Practice mode

mmtss in practice/playback mode. Here the user is able to practice/mess around with the current instrument to prepare to record the next track.

Cued mode

Pressing “record” puts the user in a wait state. They are prompted to begin recording when all the black boxes count down and disappear.

Record mode

mmtss in record mode.

More screenshots: http://www.flickr.com/photos/andrewhao/sets/72157627640840853/

Source code

GitHub: http://www.github.com/andrewhao/mmtss.

MIT/GPL-licensed for your coding pleasure.

Installation

  • Make sure you have npm installed: http://www.npmjs.org

  • Copy lib/LiveOSC into the /Applications/Live x.y.z OS X/Live.app/Contents/App-Resources/MIDI\ Remote\ Scripts/ folder.

  • Set it as your MIDI remote in the Ableton Live Preferences pane, in the “MIDI Remote” tab.

Running it

  • Open Mmtss_0.als as a sample Live project.

  • Install all project dependencies with npm install from the project root.

  • Start the Node server with node app.js from the root directory.

  • Open a Web browser and visit http://localhost:3000.

Modifying the sample project

You can modify this project to suit your own needs. Note that there are two sets of tracks: instrument (MIDI input) tracks, and loop tracks that actually store the clips.

With n instruments, you can add or remove your own. Just make sure that the instrument at track x records into the loop track at x + n.
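
In other words, the server applies a fixed offset when mapping the instrument a user plays to the loop track where their clip lands. A sketch of that arithmetic (the names are hypothetical, not the actual mmtss code):

    // With n instruments, instrument tracks occupy indices 0..n-1
    // and their loop tracks occupy indices n..2n-1.
    var NUM_INSTRUMENTS = 4; // n; match this to your Live set

    function loopTrackFor(instrumentTrack) {
      return instrumentTrack + NUM_INSTRUMENTS;
    }

    console.log(loopTrackFor(2)); // => 6: instrument track 2 records into loop track 6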

Credits

Photo: the handsome collaborators.

License

MIT and GPLv3 licensed. Go for it.

You will, however, need to get a license for Ableton Live yourself.

Introducing Boink, a photobooth for the rest of us.


My friends were complaining that wedding photobooths were too expensive to rent. Could we make one for them?

Glen and I from the Porkbuns Initiative stepped up in full armor, ready to help.

What is it? It’s a self-running photobooth that uses your Mac for brains, a DSLR for eyes, a WebKit browser for its clothes, and a photo printer for… a printer. You can connect an iPad as the frontend for a nice visual touch (pun intended).

We built it on a Rails backend, pushing SVG+HTML5 to the frontend and using the gphoto4ruby gem as a wrapper around the camera library.

Photo: All dressed up and ready to go.

Photo: An early UI prototype.

Photo: This comes out of the printer.

Try it out

Check it out on GitHub.

Chat App - Frontend Prototype


Screenshot: Chat view.

Screenshot: Interview view.

Some UI work I did for a stealth startup in early ’11. I was responsible for the look & feel and the frontend chat interactions: jQuery/jQuery UI talking to a CakePHP/node.js backend.

We developed this prototype with statecharts, a concept commonly found in event-driven programming and one I first learned about from SproutCore. I found it really helped map out all the complex user interactions we had to deal with on the page.
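
If you haven’t run into statecharts, the core idea is that every UI event is handled relative to an explicit current state, so illegal transitions simply can’t happen. A toy sketch of the pattern (illustrative only, not our actual code):

    // Toy statechart-style state machine for a chat widget.
    var chatStates = {
      idle: {
        focusInput: function() { return 'composing'; }
      },
      composing: {
        submit: function() { return 'sending'; },
        blurInput: function() { return 'idle'; }
      },
      sending: {
        messageAck: function() { return 'composing'; },
        error: function() { return 'composing'; }
      }
    };

    var currentState = 'idle';

    // Route an event to the current state's handler; events that are
    // illegal in the current state are simply ignored.
    function dispatch(eventName) {
      var handler = chatStates[currentState][eventName];
      if (handler) { currentState = handler(); }
    }

    dispatch('focusInput'); // idle -> composing
    dispatch('submit');     // composing -> sending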

See more screenshots from the set.

Code For Oakland Barcamp


Photo: Is that Jon Chan I see? Yes it is. Photo credit: Oakland Local.

A few notes from this one-day barcamp/hackfest. The goal was to create mobile apps for Oakland, with a special emphasis on serving underserved populations.

  • Text messaging is still king. While smartphones and their apps are at the leading edge and grabbing all the attention these days, most underserved populations don’t have access to them. Lots of apps today use the Twilio or Tropo APIs as their SMS/voice gateways (see the sketch after this list).

  • Open data is awesome. I learned about ScraperWiki, which basically puts site scraping code into the cloud so people can maintain your scraper script long after you’ve tossed it. Some staff from maplight.org are here.

  • Open data is also difficult to maintain. One presenter mentioned how Oakland keeps a database of social services—that’s great, but what if the organization shuts down and/or changes its hours? Who’s responsible for updating the information? IIRC, the estimate was that 30% of the listings kept by the City are no longer valid. There need to be active efforts to combat dataset decay.

  • By the way, here’s a list of Oakland GIS datasets. Some of it really sucks (filled with garbage spam data). Some of it is useful.
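
To make the Twilio point concrete: pushing an SMS out through a gateway is only a few lines of node. A hedged sketch using the twilio npm package; the credentials, phone numbers, and message here are all placeholders:

    // Sketch: send an SMS through Twilio's REST API from node.js.
    // ACCOUNT_SID, AUTH_TOKEN, and both phone numbers are placeholders.
    var twilio = require('twilio');
    var client = twilio('ACCOUNT_SID', 'AUTH_TOKEN');

    client.messages.create({
      to: '+15105550100',   // recipient
      from: '+15105550101', // your Twilio number
      body: 'Tonight: free tutoring at the East Oakland YMCA, 6-8pm.'
    }, function(err, message) {
      if (err) { return console.error(err); }
      console.log('queued message', message.sid);
    });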

Today’s Projects:

  • comtxt: text gateway for community organizers. github.com/ryanjarvinen/comtxt

  • freexchange.org. mobile interface to donated goods.

  • oakland:pm. get oakland high-schoolers connected to afterschool programs. github.com/jedp/oakland.pm

  • txt2wrk: text-based job matching. this one is unique because it is a completely SMS/voice-based interface. connects to craigslist. call it and it reads back craigslist job postings. targeted at parolees.

  • Oakland Food Finder: helping people find healthy, locally-grown food.

  • BettaSTOP: help people find buses and access bus schedules. in oakland, many underserved communities depend on finding and catching the right bus. it also allows users to give feedback on buses, remark on their timeliness, and talk about bus route features. Live & in production: http://www.bettastop.net.

What I did

I’m helping out the Oakland:PM team, which is in the process of building out a service to get high schoolers connected to city-funded afterschool programs. The idea is that they can pull up their mobile phones and see what’s available to them while they’re kicking it with friends and bored out of their minds.

While the others hacked on wireframes and some code, I worked on a few user stories and resolved to interview a few of my contacts who work for the YMCA in East Oakland. We won a $500 grant from the City to see this thing through in July. Here’s hoping that we’ll make it.

So far it’s a node.js app with pages served by the Express framework. We threw around ideas of using Sencha Touch, but I think that decision is out of my hands. We’ll see how we proceed.
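
Serving those pages with Express doesn’t take much. A minimal sketch of the shape of the app; the route and the program data are hypothetical, not oakland.pm’s actual code:

    // Minimal Express sketch: list afterschool programs as JSON.
    var express = require('express');
    var app = express();

    // Hypothetical data; the real app would pull from a datastore.
    var programs = [
      { name: 'YMCA East Oakland Teen Center', hours: '3-7pm weekdays' },
      { name: 'City-funded homework club', hours: '4-6pm Tue/Thu' }
    ];

    app.get('/programs', function(req, res) {
      res.json(programs);
    });

    app.listen(3000);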

Source code can be found at: http://www.github.com/jedp/oakland.pm

UN Declaration of Human Rights (Visualization)



This design was created from a Processing sketch that breaks the preamble of the UN Declaration of Human Rights into words and connects adjacent words with lines. More frequent word pairs are drawn with darker, thicker lines.
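
The underlying technique is just counting adjacent word pairs (bigrams) and mapping each pair’s count to a stroke weight. The counting step looks something like this (a JavaScript sketch of the idea; the actual sketch is Processing/Java):

    // Count adjacent word pairs in a text. In the visualization, more
    // frequent pairs get darker, thicker connecting lines.
    function bigramCounts(text) {
      var words = text.toLowerCase().split(/\W+/).filter(Boolean);
      var counts = {};
      for (var i = 0; i < words.length - 1; i++) {
        var pair = words[i] + ' ' + words[i + 1];
        counts[pair] = (counts[pair] || 0) + 1;
      }
      return counts;
    }

    var counts = bigramCounts(
      'whereas recognition of the inherent dignity and of the equal and inalienable rights'
    );
    console.log(counts['of the']); // => 2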

The source code (albeit messy) can be found at www.github.com/andrewhao/freedom-sunday. I’m running the sketch in full Java mode, so be sure to compile with your Java IDE of choice (rather than the Processing IDE).

Inspiration taken from the designs of Harry Kao (http://www.hairycow.name) and Jer Thorp (http://blog.blprnt.com).

My fellow interns and I are headed off to Cebu for 10 days to observe organizations involved in anti-trafficking efforts. More info: interns.regenerationweb.com

Save our souls - a Twitter art installation


Here’s how the installation looked on the day of the art show.

Photo: We mounted the installation on the inside of the Regeneration cafe. The Arduino lies behind the MacBook behind the monitor.

Photo: The LEDs are mounted on breadboards suspended on fishing wire, binder clips, rubber bands, chopsticks, and a prayer.

Photo: Finished it just in time.

Video: Save our souls – Twitter art installation, from Andrew Hao on Vimeo.

What are people saying about the ashes in the world today? This installation visualizes a live Twitter stream on heartache, injustice, loss, and our city, and matches those tweets up with the redemptive promises of Isaiah.

Life is difficult, and redemption is something we all long for. What changes do you hope for in your life or in the world? Send a response from your Twitter account to @sos_61 and watch the installation react. If you’d like to be kept anonymous, send your response in a DM to @sos_61.

“I hope for ____” “I wish that ____” “I want to see ____”

A few notes

  • The Web interface is a fullscreen Google Chrome window; socket.io provides the WebSocket interface to the node.js backend. The slide transition is animated via a CSS3 animation, and the red overlay is a simple SVG shape plotted with the help of RaphaëlJS.

  • The Twitter backend is a collection of four self-updating Twitter searches: one each for heartache (“i feel lonely, sad, depressed”), injustice (“violence, war, oppression, justice”), death (“rest in peace, passed away”), and Oakland (“oakland”). A blacklist filters out undesirable tweet keywords (“justin bieber”).

  • Additionally, the backend connects to Twitter via the Streaming API and displays a special animation for users who reply via tweet to the @sos_61 account.

  • The installation picks a tweet to display and pulses the LED array corresponding to that tweet.

  • Communication with the Arduino happens via a Python script speaking the Firmata protocol, using the python-firmata library. The node.js server signals the script over a socket connection, and the script runs the pulse animation on the correct pin (see the sketch after this list).

  • I printed the graphic on an oversize printer with the good folks at Alameda Copy. Friendly service, fast turnaround, very reasonable prices. Ask for Joe.
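
The node-to-Python handoff above is just a line of text over a local socket. A sketch of the node side; the port (5000) and the “pulse <pin>” wire format are assumptions, since the Python script can parse whatever you send it:

    // Sketch: signal the Python/Firmata helper process over TCP.
    var net = require('net');

    function pulsePin(pin) {
      var sock = net.connect(5000, '127.0.0.1', function() {
        sock.end('pulse ' + pin + '\n');
      });
      sock.on('error', function(err) {
        console.error('could not reach firmata helper:', err.message);
      });
    }

    pulsePin(9); // run the pulse animation on Arduino pin 9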


Arduino and python-firmata


I just spent five hours trying to figure out why none of the Firmata libraries for Python were working over my serial connection. The previously loaded program stayed on the board, and none of the signals I sent had any effect.

Hint: you need to load the Firmata sketch onto the board first, before it can understand the protocol. Oh. Duh.

Photo: Don’t forget to load up “OldStandardFirmata”.

Only “OldStandardFirmata” (Firmata 2.0) seems to work with my version of python-firmata. The newer Firmata PDEs speak Firmata 2.1 and 2.2, but I’m too tired to figure them out.

Currently: frustrated


So I got the poster printed, and the LEDs show through the board pretty well. This is good:

Photo: printed poster, testing the light.

But last night I spent a good chunk of my evening and early morning hitting a lot of walls:

  • I might have to throw out the idea of using the Twitter Streaming API. For the kind of searches I need to do, I just can’t get enough granularity out of the live stream. Plus, I can only open one connection to the API at a time, which won’t work when I need to run four listeners.

  • I couldn’t get the Classifier Ruby gem to work, which looked like the easiest implementation of an LSI/naive Bayes classifier out there. The next closest thing was nltk, and there was no way in heck I had the time to figure that out. Plus, I realized that creating a training set was a LOT more work than I thought. So… scratch the machine intelligence. I’m just going to search for specific terms manually.

  • New solution: Periodically use the Search API to grab results. This allows more exact search results and gives me the ability to tweak the search terms while the demo loop is running.

  • Event-driven programming is throwing me for a loop (ha, get it?). After perusing the node.js docs for the better part of an evening, I think I need to re-architect the code. I need a simple event-driven queue, which is confusing because it seems like such a basic thing, yet support isn’t built in; node provides very little out of the box (see the sketch after this list).

  • Talking to the Arduino over a socket could be harder than I thought. I may have to stand up a Python process that uses python-firmata to interface with the Arduino and signal it over a socket. Other Arduino/serial proxies reportedly don’t work well on Snow Leopard.

  • I haven’t yet thought about the Web interface.

  • I haven’t thought about how I’m going to hang the piece. I have some scrap wood and fishing wire, but I haven’t worked out whether it’s possible to hang all those LED arrays without some weird gravity issues.

  • Woodwork help? I need to figure out how to use a rotary saw.
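
For what it’s worth, the event-driven queue I keep circling around is probably just a few lines on top of node’s EventEmitter. A sketch of the shape I have in mind (assumed design, not finished code):

    // Sketch: a dead-simple serial work queue on top of EventEmitter.
    // Tasks are functions that call done() when finished; the queue
    // runs them one at a time, which is all the demo loop needs.
    var EventEmitter = require('events').EventEmitter;

    function Queue() {
      this.tasks = [];
      this.running = false;
      this.emitter = new EventEmitter();
      this.emitter.on('next', this.next.bind(this));
    }

    Queue.prototype.push = function(task) {
      this.tasks.push(task);
      if (!this.running) { this.emitter.emit('next'); }
    };

    Queue.prototype.next = function() {
      var task = this.tasks.shift();
      if (!task) { this.running = false; return; }
      this.running = true;
      var self = this;
      task(function done() { self.emitter.emit('next'); });
    };

    var q = new Queue();
    q.push(function(done) { console.log('show tweet 1'); setTimeout(done, 1000); });
    q.push(function(done) { console.log('show tweet 2'); setTimeout(done, 1000); });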

So uh, yeah, I’m getting nowhere, but at least I know what I still need to do.