The Sweet Spot
On software, engineering leadership, and anything shiny.

It's official: "Web site" becomes "website"

File this under “Nobody is going to care”, but I’ve always been instructed to write “Web site” in my technical communications class. I always found it grating. Hooray for change!

I Love/Hate Marmite

I had my first taste of Marmite last year while in Botswana. My South African roommate loved it and encouraged me to try it, spread on my toast. One bite and I gagged. It was like somebody had mixed soy sauce and peanut butter and cranked it to 11.

The Marmite company apparently recognizes this dichotomy and runs its advertising campaign accordingly:

Every section of the site is written from both a love and a hate perspective. Check it out.

Upgrading Wejoinin to unicorn

(Reposted from the Wejoinin Blog)

When you run a Web app like Wejoinin on minimal VPS resources (read: we’re too poor to get a beefy server), it forces you to go lean. We started to realize a year ago that our out-of-the-box Rails and nginx/Mongrel setup was showing its age; resource utilization would climb every so often and we’d have to kill and restart a Mongrel process.

Well, with last night’s Wejoinin push, we’ve upgraded our server environment a few ways:

  1. We’ve switched to a new VPS host with a killer deal on a 512MiB VPS. This is a step up from the 256MiB slice we used to hold at Slicehost, at almost half the price.

  2. We’ve switched from vanilla Ruby to Ruby Enterprise Edition – advertised to take “33% less memory [when used with Passenger]”. It’s got a tweaked garbage collector, a better memory allocator, and the ability to go in and tweak memory usage settings yourself.

  3. We’ve set up Unicorn, the new HTTP server on the block. It’s special in that each worker runs in its own process, accepting off a shared socket, so load balancing is done natively by the OS. Also, should a worker process start to get bloated, we can take it down gracefully without touching the others. Really. We can trust the OS. Plus, GitHub’s doing it.
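For the curious, a Unicorn setup along these lines boils down to a small config file. This is just a sketch – the worker count, socket path, and hooks here are illustrative, not our actual production values:

```ruby
# config/unicorn.rb (sketch)
worker_processes 4                     # one independent OS process per worker
listen "/tmp/unicorn.wejoinin.sock"    # nginx proxies to this Unix socket
timeout 30                             # reap workers stuck longer than this
preload_app true                       # load the app once, then fork workers

before_fork do |server, worker|
  # with preload_app, the master should drop its DB connection before forking
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # each forked worker needs its own DB connection
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
```

Because each worker is a plain OS process, you can send a bloated one SIGQUIT and it finishes its current request and exits, and the master forks a fresh replacement – no restart of the whole app required.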

With all these tweaks, Wejoinin should be rarin’ to go. Let us know what you think!


Shiny font overhaul!

A couple of months ago I switched over to TypeKit, a cloud-hosted font service (buzzword! “cloud” gets me all warm and fuzzy). They basically take your font stack and enhance it with @font-face goodness, pulling in lots of fonts from their library.

Why not roll a @font-face implementation yourself?

The selection of good, properly licensed Web fonts out there is still pretty small. The hoopla over @font-face has centered on the fact that the fonts are served unobfuscated, and anybody with the gumption to do so could go to your site and steal yer font. So if you have a font you want to put up on the Web, you’d better make sure the font publisher/license holder is kosher with that.

(Ironically, Microsoft got this pretty right with its EOT font format, drafted way back in the days of IE 4 (!). Oh yeah, TypeKit takes care of EOT fonts for you as well.)

(Also, Mozilla is pushing its WOFF format, currently supported in FF 3.6. TypeKit does that too!)
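For comparison, rolling your own @font-face looks something like the sketch below. “MyFont” and the file paths are placeholders, not a real licensed font – and the double `src` declaration is the usual trick for keeping IE (which only understands EOT and chokes on `format()` hints) happy alongside everyone else:

```css
@font-face {
  font-family: "MyFont";
  src: url("/fonts/myfont.eot");                    /* IE: EOT only */
  src: local("MyFont"),
       url("/fonts/myfont.woff") format("woff"),    /* FF 3.6+ */
       url("/fonts/myfont.ttf") format("truetype"); /* Safari, Chrome, FF 3.5 */
}

body {
  /* always keep a fallback stack behind the fancy font */
  font-family: "MyFont", Helvetica, Arial, sans-serif;
}
```

TypeKit generates and serves all of this (plus the licensing headaches) for you, which is the whole appeal.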

So, how’s TypeKit working out for you?

Well, TypeKit’s not bad. If you’re running a cutting-edge browser (Safari 4, FF 3.5+, Chromium 4), you’ll see the font shininess. If you’re not, you still get the default stack.

A problem early on was that Firefox for Windows rendered certain fonts with very, very jagged features (see comparisons). It turns out this was the fault of both the font and the OS/browser – certain browsers and OSes don’t render font hints properly.

The best solution I’ve found? Use a better font with better hinting.

I’m using FF Nuvo Web and FF Enzo Web Pro:

All in all, I’m pretty satisfied.

Thoughts on rural computing in Botswana

I’m currently in Botswana, doing volunteer work at a nonprofit Christian agency called Love Botswana Outreach Mission. Among my responsibilities are helping out with the organization’s IT needs. I was pleasantly surprised to discover that the organization is wired for 24/7 Internet access, despite being located about 10km out from the center of Maun, a medium-sized village.


As far as I understand it, there is little-to-no telecom infrastructure here in Maun (i.e. there are no telephone, cable, or fiber lines). Thus, most communication links happen via satellite or Wi-Fi radio.

The mission has a 100-ft radio tower, where a local ISP colocates a wireless repeater to hook into its wireless network. In exchange, the mission receives a special deal on its Internet access plan.

Existing network setup

Because of the relatively low price and quick setup of wireless access points, the mission currently uses a mix of wired switches and wireless repeaters to extend the reach of the network.

A directional antenna on the tower beams a signal to a second directional antenna mounted above the IT office trailer, which feeds an access point connected to a gateway serving as the DHCP server and firewall. From there, traffic travels over a wired Ethernet network to the other office buildings.

Wireless access points have been installed at various points along the wired office network, allowing wireless access onto the Net.

Only recently has Internet connectivity been extended to a cluster of family homes, situated about 200 meters away from the office buildings. A wireless access point in the IT trailer broadcasts on a +10dB omnidirectional antenna; the closest house has a matching outdoor +10dB omnidirectional antenna hooked up to a wireless bridge/repeater, which maintains the link back to the gateway. The houses themselves are connected on a hardwired Ethernet network.

Lessons from the field

Lesson: plan for environmental factors.

We observed that the wireless link from the homes to the offices was intermittently reliable at best, often going down for no apparent reason. We later discovered that the link would fail on particularly humid days: humidity degraded signal quality over the wireless bridge enough to shrink the effective range of the network.

Lesson: keep power voltages in mind.

I fried a power supply when I left it switched to 115V (US) input and plugged it into a 220V outlet there. Sizzle. This led to a lot of apologizing, and reassurances from the staff that this was a common thing for Americans to do.

Lesson: Clean computers frequently.

Dust is a fact of life here. Computer innards get coated with it quickly, making machines more susceptible to overheating and shortening the lifespan of moving parts (fans, etc.).

Lesson: Scan for viruses, too

I don’t know why, but viruses just seem to be more common here. I’ve been passed a few flash drives that have been infected. In the States, I’d be (un)lucky to find one a year.

Lesson: Plan for low bandwidth and outages

In the States, we depend on always-on Internet. Here, however, the Internet seems to go down for any of a multitude of reasons. If it is up, it’s being shared by fifteen computers all downloading at once over an already-restricted pipeline. Thus, Web pages can’t be expected to load reliably, Skype sessions will cut out, and chat services won’t be accessible.

I’ve come to depend on Offline Gmail to compose email when the Internet’s down. Other productivity apps like Remember the Milk and Google Docs offer offline modes for their applications.

I rely on Bonjour-enabled chat (iChat on a Mac, Pidgin + Bonjour for Windows) for the times when the Internet or DHCP server goes down and I need to chat with somebody in another office or house.

For large-ish downloads, I’ve taken to using a download manager (flashback to the days of the 56K modem trying to pull down a 20 MB file)! I’ve been using Free Download Manager. The name isn’t much, but it’s a full-featured open source program that lets you schedule, pause, and resume downloads.

Just FYI: my downloads range between 900 bytes/sec up to 28KB/sec.

Lesson: Designing Web pages for low bandwidth is still a good idea

As a Web developer, I tend to make the erroneous assumption that people have the same quality of high-speed connection as I do back home in the States. Now that I’m a continent away from most of the sites I visit, I’m realizing that optimizing a site for bandwidth savings can be a lifesaver. Here, a round trip to and from the States can take up to 20 seconds, and that adds up over the course of a page load. As a Web designer or developer, ask yourself what you can do for the “56K guy”: Are you using CSS sprites? A CDN? Compressing (gzip, mod_deflate) your output? Are you optimizing your image files for the Web?
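Output compression is the cheapest of those wins. Since we run nginx, here’s roughly what turning it on looks like – a sketch, with illustrative values rather than a tuned production config:

```nginx
# nginx http block (sketch): gzip text responses before they hit the wire
gzip            on;
gzip_comp_level 6;     # compression vs. CPU tradeoff; 6 is a common middle ground
gzip_min_length 1000;  # tiny responses aren't worth compressing
gzip_vary       on;    # send "Vary: Accept-Encoding" so proxies cache correctly
gzip_types      text/css application/x-javascript application/json text/plain;
# text/html is always compressed once gzip is on
```

HTML, CSS, and JavaScript routinely shrink by 60–80% under gzip, which matters a lot more at 28KB/sec than it does on a fat pipe. (On Apache, mod_deflate gets you the same thing.)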

Lesson: Knock out as many failure points as possible

Because the wireless bridge seemed to be the weakest link in our network, we decided to bypass it entirely. We ran a very long Ethernet line from the IT office to the houses, which meant ordering a 200-meter length of cable and some sleeving, and doing a lot of trench-digging to bury the cable under the dirt access roads. Our work paid off: everybody in the houses now enjoys a reliable network link to the main server (including me!).