The Sweet Spot

Andrew Hao's thoughts about software engineering, design, and anything shiny.

My own robot training buddy.


As an ultra runner, I am really into the mountains. As a software engineer, I’m really into data. So naturally, I’m interested in the intersection of both.

I’ve particularly been interested in how systems like Strava work, especially when they quantify what is known as a “Suffer Score”, a single number denoting the amount of training stress (a.k.a. suffering) you put yourself through in a workout.

How does a track workout compare to a long day on the trails? Which is tougher: a five-mile tempo road run in and around my neighborhood, or a tough two-mile climb into a local regional park?

Data in…

I first attacked the problem of getting data off of my phone. I record my GPX tracks in Runmeter, a fantastic iPhone application with all sorts of metrics and data export capabilities. What I wanted was a seamless way to get the data off my phone without fuss after a hard workout.

The application has a nifty feature in which it can automatically send an email to an email address after a workout is completed.

I wrote an email ingester, Velocitas, a Node application that receives these emails with the help of Cloudmailin, which fires off a POST request to the app. Velocitas does the following (sketched below):

  • curls and downloads the GPX link embedded in the email.
  • Saves the GPX file to a linked Dropbox account.
  • Republishes the GPX file to a linked Strava account.
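
Velocitas itself is written in Node, but the flow is small enough to sketch. Here is a rough equivalent in Ruby with Sinatra; the Cloudmailin payload field and the Dropbox/Strava helpers are hypothetical stand-ins, not Velocitas’s actual code:

require "sinatra"
require "net/http"
require "uri"

# Cloudmailin POSTs each inbound email to this endpoint.
post "/incoming" do
  # Assumes the plain-text email body arrives as params["plain"].
  body = params["plain"].to_s

  # Find the first GPX link embedded in the email.
  gpx_url = body[%r{https?://\S+\.gpx}]
  halt 422, "no GPX link found" unless gpx_url

  gpx = Net::HTTP.get(URI(gpx_url))

  save_to_dropbox(gpx)     # hypothetical helper: save to the linked Dropbox
  republish_to_strava(gpx) # hypothetical helper: republish to Strava

  status 200
end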

Deriving the Training Stress Score

Next up: I wanted to do a quick and dirty implementation of the (run-based) Training Stress Score. Stressfactor, a Ruby gem, is what came out of it.

It implements the rTSS as detailed in this article.

(duration_seconds * normalized_graded_pace * intensity_factor) /
(functional_threshold_pace * 3600) * 100
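
The formula is simple enough to sanity-check in code. A minimal sketch in Ruby, where the method and argument names are mine and not Stressfactor’s actual API:

# Run-based Training Stress Score, per the formula above. Assumes the
# intensity factor is the ratio of normalized graded pace to threshold
# pace, and that both paces share units (e.g. meters per second).
def rtss(duration_seconds, normalized_graded_pace, functional_threshold_pace)
  intensity_factor = normalized_graded_pace / functional_threshold_pace
  (duration_seconds * normalized_graded_pace * intensity_factor) /
    (functional_threshold_pace * 3600) * 100
end

# A one-hour run held exactly at threshold pace scores 100 points.
rtss(3600, 4.0, 4.0) # => 100.0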

Stressfactor is a higher-level toolbelt for deriving meaning from GPX tracks; at the moment it calculates the stress score and grade-adjusted pace.

The data still needs validation, so I’m eager to run it on my data set of GPX tracks from the past years.

Generating reports

I’m working on this part right now — I need to generate a nicely formatted report from my workout history in Dropbox, displayed per GPX track. I’ve started the project — Stressreport.

Some things I’ve learned and am learning

  • The human body is complex, and cannot be easily modeled without sufficient data. That said, what I’m doing now may be sufficient for basic training data.
  • The nature of parsing and generating higher-order stats from raw data may lend itself well to experimentation with functional languages. I’m interested in trying to reimplement Stressfactor in Scala, or a functional equivalent.
  • Deploying all my apps on Heroku’s free tier may actually be an interesting start to building a microservice architecture — with the space limitations on Heroku, I’m forced to build new features on new services. Call it cheapskate architecture.

Recap: QCon SF 2014


Blurb sent me off to QCon SF 2014 for three days.

Notes

I took a series of notes each day in attendance.

Summary

  • Big trends in continuous delivery and deployment — deploy more often, smaller feedback loops
  • A lot of emphasis on event driven architectures + microservices. Lots of emphasis on DDD as a design tool.
  • Reactive systems with functional implementations were widely discussed as a scaling tool (backpressure-sensitive) and as a coordination tool between multiple async services.
  • Big data/realtime streaming talks were interesting — my personal experience with them is limited, but it seems there is a debate over the merits of existing Lambda architecture practice.
  • A lot of talk about microservice orchestration tools — acknowledging the pain of configuration and management of many services.
  • Scala got a lotttt of attention. Probably because of its presence in bigger companies like Netflix, Twitter, LinkedIn. Wonder what smaller startups are using.
  • Web Components were a big upcoming trend in frontend technologies. Strong modularization of views + behaviors in HTML documents.

Questions

  • If I could do a startup over again, would I begin an app in Rails? Where is the sweet spot for that sort of application architecture?
  • How can I design systems such that they can be extractible into focused components/services as early as possible?
  • How can we plan for failures (fault injection)?
  • How does one implement change in software engineering organizations? Bottom-up (organic initiatives bubbling up through management) vs top-down (management/software leaders direct org to implement).
  • How are we doing with encouraging women and minorities who traditionally are underrepresented in our industry?
  • What are places in our hiring funnels that, unbeknownst to us, may be turning away or deterring women and minorities?

Conway’s Law for humans


If you’re familiar with Conway’s Law, it states:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

Or in layman’s terms, your software systems reflect the structure of the teams that create them.

Think about it — do your teams prefer to do everything themselves, or do they ask for help from other teams? In general, teams prefer to have as few dependencies as possible. The work an engineer has to do to send an email, or to wait on work from another team (like an API change or a design change), is usually time-consuming and burdensome. Therefore, it is not (usually) in the team’s best interests to cross silos and ask for help from others. Teams that look like this tend to work in codebases that are generally monolithic and wholly owned by the team.

This isn’t wrong or bad; it’s just a sociological observation. It’s why companies like Spotify, Netflix, and Amazon have embraced Conway’s Law and restructured their organizations to match the microservice-driven architecture they want. Small teams work on small codebases and are empowered to make the changes they need.

A corollary and some observations about your company culture.

Here’s a corollary, which I’ve heard in various shapes and forms. Paraphrased:

An organization’s structure tends to pattern after its leaders’ communication patterns.

I’ve been pushing an effort to consolidate our company’s frontend styles and UX into a unified framework, in an attempt to standardize the look and feel of the site.

However, in working with our designers, I realized that they weren’t talking to each other. Designers in one department had design aesthetics opposed to those of designers in another. This caused problems because the frontend framework itself was becoming fragmented, and you could see it in the code: Version A of the styleguide went into Department A’s products, and Version B went into Department B’s.

In this case, as we kept rolling out this framework, I realized our organization had no single owner for the design language of the site. Why? I had to wonder if it had to do with some deeper communication issues between the heads of the two departments.

Code can be a good canary for organizational issues, calling out the larger human issues at hand. In this case, Conway’s Law helps us root out structural concerns and bring them into the light. Leaders can pay attention to these signals and check themselves to become more open communicators.

Mocks aren’t stubs: mockist & classic testing


With the famed “TDD is dead” debate around the Rails community largely coming to an end, I found myself referencing Martin Fowler’s article, Mocks Aren’t Stubs a good deal, trying to make sense of it in terms of how I write tests and code.

In this post I’m going to talk about mocking and stubbing and their roots, paraphrase Fowler in an attempt to explain their differences, and walk through a couple of code examples. In each case, I’m going to attempt to build this example out in Ruby and RSpec 3.

Let’s assume this implementation in our code for a BookUpdater object in Ruby. Its job is to call through its collaborating ApiClient, which wraps some aspect of an API that we want to call.

# Update a book's metadata in our systems.
class BookUpdater
  attr_accessor :book, :api_client, :response

  def initialize(book, api_client)
    @book = book
    @api_client = api_client
  end

  def updated?
    # Guard against a nil response (update! hasn't run yet).
    !!(response && response.success?)
  end

  def update!
    # Assign through the accessor; a bare `response =` would only set a local.
    self.response = api_client.call_update_api(book)
  end
end

What they are

Mocks

Mocks are fake objects that verify that they have received messages. In RSpec 3, we create these as doubles and set message expectations on them (earlier versions of RSpec provided a dedicated mock helper).

api_client = double('api client')
book = Book.new

expect(api_client).to receive(:call_update_api).with(book).and_return(true)

subject = BookUpdater.new(book, api_client)
subject.update!

What’s happening here? RSpec creates a mock api_client object that will verify that, after the test case executes, it has received the :call_update_api message with the correct arguments.

The main point of this style of testing is behavior verification — that is, that your object is behaving correctly in relation with its collaborators.

Double

Let’s take a look at a double — also known as a stub. A double is a fake object that is set up to respond to a certain message with a pre-canned response, each time. Let’s take a look at how I would set up a test using doubles:

api_client = double('api client')
response = double('response', :success? => true)
book = Book.new
subject = BookUpdater.new(book, api_client)

allow(api_client).to receive(:call_update_api).with(book).and_return(response)

expect { subject.update! }.to change(subject, :updated?).from(false).to(true)

Okay, so what’s the big deal here? My test case still passes. Note that I had to change the test to focus its expectation on the subject’s state instead of on the api_client.

The focus of using doubles is for state verification — that is, that so long as everybody around me is behaving according to their contracts, the test merely has to verify that internal object state changes correctly.

A third way — real objects

I won’t cover this in much depth, but with sufficiently simple objects, one could actually instantiate real objects instead of doubles, and test an object against all its collaborators. This is, in my experience, the most common approach in Rails + ActiveRecord applications.
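
For illustration, a classic-style test of BookUpdater might look like the sketch below, assuming ApiClient is cheap and safe to construct in a test (no live network calls):

# Classic style: exercise the subject against real collaborators.
book = Book.new
api_client = ApiClient.new # assumes a test-safe client

subject = BookUpdater.new(book, api_client)
subject.update!

expect(subject).to be_updated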

Classic vs Mockist testing: different strokes for different folks

As we saw above, the key difference between the mock and the stub (the double) is the focus of the test. With the mock, the focus is on the messages being sent to the collaborators. With the double, the focus is on the subject under test (SUT).

Mocks and stubs/doubles are tools that we can use under the larger umbrellas of two TDD philosophical styles: classic vs mockist styles.

Classic TDD

  • Classic TDDists like using doubles or real objects to test collaborators.
  • From personal experience, testing classically is oftentimes the path of least resistance. There isn’t the expectation setup and verification that mockist testing requires of you.
  • Classic TDD sometimes results in creating objects that reveal state — note how the BookUpdater needed to expose an updated? method.
  • Setting up the state of the world prior to your test may be complex, requiring you to set up all the objects in your universe. This can be a huge pain (has anybody ever had this problem with overcomplicated Rails models and spidery associations? Raise your hands…). Classicists may argue that the root cause is not paying attention to your model architecture, and that having too many associations is an antipattern. Alternatively, classicists oftentimes generate test factories (e.g. Rails’ FactoryGirl gem) to manage test setup (see the sketch after this list).
  • Tests can be treated more like black boxes, testable in isolation (due to verifications on object state), and are more resistant to refactoring.
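
For instance, a factory centralizes that state-of-the-world setup in one place. A minimal sketch with FactoryGirl (the attributes here are hypothetical):

FactoryGirl.define do
  factory :book do
    title  "Implementing Domain-Driven Design"
    author "Vaughn Vernon"
  end
end

# A spec can then build a fully valid Book in one line:
book = FactoryGirl.create(:book)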

Mockist TDD

  • Mockist TDD utilizes mocks to verify behavior between objects and collaborators.
  • It can be argued that this style develops “purer” objects, mainly concerned with passing messages to one another. Fowler notes that these objects prefer role interfaces.
  • These tests are easier to set up, as they don’t require setting up the state of the world prior to test invocation.
  • Tests tend to be more coupled to implementation, and may be more difficult to refactor due to very specific requirements for message passing between collaborators.
  • Fowler points out that being a mockist means your objects prefer Tell, Don’t Ask. A nice side effect of TDA is that you can generally avoid Law of Demeter violations.

In conclusion

Coming from a classic TDD background, I’ve oftentimes viewed mockist testing with some suspicion, particularly around how much work is involved in bringing it about. Over the years, I’ve warmed up to mockist testing, but have not been diligent enough at driving pure TDD with mocks. In reviewing Fowler’s comments, I’m intrigued by the possibilities of mockist TDD for affecting system design, particularly its natural inclination toward role interfaces. I look forward to trying pure mockist TDD in a future project.


Running Mocha tests with ES6/AMD modules


In one of my personal projects (Chordmeister), I’ve been trying to upgrade the code to be written in ES6 modules and transpile down to AMD modules with Square’s very excellent es6-module-transpiler project.

Since I’ve already updated an Ember app of mine to try ES6, I figured it was high time to do it on another project.

Sorry, CoffeeScript, but I’m moving on.

First problem: CoffeeScript seems indecisive with respect to ES6 support. In order to use the import or export keywords, I had to wrap the statements in backticks (CoffeeScript’s embedded-JavaScript escape), making the code look like this:

`import ClassifiedLine from "chordmeister/classified_line"`
class Parser
  # Implementation

`export default Parser`

Except this wasn’t being picked up by es6-module-transpiler, since CoffeeScript wraps the entire file in a closure. The import and export statements ended up buried inside that closure, where the transpiler couldn’t see them:

define("chordmeister/parser",
  [],
  function() {
    "use strict";
    (function() {
      // Oops, I wasn't transpiled!
      import ClassifiedLine from 'chordmeister/classified_line';
      var Parser;
      Parser = (function() {
        // Implementation
      }
      )();
      // Oops, I wasn't transpiled!
      export default Parser;
      })()
  });

So the first call: ditch CoffeeScript and write this in pure ES6.

import ClassifiedLine from 'chordmeister/classified_line';
var Parser;

Parser = (function() {
  // implementation
})();

export default Parser;

Which transpiled nicely to:

define("chordmeister/parser",
  ["chordmeister/classified_line","exports"],
  function(__dependency1__, __exports__) {
    "use strict";
    var ClassifiedLine = __dependency1__["default"];
    var Parser;
    Parser = (function() {
      // Implementation
    })();
    __exports__["default"] = Parser;
    });

Next up: adding AMD support in Mocha

Okay, so we need to set up a few things to get Mocha playing well with RequireJS, the AMD loader.

Our plan of attack will be to leverage the generated AMD modules and load our tests up with them. We have the benefit of being able to inject specific dependencies into our test suite.

The tricky parts are covered below.

Set up the Mocha index.html runner

Install mocha, require.js, and chai via bower, then plug them into the harness:

test/index.html
<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    <title>Mocha Spec Runner</title>
    <link rel="stylesheet" href="../bower_components/mocha/mocha.css">
</head>
<body>
    <div id="mocha"></div>
    <script src="../bower_components/mocha/mocha.js"></script>
    <script src="../bower_components/chai/chai.js"></script>
    <script data-main="test_helper" src="../bower_components/requirejs/require.js"></script>

</body>
</html>

Note the references to data-main="test_helper", which is require.js’s way of determining its entry point after it loads.

Set up a test runner.

test/test_helper.js
// Configure and set up the tests.
require.config({
  baseUrl: "../build/"
});

// Set up Mocha's BDD interface before the specs load, so `describe`
// and `it` are defined when the spec modules execute.
mocha.setup('bdd');
var expect = chai.expect;

// Load up the files to run against
var specs = [
  'chordmeister/chord_spec.js',
  'chordmeister/song_spec.js',
  'chordmeister/parser_spec.js',
  'chordmeister/classified_line_spec.js',
];

// Start up the specs.
require(specs, function() {
  // Why? Some async loading condition? Is there a callback I should be hooking into?
  setTimeout(function() {
    mocha.run();
  }, 100);
});

You’ll notice that I was having timing issues between the spec suite loading and mocha.run(). Pushing the run back a hundred milliseconds or so seemed to fix it.

AMD gotchas

Pay attention to the default key that the module exports under. This is important to remember, since native ES6 will allow you to import it directly with its native syntax:

import Parser from "chordmeister/parser"

But if you’re using RequireJS/AMD modules, you’ll need to explicitly reference the default key on the required module, like so:

require(["chordmeister/parser"], function(parser) {
  Parser = parser.default;
  new Parser() // and do stuff.
});

Let me know if you have any questions!

Implementing DDD: Chapter 2


Chapter 2: Domains, Subdomains, and Bounded Contexts

A Domain is what a business does and the surrounding context of how it does business. It is important to model out its Subdomains — that is, the smaller components of the business that collaborate in its real, day-to-day operations. The most business-critical of these the book describes as the Core Domain.

Finally, a Bounded Context is the physical manifestation of the solution space as software — models living in a real application. Its key feature in the context of DDD is as a linguistic barrier between different domains.

In an ideal world, each Subdomain maps directly to one Bounded Context. In the real world, this is less common since we tend to build things into monolithic systems. Still — many monolithic applications have several components that could in themselves be bounded contexts.

Side note: In Rails, one could think of Engines as a Bounded Context. But that might be a blog post for another time.

It is important to get these ideas and concepts right, because we need accurate models of our systems to reason about what they do.

Bounded Contexts and terms

It’s not usually realistic to get the entire organization agreeing on a universal linguistic definition for every term. Instead, DDD assumes that different terms have different meanings in different contexts.

The author then dives into an example of a book, where a book means several different things to different people in different contexts. A book is touched by authors, graphic designers, editors, and marketing folks, and in each of these contexts its features mean different things at different times. It is impossible to develop an all-knowing Book model without disagreement between stakeholders. DDD instead acknowledges these differences and allows stakeholders to use linguistic terms from within their unique Bounded Contexts.
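
A sketch of how this might look in code (the attribute names are hypothetical): each Bounded Context owns its own Book model with its own meaning.

# In the editorial context, a "book" is a manuscript under review.
module Editorial
  class Book
    attr_accessor :manuscript, :revision_notes
  end
end

# In the marketing context, a "book" is a sellable product.
module Marketing
  class Book
    attr_accessor :cover_image, :price, :promotional_copy
  end
end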

Bounded Contexts may include things like:

  • Object models
  • A database schema / persistence layer
  • A SOAP or REST API
  • A user interface

Bounded context antipatterns

You may be tempted to divide up a bounded context by architectural concerns, or because you want to assign smaller tasks to developers (resource allocation). Beware that this kind of work tends to fragment the language.

DDD operates on linguistic drivers. The unity, cohesion, and domain adherence of the bounded context should be the first priority in the design.

Assigning two teams to the same bounded context results in fragmentation of the bounded context.

Ideally: we strive to assign one team to work on one Bounded Context with one Ubiquitous Language at a time.

In Rails, what are the bounded contexts? It could be the top-level Rails application, an engine, or a gem that defines the context boundaries.

A story…

The chapter then goes on to describe a fictional team designing through three iterations of their DDD strategy:

First, a system in which all domains live within the same bounded context. The team sees the folly of this and refactors with some tactical patterns, like creating services.

This, however, misses the point. The team realizes it needs to listen to the business and its domain experts to find out exactly where the right places are to segregate the contexts. It discovers that the business wants to go in a new direction, which allows the team to segregate the domain in a way that enables future directions for the business.

How often are we as developers in conversation with our product owners and asking where the business wants to go in the future?

Conversations with the business reveal an intention to develop an add-on product to the core product. This implies the development of two subdomains. However, further investigation reveals that the shared overlap of certain domain models (like users, roles, and access management) cannot simply be identically shared between two systems, since their linguistic meanings in the two systems differ slightly. Instead, the developers use the linguistic problem to develop a third bounded context.

The developers separate their app into three bounded contexts built as three software systems.

Six months as a manager


It’s been approximately six months since I’ve entered engineering management. Here are some thoughts reflecting back on that season now.

I didn’t like it at first.

Let’s face it: I didn’t like the feeling of being a “manager.” The title comes with too much baggage: pointy-haired bosses and ineffective layers of waste. Why do we need managers anyway?

More to the point: I love coding. I love working on rewarding projects. I hated the thought of being tied up in meetings while day by day getting more and more disconnected from code.

So I struggled a lot in those first few months (still do), trying to balance working with product management to communicate a strategic vision, and with other engineering teams to get a tactical vision in place, against staying deeply involved with the codebase and doing code reviews.

You can’t code anymore.

In the end? I had to come to grips with the reality that my team needed me in a different role — a strategic role, helping them see the big picture, sniffing out dependencies and destroying blockers before they got to them.

But I still needed to be spot-on with the code — I still needed to be in the know with new product features as they are being developed. So it’s my priority to be in design discussions, but I can’t be the implementer anymore.

So I’ve started a change in mindset — how can I offload all the random codebase knowledge I’ve acquired over the years to the team? How can I effectively share my expertise so I’m out of a (coding) job? I’m increasingly convinced that the answer is pair programming.

Pairing your way out of a job

Nowadays, if there’s a story or task for which I am the only one (or one of few) with domain knowledge, I’ll ask a team member to pair with me on the feature. We get a really effective teaching tool about the domain model, plus the bonus that if I get pulled into a meeting, development can continue.

But you still have to code.

So I code on the weekends (not everybody can do this!)

We’re a Rails team taking on a new Ember project. I need to get my chops up around Ember so I’ve decided to pull in a side project to hack on the weekends. My side work on Wejoinin lets me do all the Rails things I don’t get to do anymore at work.

And it works for me because I love to code, and to be honest, I don’t ever want to get weak in that area.

To be honest, I’d love more feedback from other engineering managers in that area. How do you keep your chops sharp?

People are your priority.

I’ve read often that empathy is everything when it comes to management, and it still rings true. No matter how this product launch goes, in the end your team will remember how you treated them, how their thoughts and feelings were taken into account.

One thing I’m trying to check in myself is my tendency to jump in on other people’s sentences and cut them off. It sounds silly, but sometimes I realize I like to talk a lot more than listen. As a manager, sometimes you need to dwell on a team member’s feedback for some time before you write it off. There’s usually a core message in there that’s completely valid — a frustration, a desire for change. Empathy means communicating the message: “I heard you, and I respect your thoughts.”

It’s your attitude

And finally, here’s what I think: your attitude toward your company and the team makes ALL the difference.

  • Is your orientation truly to support your team? How will it be clear in the long run that you supported someone in their career?
  • Where are places in your workday where you can connect and sympathize with people — share a joke, listen to someone’s frustration, or just simply go out to lunch?
  • How are you giving people something to work toward: a vision of changing the world? (For Blurb, it’s about transforming the publishing industry.)
  • How are you addressing negativity? Grumbling is important to address head on in its own space, but how are you empowering people to take charge of issues of culture or organizational friction?
  • How are you checking your own negativity? Sometimes we forget that we’re the solutions to our own problems, and I’ve often found that the very issues you assume are impossible to change are crackable by having the right relationships with the right people.

The little things matter.

Growth areas

Lord knows I have a lot to learn here. One area I’m learning to grow in is how to give honest and accurate feedback to team members, without fearing how they’re going to receive it.

Another area? Delegation. I’m learning to delegate the little micro-responsibilities in my job that I just kind of expect that I’m supposed to do. Case in point: every week we accept a translation import from a third-party translation service. I realized that I was burning a lot of time reviewing hundreds of lines of translation keys every week, and the repetition of it was sapping a lot of my energy. I had to learn to ask a team member for help and give them responsibility to own that function of the process.

Blogging through: Implementing Domain-Driven Design


In recent conversations with coworkers, the topic of Domain-Driven Design has arisen on more than a few occasions in design and architecture meetings. “Have you read it?” a coworker asked, “I think it’d help us a lot.”

I’ve gotten my hands on a copy of Implementing Domain-Driven Design by Vaughn Vernon, which is a more pragmatic approach to DDD than the original Domain-Driven Design book by Eric Evans.

My desire is to share my outlines of the book chapter-by-chapter, hopefully once a week.

Chapter 1: Getting Started with DDD

Can I DDD?

  • DDD helps us design software models where “our design is exactly how the software works” (1).
  • DDD isn’t a technology, it’s a set of principles that involve discussion, listening, and business value so you can centralize knowledge.
  • The main principle here is that we must “understand the business in which our software works” (3). This means we learn from domain experts in our field.
  • What is a domain model? An object model where objects carry both data and behavior, with accurate business meaning.

Why You Should Do DDD

  • Domain experts and devs on same playing field, cooperation required as one team. (Agile teams, anyone?)
  • The business can learn more about itself through the questions asked about itself.
  • Knowledge is centralized.
  • Zero translation between domain experts, software developers, and the software itself.
  • “The design is the code, and code is the design.” (7)
  • It is not without up-front cost.

The problem

  • The schism between business domain experts and software developers puts your project (and your business) at risk.
  • The more time passes, the greater the divide grows.

Solution

  • DDD brings domain experts and software developers together to develop software that reflects the business domain mental model.
  • Oftentimes this requires that they jointly develop a “Ubiquitous Language” – a shared vocabulary and set of concepts that are jointly spoken by everybody.
  • DDD produces software that is better designed and architected → more testable → clearer code.
  • Take heed: DDD should only be used to simplify your domain. If the net cost of implementing DDD is only going to add complexity, then you should stay away.

Domain model health

  • As time passes, our domain models can become anemic, and lose their expressive capabilities and clean boundaries. This can lead to spaghetti code and a violation of object responsibilities.
  • Why do anemic domain models hurt us? They claim to be well-formed models but they hide a badly designed system that is still unfocused in what it does. (Andrew: I’ve seen a lot of Service objects that claim to be services but really are long scripts to get things done. There might be a cost of designing the Service interface, but inside things are just as messy as before we got there.)
  • Vernon seems to blame the influence of Visual Basic-style IDE tooling as it carried over into Java libraries — too many explicit getters and setters.
  • Vernon presents two contrasting code samples — one with an anemic model that reads like a long string of commands, and another with descriptive method names. He makes the case that simply reading the latter serves as documentation of the domain itself.

How to do DDD

  • Have a Ubiquitous Language shared by the whole team, from domain experts to programmers.
  • Steps to coming up with a language:

    1. Draw out the domain and label it.
    2. Make a glossary of terms and definitions.
    3. Have the team review the language document.
  • Note that a Ubiquitous Language is specific to the context it is implemented in. In other words, there is one Ubiquitous Language per Bounded Context.

Business value of DDD

  1. The organization gains a useful model of its domain.
  2. A precise definition of the business is developed.
  3. Domain experts contribute to software design.
  4. A better user experience is gained.
  5. Clean boundaries for models keep them pure.
  6. Enterprise architecture is better designed.
  7. Continuous modeling is used — the working software we produce is the model we worked so hard to create.
  8. New tools and patterns are used.

Challenges

  • The time and effort required to think about the business domain, research concepts, and converse with domain experts.
  • It may be hard to get a domain expert involved due to their availability.
  • There is a lot of thought required to clarify pure models and do domain modeling.

Tactical modeling

  • The Core Domain is the part of your application that has key and important business value — and may require high thought and attention to design.
  • Sometimes DDD may not be the right fit for you — if you have a lot of experienced developers who are very comfortable with domain modeling, you may be better off trusting their opinion.

DDD is not heavy.

  • It fits into any Agile or XP framework. It leans into TDD, e.g. you use TDD to develop a new domain model, describing how it interacts with other existing models, through the red-green-refactor cycle.
  • DDD promotes lightweight development. As domain experts read the code, they are able to provide in-flight feedback to the development of the system.

Moving to Ember App Kit


I’ve noticed a bit of the buzz around Ember App Kit recently and decided to move Hendrix, my music management app, over from a Yeoman-generated Ember app to EAK with all its bells and whistles.

What’s the difference?

Well on the surface, the two frameworks aren’t very different. The standard Yeoman build tool sets you up with Grunt and Bower, which is what EAK provides you out of the box. The cool stuff happens when you dive under the hood: ES6 module transpilation and an AMD-compatible Ember Resolver, built-in Karma integration and a built-in API stub framework for development and test environments.

The joys of modules

What I didn’t realize was that compiling to ES6 modules required my files to be named exactly how the modules were going to be resolved, with the extra caveat that resource actions need to live in their own directories. Recall that in the old way of doing things with globals and namespaces, you could get away with throwing a route file like this in your app directory:

routes/
  songs_index_route.js

And inside:

MyApp.SongsIndexRoute = Ember.Route.extend({
  //...
});

In EAK’s world, you need to nest the file under the songs/ directory, and strip the type from the filename, like so:

routes/
  songs/
    index.js

Inside the file, you assign the route class to a variable and export it as the module’s default.

var SongsIndexRoute = Ember.Route.extend({
  //...
});
export default SongsIndexRoute;

File name matters

The new Ember resolver loads modules in a smart way — according to how the framework structures resources, controllers, and their corresponding actions. So visiting #/songs in my app caused it to look up and load appkit/routes/songs/index. What I didn’t realize was that this module must live at a very specific place in the directory structure, and I had left the module type in the file name the first time around, like this:

routes/
  songs/
    index_route.js

There are no types in the module names — or the filenames, for that matter. I hadn’t realized this (I’m also an AMD newbie), so I had left my file named index_route.js. That meant the module loader had stored the SongsIndexRoute module under appkit/routes/songs/index_route, while the Resolver was doing a route lookup for appkit/routes/songs/index. Renaming the file to:

routes/
  songs/
    index.js

did the trick.

Ember Data, Rails, CORS, and you!


I’m starting up a new personal project involving Ember-Data and Rails (more to come). The gist of it is that it’s a pure frontend app engine built in Yeoman and Grunt, and designed to talk to a remote API service built on Rails.

So since it’s a remote API, I’ve got to enable CORS, right?

Install CORS via rack-cors

Gemfile
gem "rack-cors", :require => "rack/cors"
config/application.rb
config.middleware.use Rack::Cors do
  allow do
    origins "*"

    resource "*",
      :headers => :any,
      :methods => [:get, :post, :put, :delete, :options, :patch]
  end

  allow do
    origins "*"
    resource "/public/*",
      :headers => :any,
      :methods => :get
  end
end

A very naive implementation with zero security whatsoever. Anyways. Onward!

Get Ember-Data DS.RESTAdapter talkin’ CORS

I saw conflicting documentation on Ember-Data and CORS — it seemed like it should support CORS out of the box. Apparently this is not so.

In my Ember app’s store.js (or anywhere your app loads before the application adapter is defined), do this:

store.js
$.ajaxSetup({
  crossDomain: true,
  xhrFields: {
    withCredentials: true
  }
});

Hendrix.Store = DS.Store.extend();
Hendrix.ApplicationAdapter = DS.RESTAdapter.extend({
  host: "http://localhost:3000",
})

$.ajaxSetup, though its usage is not recommended, sets global options on the jQuery ajax object. The jQuery documentation describes the options you can modify.

Why doesn’t Ember support this out of the box? I think it’s because it can’t be supported in older IE, where one must use an XDomainRequest (XDR) object for CORS.

I’ve posted an Ember follow-up question in the forums for discussion.

Get Rails talking JSON out of its mimetype confusion.

Did you know that Rails does not obey the ordering of the HTTP Accept: header? I was trying to figure out why my Rails controllers were trying to render HTML instead of JSON when the request headers were:

'Accept: application/json, text/javascript, */*; q=0.01'

A very long-winded discussion on the Rails project reveals that, well, nobody has it figured out yet. Most modern browsers do send Accept: headers with proper specificity, but for the sake of older-browser compatibility, the safe behavior for servers is still to return HTML when */* is specified.

What does this mean for Rails developers who want to rely on Accept: mimetype lists? Well, we either wait for the Rails project to support mimetype specificity (and for older browsers to die out), or we are encouraged to include the format explicitly in the URI.
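
On the Rails side, forcing the format into the URI makes the controller’s respond_to block unambiguous. In a minimal sketch with a hypothetical controller, requesting /songs.json hits the JSON branch no matter how the browser orders its Accept: header:

class SongsController < ApplicationController
  def index
    @songs = Song.all

    # /songs.json forces the JSON branch; plain /songs falls back to HTML.
    respond_to do |format|
      format.html
      format.json { render :json => @songs }
    end
  end
end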

I chose to have Ember append the .json suffix to the URL, thanks to this SO post.

store.js
Hendrix.ApplicationAdapter = DS.RESTAdapter.extend({
  host: "http://localhost:3000",
  // Force ember-data to append the `json` suffix
  buildURL: function(record, suffix) {
    return this._super(record, suffix) + ".json";
  }
})

More to come on how this app works.