The Sweet Spot
On software, engineering leadership, and anything shiny.

Blogging through: Implementing Domain-Driven Design

In recent conversations with coworkers, the topic of Domain-Driven Design has
arisen on more than a few occasions in design and architecture meetings.
“Have you read it?” a coworker asked. “I think it’d help us a lot.”

I’ve gotten my hands on a copy of Implementing Domain-Driven Design
by Vaughn Vernon, which is a more pragmatic
approach to DDD than the original Domain-Driven Design book by Eric Evans.

I plan to share my outlines of the book chapter by chapter,
hopefully once a week.

Chapter 1: Getting Started with DDD

Can I DDD?

  • DDD helps us design software models where “our design is exactly how
    the software works” (1).
  • DDD isn’t a technology; it’s a set of principles that involve
    discussion, listening, and business value so you can centralize
    knowledge.
  • The main principle here is that we must “understand the business in
    which our software works” (3). This means we learn from domain experts
    in our field.
  • What is a domain model? An object model where objects carry both
    their data/persistence concerns and an accurate business meaning.

Why You Should Do DDD

  • Domain experts and devs are on the same playing field, cooperating
    as one team. (Agile teams, anyone?)
  • The business can learn more about itself through the questions asked
    about itself.
  • Knowledge is centralized.
  • Zero translations between domain experts and software devs and
    software.
  • “The design is the code, and code is the design.” (7)
  • It is not without up-front cost.

The problem

  • The schism between business domain experts and software developers
    puts your project (and your business) at risk.
  • The more time passes, the greater the divide grows.

Solution

  • DDD brings domain experts and software developers together to develop
    software that reflects the business domain mental model.
  • Oftentimes this requires that they jointly develop a “Ubiquitous
    Language” - a shared vocabulary and set of concepts that are jointly
    spoken by everybody.
  • DDD produces software that is better designed and architected ->
    more testable -> clearer code.
  • Take heed: DDD should only be used to simplify your domain. If the net
    cost of implementing DDD is only going to add complexity, then you
    should stay away.

Domain model health

  • As time passes, our domain models can become anemic, and lose
    their expressive capabilities and clean boundaries. This can lead
    to spaghetti code and a violation of object responsibilities.
  • Why do anemic domain models hurt us? They claim to be well-formed
    models but they hide a badly designed system that is still unfocused
    in what it does. (Andrew: I’ve seen a lot of Service objects that
    claim to be services but really are long scripts to get things done.
    There might be a cost of designing the Service interface, but inside
    things are just as messy as before we got there.)
  • Vernon seems to blame the IDE conventions of Visual Basic, which
    influenced Java libraries toward too many explicit getters and
    setters.
  • Vernon compares two code samples – one an anemic model that reads
    like a long string of commands, the other with descriptive method
    names. He makes the case that simply reading the latter is
    documentation of the domain itself.
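
Vernon’s samples are in Java; a rough Ruby sketch of the same contrast (the class and method names here are my own, not the book’s) might look like:

```ruby
# Anemic style: bare setters, so the business meaning of the change
# lives in whatever script drives the object.
class AnemicCustomer
  attr_accessor :first_name, :last_name
end

anemic = AnemicCustomer.new
anemic.first_name = "Jimi"
anemic.last_name  = "Hendrix"

# Expressive style: the model exposes an intention-revealing method
# named in the Ubiquitous Language, so reading the call site
# documents the domain itself.
class Customer
  attr_reader :first_name, :last_name

  def change_personal_name(first_name, last_name)
    if first_name.to_s.empty? || last_name.to_s.empty?
      raise ArgumentError, "both names are required"
    end
    @first_name = first_name
    @last_name  = last_name
  end
end

customer = Customer.new
customer.change_personal_name("Jimi", "Hendrix")
```

Both end up with the same state; the difference is that the second version tells you, in domain language, *what* just happened.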

How to do DDD

  • Have a Ubiquitous Language that the whole team, from domain
    experts to programmers, shares and speaks together.
  • Steps to coming up with a language:

    1. Draw out the domain and label it.
    2. Make a glossary of terms and definitions.
    3. Have the team review the language document.
  • Note that a Ubiquitous Language is specific to the context it is
    implemented in. In other words, there is one Ubiquitous Language per
    Bounded Context.

Business value of DDD

  1. The organization gains a useful model of its domain.
  2. The precise definition of the business is developed.
  3. Domain experts contribute to software design.
  4. A better user experience is gained.
  5. Clean boundaries for models keep them pure.
  6. Enterprise architecture is better designed.
  7. Continuous modeling is used – the working software we produce is the
    model we worked so hard to create.
  8. New tools and patterns are used.

Challenges

  • The time and effort required to think about the business domain,
    research concepts, and converse with domain experts.
  • It may be hard to get a domain expert involved due to their
    availability.
  • There is a lot of thought required to clarify pure models and do
    domain modeling.

Tactical modeling

  • The Core Domain is the part of your application that has key and
    important business value – and may require high thought and attention
    to design.
  • Sometimes DDD may not be the right fit for you – if you have a lot of
    experienced developers who are very comfortable with domain modeling,
    you may be better off trusting their opinion.

DDD is not heavy.

  • It fits into any Agile or XP framework. It leans into TDD; e.g.,
    you use TDD to develop a new domain model that describes how it
    interacts with other existing models, going through the
    red-green-refactor cycle.
  • DDD promotes lightweight development. As domain experts read the code, they
    are able to provide in-flight feedback to the development of the
    system.
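
As a sketch of that red-green-refactor workflow (using Ruby’s Minitest; the Song model here is hypothetical, not from the book), the test describing the domain behavior comes first, then just enough model code to make it pass:

```ruby
require "minitest/autorun"

# Red: the test describes the desired domain behavior before the
# model exists; running it at this point fails.
class SongTest < Minitest::Test
  def test_retitling_a_song_updates_its_title
    song = Song.new("Purple Haze")
    song.retitle("Purple Haze (Live)")
    assert_equal "Purple Haze (Live)", song.title
  end
end

# Green: just enough model code to pass, with the method named in the
# Ubiquitous Language ("retitle", not "set_title"). Refactoring
# follows once the bar is green.
class Song
  attr_reader :title

  def initialize(title)
    @title = title
  end

  def retitle(new_title)
    @title = new_title
  end
end
```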

Moving to Ember App Kit

I’ve noticed a bit of the buzz around Ember App Kit
recently and decided to move Hendrix, my music management app, over from
a Yeoman-generated Ember app to EAK with all its
bells and whistles.

What’s the difference?

Well, on the surface, the two setups aren’t very different. The
standard Yeoman build tool sets you up with Grunt and Bower, which is
what EAK provides you out of the box. The cool stuff happens when you
dive under the hood: ES6 module transpilation and an AMD-compatible
Ember Resolver, built-in Karma integration and a built-in API stub
framework for development and test environments.

The joys of modules

What I didn’t realize was that compiling to ES6 modules required that my
files be named exactly how the modules were going to be looked up,
with the extra caveat that resource actions needed to live in their own
directories. Recall that in the old way of doing things with globals and
namespaces, you could get away with throwing a route file like this in
your app directory:

routes/
  songs_index_route.js

And inside:

MyApp.SongsIndexRoute = Ember.Route.extend({
  //...
});

In EAK’s world, you need to nest the file under the songs/ directory,
and strip the type from the filename, like so:

routes/
  songs/
    index.js

Inside the file, you assign the route class to a variable and export it
as the module’s default export.

var SongsIndexRoute = Ember.Route.extend({
  //...
});
export default SongsIndexRoute;

File name matters

The new Ember resolver
loads modules in a smart way – according to how the framework
structures resources, controllers and their corresponding actions. So
visiting #/songs from my app caused the app to look up and load
appkit/routes/songs/index. What I didn’t realize was that this module
must live at a very specific place in the directory structure. The
first time around, I had left the module type in the file name, like
this:

routes/
  songs/
    index_route.js

There are no types in the module names – or in the filenames, for that
matter. I hadn’t realized this (I’m also an AMD newbie), so I had
left my file named index_route.js, which meant that the module loader
had stored the SongsIndexRoute module under
appkit/routes/songs/index_route, while the Resolver was doing a route
lookup for appkit/routes/songs/index. Renaming the file to:

routes/
  songs/
    index.js

did the trick.

Ember Data, Rails, CORS, and you!

I’m starting up a new personal project involving Ember-Data and Rails
(more to come). The gist of it is that it’s a pure frontend app engine
built in Yeoman and Grunt, and designed to talk to a remote API service
built on Rails.

So since it’s a remote API, I’ve got to enable CORS, right?

Install CORS via rack-cors

# Gemfile
gem "rack-cors", :require => "rack/cors"
# config/application.rb
config.middleware.use Rack::Cors do
  allow do
    origins "*"

    resource "*",
      :headers => :any,
      :methods => [:get, :post, :put, :delete, :options, :patch]
  end

  allow do
    origins "*"
    resource "/public/*",
      :headers => :any,
      :methods => :get
  end
end

A very naive implementation with zero security whatsoever. Anyways.
Onward!

Get Ember-Data DS.RESTAdapter talkin’ CORS

I saw conflicting documentation on Ember-Data and CORS – it seemed like
it should support CORS out of the box. Apparently this is not so.

In my Ember app’s store.js (or anywhere that loads before the
application adapter is defined), do this:

// store.js
$.ajaxSetup({
  crossDomain: true,
  xhrFields: {
    withCredentials: true
  }
});

Hendrix.Store = DS.Store.extend();
Hendrix.ApplicationAdapter = DS.RESTAdapter.extend({
  host: "http://localhost:3000",
});

$.ajaxSetup, though its use is generally discouraged, sets global
options on jQuery’s ajax object; the jQuery documentation describes
the options you can modify.

Why doesn’t Ember support this out of the box? I think it’s because they
cannot support IE, where one must use an XDomainRequest (XDR) object
to support CORS.

I’ve posted an Ember follow-up question in the
forums
for discussion.

Get Rails talking JSON out of its mimetype confusion.

Did you know that if you rely on the HTTP Accept: header,
Rails does not obey its ordering? I was trying to figure out why my
Rails controllers were trying to render HTML instead of JSON when the
headers were:

'Accept: application/json, text/javascript, */*; q=0.01'

A very long-winded
discussion
on the Rails
project reveals that, well, nobody has it figured out yet. Most modern
browsers do honor Accept: specificity, but for the sake of older
browser compatibility, the prevailing practice is still to return
HTML when */* is specified.
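
To see why that header is ambiguous, here’s a small Ruby sketch (an illustration only, not Rails’ actual negotiation code) that splits an Accept header into media types and q-values:

```ruby
# Parse an Accept header into [type, q] pairs, sorted by q descending.
# Simplified for illustration; real content negotiation is hairier.
def parse_accept(header)
  header.split(",").map do |entry|
    type, *params = entry.strip.split(";").map(&:strip)
    q = 1.0 # per HTTP, quality defaults to 1.0 when unspecified
    params.each do |param|
      key, value = param.split("=")
      q = value.to_f if key == "q"
    end
    [type, q]
  end.sort_by { |_, quality| -quality }
end

pairs = parse_accept("application/json, text/javascript, */*; q=0.01")
# application/json and text/javascript carry the default q=1.0, and
# */* carries only q=0.01, so by the header's own weighting JSON
# should win; Rails still favors HTML whenever */* appears at all.
```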

What does this mean for Rails developers who want to use Accept:
mimetype lists? Well, we either wait for the Rails project to support
mimetype specificity (and for older browsers to die out), or we
include the format explicitly in the URI.

I chose to have Ember append the .json suffix to the URL, thanks to
this SO
post:

// store.js
Hendrix.ApplicationAdapter = DS.RESTAdapter.extend({
  host: "http://localhost:3000",
  // Force ember-data to append the `json` suffix
  buildURL: function(record, suffix) {
    return this._super(record, suffix) + ".json";
  }
});

More to come on how this app works.

Decomposing Fat Models

Heard an awesome Ruby Rogues podcast recently: “Decomposing Fat Models”.

Essentially, they’re talking through Bryan Helmkamp’s Code Climate blog entry “7 ways to decompose fat ActiveRecord models”, which sums up a few strategies that mainly involve extracting objects from your existing code: value objects, service objects, policy objects, decorators, and the like. Give the entry a read-through; it opened my eyes to rethinking the architecture of my Rails models.

A few interesting thoughts that came up in the podcast:

  • The “Skinny Controller, Fat Model” mantra has hurt the Rails community because we end up with bloated AR classes. “‘Fat-’ anything is bad,” one of the hosts notes. The smaller your models, the more manageable, readable, and testable they become.

  • Rubyists don’t like the term “Factory,” even though, in Helmkamp’s opinion, Ruby classes are factories. “We call them ‘builders,’” one of the hosts jokes.

  • The Open/Closed Principle as applied to Ruby: using delegators, decorators.
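
As a rough sketch of one of those strategies (the class names here are hypothetical, not from the article), extracting a service object pulls a multi-step operation out of the model into a small plain-Ruby class with one public method:

```ruby
# A plain-Ruby stand-in for what would be an ActiveRecord model; the
# point is that the "greet a new user" operation moves out of the
# model into its own object instead of bloating User.
class User
  attr_reader :email
  attr_accessor :greeted

  def initialize(email)
    @email = email
    @greeted = false
  end
end

# Service object named after the use case, per the "extract service
# objects" strategy.
class UserGreeter
  def initialize(user)
    @user = user
  end

  def call
    return if @user.greeted # idempotent: greet each user only once
    deliver_welcome_email
    @user.greeted = true
  end

  private

  # In a real app this would enqueue a mailer; stubbed for the sketch.
  def deliver_welcome_email
    puts "Welcome, #{@user.email}!"
  end
end

user = User.new("jimi@example.com")
UserGreeter.new(user).call
```

The model stays a thin data holder, and the workflow gets a home you can test in isolation.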

Deploying Janky on Ubuntu

Janky is a Github-developed Hubot + Jenkins control interface. It’s designed to be deployed on Heroku. However, what if you need it to live on an internal VM? Here’s how I got it running on an Ubuntu (12.04 Precise) VM.

Make sure you have the correct MySQL libs installed:

sudo apt-get install mysql-server libmysqlclient-dev

Clone janky from the Github repository

git clone https://github.com/github/janky.git
cd janky

Bootstrap your environment

The following steps are taken nearly verbatim from the “Hacking” section on the Janky README:

script/bootstrap

mysqladmin -uroot create janky_development
mysqladmin -uroot create janky_test

RACK_ENV=development bin/rake db:migrate
RACK_ENV=test bin/rake db:migrate

RACK_ENV=development bundle exec rake db:seed

Configure Thin and Foreman

Open Gemfile in your text editor and add:

gem "foreman"

Then install it:

bundle install

Then create a Procfile:

touch Procfile

Open the Procfile in your text editor and add the following line:

web: bundle exec thin start -p $PORT

Add the JANKY_* variables to your environment according to the janky README. I use zsh, so I added these as export statements in my ~/.zshenv.

Start your server

bundle exec foreman start

Note that the server starts on port 5000 by default, and you can override it like so:

PORT=8080 bundle exec foreman start

That’s it!

Let me know how that works for you!