The Sweet Spot
On software, engineering leadership, and anything shiny.

Conway's Law for humans

For those unfamiliar with Conway’s Law, it states:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

Or in layman’s terms, your software systems reflect the structure of the teams that create them.

Think about it – do your teams prefer to do everything themselves? Or do they ask for help from other teams? In general, as a team, we prefer to have as few dependencies as possible. The work an engineer has to do to send an email or wait on another team (say, for an API change or a design change) is usually time-consuming and burdensome. Therefore, it is not (usually) in the team’s best interest to cross silos and ask for help from others. Teams that look like this tend to work in codebases that are generally monolithic and wholly owned by the team.

It’s not wrong or bad; it’s just a sociological observation. This is why companies like Spotify, Netflix, and Amazon have embraced Conway’s Law and changed their organizations to match the microservice-driven architecture they want. Small teams work on small codebases and are empowered to make the changes they need.

A corollary and some observations about your company culture.

Here’s a corollary, which I’ve heard in various shapes and forms. Paraphrased:

An organization’s structure tends to pattern after its leaders’ communication patterns.

I’ve been pushing an effort to bring our company’s frontend styles and UX together into a unified framework, in an attempt to standardize the look and feel of the site.

However, in working with our designers, I realized that they weren’t talking to each other. Designers in one department had opposing design aesthetics to designers in another. This was fragmenting the frontend framework itself, and you could see it in the code: version A of the styleguide went into Department A’s products, while version B went into Department B’s products.

In this case, as we kept rolling out this framework, I realized our organization had no single owner for the design language of the site. Why? I had to wonder if it had to do with some deeper communication issues between the heads of the two departments.

Code can be a good canary for organizational issues, calling out larger human issues at hand. In this case, Conway’s Law helps us root out structural concerns and bring them into the light. Leaders can pay attention to these concerns and check themselves to become more open communicators.

Mocks aren't stubs: mockist & classic testing

With the famed “TDD is dead” debate around the Rails community largely coming to an end, I found myself referencing Martin Fowler’s article, Mocks Aren’t Stubs, a good deal, trying to make sense of it in terms of how I write tests and code.

In this post I’m going to talk about mocking and stubbing and their
roots, paraphrase Fowler in an attempt to explain their differences, and
walk through a couple of code examples. In each case, I’m going to
attempt to build this example out in Ruby and RSpec 3.

Let’s assume this implementation in our code for a BookUpdater object in Ruby. Its job is to call through its collaborating ApiClient, which wraps some aspect of an API that we want to call.

# Update a book's metadata in our systems.
class BookUpdater
  attr_accessor :book, :api_client, :response

  def initialize(book, api_client)
    @book = book
    @api_client = api_client
  end

  # True once we've received a successful response from the API.
  def updated?
    !!(response && response.success?)
  end

  # Call through to the collaborating ApiClient and store the response.
  def update!
    self.response = api_client.call_update_api(book)
  end
end

What they are

Mocks

Mocks are fake objects that verify that they have received messages. Older versions of RSpec provided a mock helper for this; in RSpec 3 we create a double and turn it into a mock by setting a message expectation on it with expect(...).to receive.

api_client = double('api client')
book = Book.new

expect(api_client).to receive(:call_update_api).with(book).and_return(true)

subject = BookUpdater.new(book, api_client)
subject.update!

What’s happening here? RSpec creates a mock api_client object that will verify that, after the test case executes, it has received the :call_update_api message with the correct arguments.

The main point of this style of testing is behavior verification – that is, that your object is behaving correctly in relation with its collaborators.

Double

Now let’s look at a double – also known as a stub. A double is a fake object set up to respond to a certain message with the same pre-canned response every time. Here’s how I would set up a test using doubles:

api_client = double('api client')
response = double('response', :success? => true)
book = Book.new
subject = BookUpdater.new(book, api_client)

allow(api_client).to receive(:call_update_api).with(book).and_return(response)

expect { subject.update! }.to change(subject, :updated?).from(false).to(true)

Okay, so what’s the big deal here? My test case still passes. Note that I had to change the test to focus its expectation on the subject’s state instead of on the api_client.

The focus of using doubles is state verification – that is, so long as everybody around me behaves according to their contracts, the test merely has to verify that the object’s internal state changes correctly.

A third way – real objects

I won’t cover this in much depth, but with sufficiently simple objects you can actually instantiate real objects instead of doubles and test an object against all of its collaborators. In my experience, this is the most common approach in Rails + ActiveRecord applications.
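
For illustration, here’s a minimal sketch of that classic style, assuming our hypothetical ApiClient is cheap to instantiate and safe to exercise in a test:

RSpec.describe BookUpdater do
  it "reports the book as updated after a successful call" do
    book       = Book.new
    api_client = ApiClient.new          # a real collaborator, not a double
    updater    = BookUpdater.new(book, api_client)

    updater.update!

    expect(updater).to be_updated       # relies on the real response's success?
  end
end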

Classic vs Mockist testing: different strokes for different folks

As we saw above, there is a key difference between the mock and the stub (the double). With a mock, the focus of the test is on the messages being sent to the collaborators. With a double, the focus is on the subject under test (SUT).

Mocks and stubs/doubles are tools that we can use under the larger umbrellas of two TDD philosophical styles: classic vs mockist styles.

Classic TDD

  • Classic TDDists like using real objects or doubles to stand in for collaborators.
  • From personal experience, testing classically is oftentimes the path of least resistance. There isn’t the expectation setup and verification that mockist testing requires of you.
  • Classic TDD sometimes results in creating objects that reveal state – note how the BookUpdater needed to expose an updated? method.
  • Setting up the state of the world prior to your test may be complex, requiring you to construct all the objects in your universe. This can be a huge pain (has anybody ever had this problem with overcomplicated Rails models with spidery associations? Raise your hands…). Classicists may argue that the root cause is not paying attention to your model architecture, and that having too many associations is an antipattern. Alternatively, classicists often use test factories (e.g. Rails’ FactoryGirl gem; see the sketch after this list) to manage test setup.
  • Tests tend to be treated more like black boxes, testable in isolation (thanks to verification of object state), and are more resilient to refactoring.
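
As a rough illustration of that last point, here’s a minimal FactoryGirl sketch (the Book attributes here are hypothetical) showing how classicists push “state of the world” setup into factories:

# spec/factories/books.rb -- attributes are hypothetical, for illustration only.
FactoryGirl.define do
  factory :book do
    title "Refactoring"
    isbn  "978-0201485677"
  end
end

# In a spec, setting up the world then becomes a one-liner:
book = FactoryGirl.create(:book)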

Mockist TDD

  • Mockist TDD utilizes mocks to verify behavior between objects and their collaborators.
  • It can be argued that this style produces “purer” objects, mainly concerned with passing messages to one another. Fowler notes that these objects tend to prefer role interfaces.
  • These tests are easier to set up, as they don’t require building up the state of the world prior to test invocation.
  • Tests tend to be more coupled to implementation, and may be harder to refactor due to very specific requirements for message passing between collaborators.
  • Fowler points out that being a mockist nudges your objects toward Tell, Don’t Ask (a small sketch follows this list). A nice side effect of TDA is that you can generally avoid Demeter violations.
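
To make that last point concrete, here’s a tiny Tell, Don’t Ask sketch; the methods are hypothetical, riffing on our BookUpdater example:

# Ask-style: pull state out of the collaborator and decide out here.
# Note the Demeter violation lurking in the chained call.
api_client.call_update_api(book) if book.metadata.stale?

# Tell-style: tell the object what we want and let it decide internally.
book.sync_metadata_with(api_client)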

In conclusion

Coming from a classic TDD background, I’ve oftentimes viewed mockist testing with some suspicion, particularly around how much work is involved in setting it up. Over the years I’ve warmed up to mockist testing, but I haven’t been diligent enough about driving pure TDD with mocks. In reviewing Fowler’s comments, I’m intrigued by the possibility of mockist TDD affecting system design, particularly its natural inclination toward role interfaces. I look forward to trying pure mockist TDD on a future project.


Running Mocha tests with ES6/AMD modules

In one of my personal projects (Chordmeister), I’ve been trying to
upgrade the code to be written in ES6 modules and transpile down to AMD modules with Square’s very excellent es6-module-transpiler project.

Since I’ve already updated an Ember app of mine to try ES6, I figured it was high time to do it on another project.

Sorry Coffeescript, but I’m moving on.

First problem: CoffeeScript seems indecisive with respect to ES6 support. In order to use the import or export keywords, I had to wrap the statements in backticks, making the code look like this:

`import ClassifiedLine from "chordmeister/classified_line"`
class Parser
  # Implementation

`export default Parser`

Except this wasn’t being picked up by es6-module-transpiler, since CoffeeScript wraps the entire declaration in a closure, and ES6 import/export statements have to sit at the top level of a module. I kept running into problems in the CoffeeScript -> ES6 JS -> AMD pipeline:

define("chordmeister/parser",
  [],
  function() {
    "use strict";
    (function() {
      // Oops, I wasn't transpiled!
      import ClassifiedLine from 'chordmeister/classified_line';
      var Parser;
      Parser = (function() {
        // Implementation
      }
      )();
      // Oops, I wasn't transpiled!
      export default Parser;
      })()
  });

So the first call: ditch CoffeeScript and write this in pure ES6.

import ClassifiedLine from 'chordmeister/classified_line';
var Parser;

Parser = (function() {
  // implementation
})();

export default Parser;

Which transpiled nicely to:

define("chordmeister/parser",
  ["chordmeister/classified_line","exports"],
  function(__dependency1__, __exports__) {
    "use strict";
    var ClassifiedLine = __dependency1__["default"];
    var Parser;
    Parser = (function() {
      // Implementation
    })();
    __exports__["default"] = Parser;
    });

Next up: adding AMD support in Mocha

Okay, so we need to set up a few things to get Mocha playing well with RequireJS, the AMD loader.

Our plan of attack will be to leverage the generated AMD modules and load our tests up alongside them. We get the benefit of being able to inject specific dependencies into our test suite.

The tricky parts are outlined below.

Set up the Mocha index.html runner

Install mocha, require.js, and chai via bower, then plug them into the harness:

test/index.html
<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    <title>Mocha Spec Runner</title>
    <link rel="stylesheet" href="../bower_components/mocha/mocha.css">
</head>
<body>
    <div id="mocha"></div>
    <script src="../bower_components/mocha/mocha.js"></script>
    <script src="../bower_components/chai/chai.js"></script>
    <script data-main="test_helper" src="../bower_components/requirejs/require.js"></script>

</body>
</html>

Note the references to data-main="test_helper", which is require.js’s way of determining its entry point after it loads.

Set up a test runner.

test/test_helper.js
// Configure and set up the tests.
require.config({
  baseUrl: "../build/"
})

// Load up the files to run against
var specs = [
  'chordmeister/chord_spec.js',
  'chordmeister/song_spec.js',
  'chordmeister/parser_spec.js',
  'chordmeister/classified_line_spec.js',
];

// Start up the specs.
require(specs, function(require) {
  mocha.setup('bdd');
  expect = chai.expect;
  // Why? Some async loading condition? Is there a callback I should be hooking into?
  setTimeout(function() {
    mocha.run();
  }, 100);
});

You’ll notice that I was having synchronization issues between the spec suite loading and mocha.run(). Pushing everything back a hundred milliseconds seemed to fix it.

AMD gotchas

Pay attention to the default export of the module. This is important to remember, since native ES6 will allow you to import it directly with its native syntax:

import Parser from "chordmeister/parser"

But if you’re using RequireJS/AMD modules, you’ll need to explicitly pull the default key off the required module, like so:

require(["chordmeister/parser"], function(parser) {
  Parser = parser.default;
  new Parser() // and do stuff.
});

Let me know if you have any questions!

Implementing DDD: Domains, Subdomains and Bounded Contexts

Chapter 2: Domains, Subdomains, and Bounded Contexts

A Domain is what a business does and the surrounding context of how it does it. It is important to model out its Subdomains – that is, the smaller components of the business that collaborate in its real, day-to-day operations. The book describes these as Core Domains.

Finally, a Bounded Context is the physical manifestation of the solution space as software – models living in a real application. Its key feature in the context of DDD is as a linguistic barrier between different domains.

In an ideal world, each Subdomain maps directly to one Bounded Context. In the real world, this is less common since we tend to build things into monolithic systems. Still – many monolithic applications have several components that could in themselves be bounded contexts.

Side note: In Rails, one could think of Engines as a Bounded Context. But that might be a blog post for another time.

It is important to get these ideas and concepts down correctly, because modeling our systems well depends on a clear understanding of what they do.

Bounded Contexts and terms

It’s not usually realistic to get the entire organization to agree on a universal linguistic definition for every term. Instead, DDD accepts that the same term can carry different meanings in different contexts.

The author then dives into an example of a book, where a book means several different things to different people in different contexts. A book is touched by authors, graphic designers, editors, and marketing folks. In each of these contexts, the features of a book mean different things at different times. It is impossible to develop an all-knowing Book model without disagreement between stakeholders. DDD instead acknowledges these differences and allows stakeholders to use linguistic terms from within their unique Bounded Contexts.
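
As a sketch of what that looks like in code (the namespaces and attributes here are hypothetical), the same term maps to two different models in two Bounded Contexts:

# The word "book" modeled separately per Bounded Context,
# rather than as one all-knowing Book model.
module Authoring
  # To an author, a book is a manuscript moving through drafts.
  Book = Struct.new(:working_title, :draft_number, :manuscript)
end

module Marketing
  # To marketing, a book is a sellable product with pricing and copy.
  Book = Struct.new(:title, :isbn, :list_price, :blurb)
end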

Bounded Contexts may include things like:

  • Object models
  • A database schema / persistence layer
  • A SOAP or REST API
  • A user interface

Bounded context antipatterns

You may be tempted to divide up a bounded context by architectural concerns, or because you want to assign smaller tasks to developers (resource allocation). Beware that this kind of work tends to fragment the language.

DDD operates on linguistic drivers. The unity, cohesion, and domain-adherence of the bounded context should be the first priority in the design.

Assigning two teams to the same bounded context results in fragmentation of the bounded context.

Ideally: we strive to assign one team to work on one Bounded Context with one Ubiquitous Language at a time.

In Rails, what are the bounded contexts? It could be the top-level Rails application, an engine, or a gem that defines the context boundaries.
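
As a rough sketch (the Billing engine here is hypothetical), an isolated Rails engine gives a Bounded Context its own namespace, routes, and models:

# engines/billing/lib/billing/engine.rb
module Billing
  class Engine < ::Rails::Engine
    # Everything inside this engine speaks Billing's Ubiquitous Language,
    # namespaced away from the host application's models.
    isolate_namespace Billing
  end
end

# In the host application's config/routes.rb:
#   mount Billing::Engine => "/billing"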

A story…

The chapter then goes on to describe a fictional team designing through three iterations of their DDD strategy:

A system in which all domains live within the same bounded context. They see the folly of this and refactor with some tactical patterns, like creating services.

This is, however, missing the point. They realized they needed to listen to business and their domain experts to find out exactly where the right places were to segregate the contexts. The team discovers that the business has a desire to go in a new direction which allows them to segregate the domain in such a way that would enable future directions for the business.

How often are we as developers in conversation with our product owners and asking where the business wants to go in the future?

Conversations with the business reveal an intention to develop an add-on product to the core product. This implies the development of two subdomains. However, further investigation reveals that the overlapping domain models (like users, roles, and access management) cannot simply be shared identically between the two systems, since their linguistic meanings differ slightly. Instead, the developers use this linguistic problem to carve out a third bounded context.

The developers separate their app into three bounded contexts built as three software systems.

Six months as a manager

It’s been approximately six months since I entered engineering management. Here are some thoughts reflecting back on that season.

I didn’t like it at first.

Let’s face it: I didn’t like the feeling of being a “manager.” The title comes with too much baggage – pointy-haired bosses and ineffective layers of overhead. Why do we need managers, anyway?

More to the point: I love coding. I love working on rewarding projects. I hated the thought of being tied up in meetings while day by day getting more and more disconnected from code.

So I struggled a lot in those first few months (still do), trying to balance working with product management to communicate a strategic vision, and with other engineering teams to put a tactical vision in place, against staying deeply involved with the codebase and doing code reviews.

You can’t code anymore.

In the end? I had to come to grips with the reality that my team needed me in a different role – a strategic role, helping them see the big picture, sniffing out dependencies and destroying blockers before they got to them.

But I still needed to be spot-on with the code – I still needed to be in the know with new product features as they are being developed. So it’s my priority to be in design discussions, but I can’t be the implementer anymore.

So I’ve started a change in mindset – how can I offload all the random codebase knowledge I’ve acquired over the years to the team? How can I effectively share my expertise so I’m out of a (coding) job? I’m becoming more convinced that the answer is pair programming.

Pairing your way out of a job

Nowadays, if there’s a story or task where I am the only one (or one of few) with domain knowledge about the feature, I’ll ask a team member to pair with me on it. That way we get a really effective teaching tool for the domain model, with the extra plus that if I get pulled into a meeting, development can continue.

But you still have to code.

So I code on the weekends (not everybody can do this!)

We’re a Rails team taking on a new Ember project. I need to get my chops up around Ember so I’ve decided to pull in a side project to hack on the weekends. My side work on Wejoinin lets me do all the Rails things I don’t get to do anymore at work.

And it works for me because I love to code, and to be honest, I don’t ever want to get weak in that area.

To be honest, I’d love more feedback from other engineering managers in that area. How do you keep your chops sharp?

People are your priority.

I’ve read often that empathy is everything when it comes to management, and it still rings true. No matter how this product launch goes, in the end your team will remember how you treated them, how their thoughts and feelings were taken into account.

One thing I’m trying to check in myself is my tendency to jump in on other people’s sentences and cut them off. It sounds silly, but sometimes I realize I like to talk a lot more than listen. As a manager, sometimes you need to dwell on a team member’s feedback for some time before you write it off. There’s usually a core message in there that’s completely valid – a frustration, a desire for change. Empathy means communicating the message: “I heard you, and I respect your thoughts.”

It’s your attitude

And finally, here’s what I think: your attitude toward your company and the team makes ALL the difference.

  • Is your orientation truly to support your team? How will it be clear in the long run that you supported someone in their career?
  • Where are places in your workday where you can connect and sympathize with people – share a joke, listen to someone’s frustration, or just simply go out to lunch?
  • How are you giving people something to work toward: a vision of changing the world? (For Blurb, it’s about transforming the publishing industry.)
  • How are you addressing negativity? Grumbling is important to address head on in its own space, but how are you empowering people to take charge of issues of culture or organizational friction?
  • How are you checking your own negativity? Sometimes we forget that we’re the solution to our own problems, and I’ve often found that the very issues you assume are impossible to change become crackable when you have the right relationships with the right people.

The little things matter.

Growth areas

Lord knows I have a lot to learn here. One area I’m learning to grow in is how to give honest and accurate feedback to team members, without fearing how they’re going to receive it.

Another area? Delegation. I’m learning to delegate the little micro-responsibilities in my job that I just kind of expect that I’m supposed to do. Case in point: every week we accept a translation import from a third-party translation service. I realized that I was burning a lot of time reviewing hundreds of lines of translation keys every week, and the repetition of it was sapping a lot of my energy. I had to learn to ask a team member for help and give them responsibility to own that function of the process.