The Sweet Spot
On software, engineering leadership, and anything shiny.

TensorFlow For Tears: Part 1

An introduction to every parent’s trials and travails

When our son was born early last year, I admit I wasn’t ready for it (and who can ever really be ready for parenthood, anyway?).

It so turns out that the vast majority of the first year of parenting is simply enduring the gut-wrenching cries of your little one. And cry they do - crying when they are too tired, screaming when they are too energetic, crying when they are gassy, screaming when they are bored, and crying when they just pooped.

The trials that Annie and I went through with our little guy were particularly difficult on us (you can ask me in person if we ever get to chat). The little guy was a prolific screamer and absolutely. hated. sleep.

What’s a geeky dad to do? Quantify household suffering by leveraging machine learning, of course.

I set out to build a system that would in the end determine how well our little guy slept through (or didn’t sleep through) the night. I started by building a system that naively parsed audio samples from his nursery, and then trained a TensorFlow model to do more accurate detection of his cries. Here’s how it worked:

Act 1: The Misery Meter

In version 1 of the system, I bought a cheap USB microphone and hooked it up to a Raspberry Pi 3.

On it, I loaded a script that records a 30-second audio sample using the arecord UNIX command-line tool:

#!/usr/bin/env bash

set -euxo pipefail

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
RECORDING_FILE="${DIR}/recordings/sample.wav"
DATE=$(date "+%Y-%m-%d %H:%M:%S")
# Shell-escaped filename used later when uploading the sample (upload step not shown here)
UPLOAD_RECORDING_FILENAME=$(printf %q "${DATE}.wav")
# Record 30 seconds of 16-bit, 48kHz mono audio from the USB mic (card 1, device 0)
arecord --device=hw:1,0 --format S16_LE --rate 48000 -c1 -d 30 --quiet "${RECORDING_FILE}"

…then I ran the sox command-line tool to pull some simple loudness statistics from it:

sox -V3 "${RECORDING_FILE}" -n stats 2>&1 | grep dB

In case you’re curious, here’s the full output from sox -V3 FILE -n stats. Pretty nifty:

➜ sox -V3 sample.wav -n stats
# ... Truncated for brevity ...

sox INFO sox: effects chain: input        48000Hz  1 channels
sox INFO sox: effects chain: stats        48000Hz  1 channels
sox INFO sox: effects chain: output       48000Hz  1 channels
DC offset   0.000017
Min level  -0.141083
Max level   0.135651
Pk lev dB     -17.01
RMS lev dB    -29.32
RMS Pk dB     -27.37
RMS Tr dB     -30.83
Crest factor    4.12
Flat factor     0.00
Pk count           2
Bit-depth      14/16
Num samples     480k
Length s      10.000
Scale max   1.000000
Window s       0.050

Here, we’re really only interested in the three RMS dB readings, which capture the relative loudness levels within the sample: RMS lev is the average, RMS Pk is the peak, and RMS Tr is the trough (the floor). I chose to push these three stats up to a web service that I use to aggregate and graph these metrics.

I’m not showing the entirety of the script, but the last thing it does is parse the results of this audio analysis and push them up to a web-based metrics aggregation service. In case you were wondering, I have an API service sitting between the Raspberry Pi and a time-series API supplied by Keen.io. But the main point is that now I can load up a cute JS widget that graphs these data points!
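The parse-and-push step isn’t shown above, but the idea is simple. Here’s a rough sketch of it as a standalone script (written in Ruby rather than the original shell, and pointed at a made-up metrics endpoint rather than my actual API service):

#!/usr/bin/env ruby
# Rough sketch: extract the RMS readings from sox's stats output and push them
# to a metrics endpoint. The endpoint URL below is a placeholder.
require "net/http"
require "json"
require "time"
require "uri"

stats = `sox -V3 recordings/sample.wav -n stats 2>&1`

metrics = {}
stats.each_line do |line|
  case line
  when /^RMS lev dB\s+(-?[\d.]+)/ then metrics[:rms_avg]    = Regexp.last_match(1).to_f
  when /^RMS Pk dB\s+(-?[\d.]+)/  then metrics[:rms_peak]   = Regexp.last_match(1).to_f
  when /^RMS Tr dB\s+(-?[\d.]+)/  then metrics[:rms_trough] = Regexp.last_match(1).to_f
  end
end

uri = URI("https://metrics.example.com/api/nursery_audio")
Net::HTTP.post(uri, metrics.merge(recorded_at: Time.now.utc.iso8601).to_json,
               "Content-Type" => "application/json")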

Audio crying graph

Now, how does one read this graph?

  • We can follow the peaks of the audio signal and assume that any noise over a certain dB threshold is the little dude’s screaming.
  • We can follow the troughs of the graph and assume that if the trough jumps, then man there is some serious crying going on, since the audio floor of the soundscape has been bumped up!
  • Or we can follow the average RMS reading and assume some combination of the two?

The truth of the matter is that none of the readings from the Misery Meter (as I called it) were particularly reliable indicators of “the little buddy is crying his little head off”. Sometimes his crying was at the same volume as other ambient noises in the house (say, when he’s playing in the other room and someone shuts a door). So it turns out that volume alone is an insufficient proxy for crying.

Act 2: Enter TensorFlow… next!

In my next post, I’ll discuss how I modified this system to use TensorFlow to train a model that more accurately detects the little dude’s cries. Stick around, it’ll be fun!

Elixir and Elm things I've written about elsewhere

While it’s been pretty quiet around these parts, I’ve kept the technical blogging up over on my employer’s blog. I’ve got quite a few posts around Elixir:

Lightweight dependency injection in Elixir (without the tears): What are some ways we can use the language features of Elixir to apply small-scale dependency injection patterns, without a ton of ceremony?

Functional Mocks with Mox in Elixir: This post discusses a mocking library in Elixir called mox that adheres to the Elixir Way(tm) of mocking (that is, the usage of fakes or doubles).

Comparing Dynamic Supervision Strategies in Elixir 1.5 and 1.6: In this blog post, I discuss the benefits of the new DynamicSupervisor module in Elixir 1.6 and how it makes the supervision of a dynamically-scaled supervision tree easier to set up.

Finally, I wrote “Taking Elm for a Test Drive” about playing around with Elm, giving my take on the trajectory of the language.

Pitfalls to avoid when moving to async systems

I recently published a post on the Carbon Five blog titled “Evented Rails: Decoupling complex domains in Rails with Domain Events” that collects some of my thoughts on moving a Rails app to Domain Events, leveraging the power of Sidekiq (or your job runner of choice) to send async messages between different domains of your app.

This approach always seems nice from the outset, but can hide some painful complexities if you go too far down the rabbit hole. Here is a repost of the latter half of that article, which is worth repeating:

Big win[s of the async model]: speed & scalability

By splitting out domain logic into cohesive units, we’ve designed our systems to farm out their workloads to a scalable pool of workers. Imagine a web request that takes 500ms to return, 150ms of which is spent on a round trip to a different service. By decoupling that work from the main request thread and moving it to a background job, we’ve sped up the responsiveness of our system for the end user, and we know that studies have shown that page speed performance equals money.

Additionally, making our application calls asynchronous allows us to scale the amount of processing power we allocate to our system. We can now horizontally scale workers according to the type of job, or the domain they work on. This may yield cost and efficiency savings, since we match processing power to the workload at hand.
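To make that concrete, here’s a minimal sketch (all class names are hypothetical) of moving a slow external call off the request thread and into a Sidekiq job:

# Instead of calling the external service inline during the request,
# enqueue a background job and return immediately.
class OrdersController < ApplicationController
  def create
    order = Order.create!(order_params)

    # Publish the "order placed" domain event asynchronously; the round trip
    # to the external service no longer happens on the request thread.
    PublishOrderPlacedJob.perform_async(order.id)

    render json: order, status: :created
  end
end

class PublishOrderPlacedJob
  include Sidekiq::Worker

  def perform(order_id)
    order = Order.find(order_id)
    # AnalyticsClient is a stand-in for whatever downstream service you call.
    AnalyticsClient.track("order.placed", order_id: order.id, total: order.total_cents)
  end
end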

Big challenge: dealing with asynchronous data flows

Once things go async, we have a fundamentally different data design. For example, say you had an HTTP API endpoint that performed some action in the system synchronously, and you’ve now farmed out the effects of that action to background processes through domain events. While this is great for response times, you no longer have a guarantee to the caller that the desired side effect has completed by the time the server responds.

Asynchronous polling

One option is to implement the Polling pattern. The API returns a request identifier to the caller on the first call, which the caller can then use to query the API for the result status. If the result is not ready, the API service returns a Nack (negative acknowledgement) message, indicating that the result data has not arrived yet. As soon as the result is ready, the API returns it.
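A minimal sketch of what that could look like in a Rails controller (the model, job, and route names are hypothetical):

class ReportsController < ApplicationController
  # POST /reports - kick off the work and hand back an identifier immediately.
  def create
    report = Report.create!(status: "pending")
    GenerateReportJob.perform_later(report.id)
    render json: { request_id: report.id }, status: :accepted
  end

  # GET /reports/:id - callers poll here until the result is ready.
  def show
    report = Report.find(params[:id])
    if report.status == "done"
      render json: { status: "done", result: report.result }
    else
      # The "Nack": the result has not arrived yet.
      render json: { status: "pending" }, status: :accepted
    end
  end
end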

Pub/Sub all the way down

Another option is to embrace the asynchronous nature of the system wholly and transition the APIs to event-driven, message-based systems instead. In this paradigm, we would introduce an external message broker such as RabbitMQ to carry messages within our systems. Instead of providing an HTTP endpoint to perform an action, the API service could subscribe to a domain event from the calling system, perform its side effect, then fire off its own domain event, to which the calling system would subscribe. The advantage of this approach is that it makes more efficient use of the network (reducing chattiness), but we trade off the advantages of HTTP (the ubiquity of the protocol, and performance enhancements like layered caching).

Browser-based clients can also get in on the asynchronous fun by using WebSockets to subscribe to server events. Instead of calling an HTTP API, the browser could simply fire a WebSocket event, which the service would process asynchronously (potentially proxying the message downstream to other APIs), responding with a WebSocket message when the data is done processing.
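For the broker-based variant, here’s a minimal sketch using the Bunny gem against RabbitMQ (the exchange, queue, and event names are hypothetical):

require "bunny"
require "json"

connection = Bunny.new(ENV["RABBITMQ_URL"])
connection.start

channel  = connection.create_channel
exchange = channel.topic("domain_events", durable: true)

# The calling system publishes an event instead of making an HTTP call...
exchange.publish({ user_id: 42, action: "order.placed" }.to_json,
                 routing_key: "orders.placed")

# ...and the downstream service subscribes, performs its side effect, and
# could publish its own event in response.
queue = channel.queue("billing.order_placed", durable: true)
queue.bind(exchange, routing_key: "orders.placed")
queue.subscribe(block: false) do |_delivery_info, _properties, body|
  event = JSON.parse(body)
  # ... perform the side effect here ...
end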

Big challenge: data consistency

When we choose an asynchronous, evented approach, we now have to consider how to model asynchronous transactions. Imagine that one domain process charges a user’s credit card with a third-party payment processor and another domain process is responsible for recording that charge in your database. There are now two processes updating two data stores. A central tenet of distributed systems design is to anticipate and expect failure. Let’s imagine any of the following scenarios happens:

  1. An Amazon AWS partial outage takes down one of your services but not the other.
  2. One of your services becomes backed up due to high traffic and can no longer service new requests in a timely manner.
  3. A new deployment has introduced a data bug in a downstream API that your teams are rushing to fix, but it will require manually reconciling its data with the upstream system.

How will you build your domain and data models to account for failures in each processing step? What would happen if you have one operation occur in one domain that depends on data that has not yet appeared in another part of the system? Can you design your data models (and database schema) to support independent updates without any dependencies? How will you handle the case when one domain action fails and the other completes?

First approach: avoid it by decoupling noncritical paths first

If you are implementing an asynchronous, evented paradigm for the first time, I suggest you carefully begin decoupling boundaries with domain events only for events that lie outside the critical business domain path. Begin with some noncritical aspect of the system — for example, you may have a third party analytics tracking service that you must publish certain business events to. That would be a nice candidate to decouple from the main request process and move to an async path.

Second approach: enforce transactional consistency within the same process/domain boundary

Although we won’t discuss specifics in this article, if you must enforce transactional consistency in some part of your system (say, the charging of a credit card with the crediting of money to a user’s account) then I suggest that you perform those operations within the same bounded context and same process, leaning on transactional consistency guarantees provided by your database layer.
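A minimal sketch of that idea with Active Record (model and column names are hypothetical; the call out to the payment processor itself still needs its own failure handling, since it cannot be rolled back by the database):

# Record the processor's charge and credit the user's account in one database
# transaction, inside a single process: either both writes commit or neither does.
ActiveRecord::Base.transaction do
  charge = Charge.create!(user_id: user.id, amount_cents: 10_00, state: "captured")
  user.account.increment!(:balance_cents, charge.amount_cents)
end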

Third approach: embrace it with eventual consistency

Alternatively, you may be able to lean on “eventual consistency” semantics for your data. Maybe it’s less important that your data squares with itself immediately, and more important that it eventually lines up at some guaranteed point in time. That may be fine for some of your data (e.g. notifications in a news feed) and inappropriate for other data (e.g. a bank account balance).

You may need to fortify your system to ensure that data eventually becomes consistent. This may involve building out the following pieces of infrastructure.

  1. Messages need to be durable — make sure your job enqueuing system does not drop messages, or at least has a failure mode to re-process them when (not if!) your system fails.
  2. Your jobs should be designed to be idempotent, so they can be retried multiple times and still produce the correct outcome (see the sketch after this list).
  3. You should be able to recover from bad data scenarios easily. Should a service go down, it should be able to replay messages or logs, or the consumer should have a queue of retryable messages it can resend.
  4. Eventual consistency means that you may need an external process to verify consistency. You may do this sort of verification in a data warehouse, or in a different software system that has a full view of all the data in your distributed system. Be sure that this verification can reveal holes in the data and provide actionable insights so you can fix them.
  5. You will need to add monitoring and logging to measure the failure modes of the system. When errors spike, or messages fail to send (events fail to fire), you need to be alerted. Once alerted, your logging must be good enough to trace the source of each request and the data it carried.
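As noted in item 2, here’s a minimal sketch of an idempotent job (class, association, and column names are hypothetical): running it twice with the same arguments leaves the system in the same state.

class ApplyCreditJob
  include Sidekiq::Worker

  def perform(credit_id)
    credit = Credit.find(credit_id)
    return if credit.applied? # already processed on a previous attempt; do nothing

    Credit.transaction do
      credit.user.account.increment!(:balance_cents, credit.amount_cents)
      credit.update!(applied: true)
    end
  end
end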

This subject is large and under active research in computer science. A good book that discusses this topic is Service-Oriented Design with Ruby on Rails. The popular Enterprise Integration Patterns book also covers consistency in depth (and is accompanied by a very helpful online guide as well).

JHipster & Spring Boot for Rails developers

The first question you may be asking is - why would I want to go from Rails to Java?

Maybe you don’t have a choice. Maybe you started a new job. Maybe you heard Java was the new, old hotness. In any case, you’re a Ruby on Rails developer and you’re staring Java in the face.

The languages

First off, Ruby and Java share some philosophical similarities.

You may argue that Java is the One True Language for old-fashioned Object-Oriented Programming. Its strong types lead to powerful expressions of OOP concepts like inheritance, polymorphism, and the like.

Chief among those similarities: Everything Is An Object. In Ruby, even the nil value is an object, an instance of NilClass. In Java, objects abound everywhere.
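For example, in an irb session:

nil.class               # => NilClass
nil.is_a?(Object)       # => true
nil.respond_to?(:to_s)  # => true
"hello".class           # => String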

The frameworks

I can’t speak from firsthand experience, but Java developers will tell you that developing Java apps in the early 2000s was configuration soup: everything was explicit and configured in XML.

Spring Boot arguably moves the state of the art in Java frameworks toward Rails’ philosophy that convention trumps configuration. It accomplishes this through annotations (more on this later).

Rails was so groundbreaking and exciting in 2005 precisely because it was everything Java was not: terse, expressive, and unapologetically convention-driven. Where in Java everything was explicitly traceable through method calls, Rails used dynamically defined methods, monkey-patching, and big ol’ global God Objects to accomplish its magic.

Liquibase vs Active Record

In Active Record, database changes are called migrations.

Migrations are only run from migration files, and may optionally be generated from the CLI.
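For example, a migration generated from the CLI might look like this (the table and column names are hypothetical, and the versioned Migration[5.1] superclass assumes Rails 5+):

# Generated with: rails generate migration AddEmailToUsers email:string
# db/migrate/20180101120000_add_email_to_users.rb
class AddEmailToUsers < ActiveRecord::Migration[5.1]
  def change
    add_column :users, :email, :string
  end
end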

In Liquibase, these are called changesets, and the files are called changelogs.

These changesets are either generated from the Liquibase CLI, or produced by a nifty tool that reads Hibernate persistence entities, diffs them against a known database, and writes the resulting changeset to a file.

Rails checks in a schema.rb file as the canonical definition of the DB schema. There is no direct equivalent in Liquibase (though I may be wrong).

To be continued…

Rails, meet Phoenix: Migrating to Phoenix with Rails session sharing

You’ve resolved to build your company’s Next Big Thing in Phoenix and Elixir. That’s great! You’re facing a problem though - all user authentication and access concerns are performed on your Rails system, and the work to reimplement this in Phoenix is significant.

Fortunately for you, there is a great Phoenix plug to share session data between Rails and Phoenix. If you pull this off, you’ll be able to build your new API on your Phoenix app, all while letting Rails handle user authentication and session management.

Before we begin

In this scenario, you want to build out a new API in Phoenix that is consumed by your frontend single-page application, whose sessions are hosted on Rails. We’ll call the Rails app rails_app and your new Phoenix app phoenix_app.

Additionally, each app will use a different subdomain. The Rails app will be deployed at the www.myapp.com subdomain. The Phoenix app will be deployed at the api.myapp.com subdomain.

We are going to take Chris Constantin’s excellent PlugRailsCookieSessionStore plug and integrate it into our Phoenix project. Both apps will be configured with identical cookie domains, encryption salts, signing salts, and security tokens.

In the examples that follow, I’ll be using the latest versions of each framework at the time of writing, Rails 4.2 and Phoenix 1.2.

Our session data is stored on the client in a secure, encrypted, validated cookie. We won’t cover the basics of cookies here, but you can read more about them here.

Our approach will only work if your current Rails system utilizes cookie-based sessions. We will not cover the use case with a database-backed session store in SQL, Redis, or Memcache.

Step 1: Configure Rails accordingly

Let’s set up your Rails app to use a JSON cookie storage format:

# config/initializers/session_store.rb

# Use cookie session storage in JSON format. Here, we scope the cookie to the root domain.
Rails.application.config.session_store :cookie_store, key: '_rails_app_session', domain: ".#{ENV['DOMAIN']}"
Rails.application.config.action_dispatch.cookies_serializer = :json

# These salts are optional, but it doesn't hurt to explicitly configure them the same between the two apps.
Rails.application.config.action_dispatch.encrypted_cookie_salt = ENV['SESSION_ENCRYPTED_COOKIE_SALT']
Rails.application.config.action_dispatch.encrypted_signed_cookie_salt = ENV['SESSION_ENCRYPTED_SIGNED_COOKIE_SALT']

Your app may not be configured with a SESSION_ENCRYPTED_COOKIE_SALT and SESSION_ENCRYPTED_SIGNED_COOKIE_SALT. You may generate a pair with any random values.

Some speculate that Rails does not require the two salts by default because SECRET_KEY_BASE is already sufficiently long. In our example, we supply them anyway to be explicit.
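For example, a couple of long random strings generated with Ruby’s standard library will do; export them as the two environment variables above in both apps:

require "securerandom"

# One value for SESSION_ENCRYPTED_COOKIE_SALT, another for SESSION_ENCRYPTED_SIGNED_COOKIE_SALT
puts SecureRandom.hex(64)
puts SecureRandom.hex(64)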

Another important value to note here is that we have chosen a key for our session cookie - _rails_app_session. This value will be the shared cookie key for both apps.

Step 2: Configure the plug for Phoenix

Turning our attention to our Phoenix app, in the mix.exs file, add the library dependency:

# mix.exs
defmodule PhoenixApp.Mixfile do
  use Mix.Project

  defp deps do
    [
      # snip
      {:plug_rails_cookie_session_store, "~> 0.1"},
      # snip
    ]
  end
end

Then run mix deps.get to fetch the new library.

Now, in your lib/phoenix_app/endpoint.ex file, remove the configuration for the existing session store and add the configuration for the Rails session store.

# lib/phoenix_app/endpoint.ex
defmodule PhoenixApp.Endpoint do
  use Phoenix.Endpoint, otp_app: :phoenix_app

  # snip

  plug Plug.Session,
    # Remove the original cookie store that comes with Phoenix, out of the box.
    # store: :cookie,
    # key: "_phoenix_app_key",
    # signing_salt: "M8emDP0h"
    store: PlugRailsCookieSessionStore,
    # Use the shared cookie key decided on earlier. This must mirror your
    # Rails app's session cookie key so the apps can share the session.
    key: "_rails_app_session",
    secure: true,
    encrypt: true,
    # Specifies the matching rules on the hostname that this cookie will be valid for
    domain: ".#{System.get_env("DOMAIN")}",
    signing_salt: System.get_env("SESSION_ENCRYPTED_SIGNED_COOKIE_SALT"),
    encryption_salt: System.get_env("SESSION_ENCRYPTED_COOKIE_SALT"),
    key_iterations: 1000,
    key_length: 64,
    key_digest: :sha,
    # Specify a JSON serializer to use on the session
    serializer: Poison
end

We set a DOMAIN environment variable with the value myapp.com. The goal is for these two apps to be deployable at any subdomain ending in myapp.com and still be able to share the cookie.

The secure flag configures the app to send a secure cookie, which is only served over HTTPS connections. This is highly recommended for your site; if you haven’t upgraded to SSL/TLS, you should do so now!

Our cookies are signed so that we can verify they were generated by our app(s). This comes for free with Rails’ (and Phoenix’s) session libraries. The signature is derived from the secret_key_base and signing_salt.

The encrypt flag encrypts the contents of the cookie’s value with an encryption key derived from secret_key_base and encryption_salt. This should always be set to true.

key_iterations, key_length and key_digest are configurations that dictate how the signing and encryption keys are derived. These are configured to match Rails’ defaults. Unless your Rails app customizes these values, you should leave them be.

Step 3: Configure both apps to read from the new environment variables

Be sure both apps, in development and in production, are configured with identical values for SECRET_KEY_BASE, DOMAIN, SESSION_ENCRYPTED_COOKIE_SALT and SESSION_ENCRYPTED_SIGNED_COOKIE_SALT.

Step 4: Change Phoenix controllers to verify sessions based on session data

Now when the Phoenix app receives incoming requests, it can simply look up user session data in the session cookie to determine whether the user is logged in, and who that user is.

In this example, our Rails app implements user auth with Devise and Warden. We know that Warden stores the user ID and a segment of the password hash in the warden.user.user.key session variable.

Here’s what the raw session data looks like when the PlugRailsCookieSessionStore extracts it from the cookie:

%{"_csrf_token" => "ELeSt4MBUINKi0STEBpslw3UevGZuVLUx5zGVP5NlQU=",
  "session_id" => "17ec9b696fe76ba4a777d625e57f3521",
  "warden.user.user.key" => [[2], "$2a$10$R/3NKl9KQViQxY8eoMCIp."]}

Here’s a first pass at a controller that uses that data to verify the session in a plug:

defmodule PhoenixApp.SomeApiResourceController do
  use PhoenixApp.Web, :controller

  def index(conn, _params) do
    {:ok, user_id} = load_user(conn)

    conn
    |> assign(:user_id, user_id)
    |> render("index.html")
  end

  plug :verify_session

  # If we've found a user, then allow the request to continue.
  # Otherwise, halt the request and return a 401
  defp verify_session(conn, _) do
    case load_user(conn) do
      {:ok, user_id} -> conn
      {:error, _} -> conn |> send_resp(401, "Unauthorized") |> halt
    end
  end

  defp load_user(conn) do
    # => The Warden user storage scheme: [user_id, password_hash_truncated]
    # [[1], "$2a$10$vnx35UTTJQURfqbM6srv3e"]
    warden_key = conn |> get_session("warden.user.user.key")

    case warden_key do
      [[user_id], _] -> {:ok, user_id}
      _ -> {:error, :not_found}
    end
  end
end

A very naive plug implementation simply renders a 401 if the session key is not found in the session, otherwise it allows the request through.

Step 5: Move session concerns into its own module

Let’s move the session-parsing concerns out of the controller and into their own Session module. Additionally, we’ll include two helpers, current_user/1 and logged_in?/1.

# web/models/session.ex
defmodule PhoenixApp.Session do
  use PhoenixApp.Web, :controller
  def current_user(conn) do
    # Our app's concept of a User is merely whatever is stored in the
    # Session key. In the future, we could then use this as the delegation
    # point to fetch more details about the user from a backend store.
    case load_user(conn) do
      {:ok, user_id} -> user_id
      {:error, :not_found} -> nil
    end
  end

  def logged_in?(conn) do
    !!current_user(conn)
  end

  def load_user(conn) do
    # => The Warden user storage scheme: [user_id, password_hash_truncated]
    # [[1], "$2a$10$vnx35UTTJQURfqbM6srv3e"]
    warden_key = conn |> get_session("warden.user.user.key")

    case warden_key do
      [[user_id], _] -> {:ok, user_id}
      _ -> {:error, :not_found}
    end
  end
end

This leaves the controller looking skinnier, implementing only the Plug. Extracted methods are delegated to the new Session module.

defmodule PhoenixApp.SomeApiResourceController do
  use PhoenixApp.Web, :controller
  alias PhoenixApp.Session

  def index(conn, _params) do
    IO.inspect conn.private.plug_session
    user_id = Session.current_user(conn)

    conn
    |> assign(:user_id, user_id)
    |> render("index.html")
  end

  plug :verify_session

  # Future refinements could extract this into its own Plug file.
  defp verify_session(conn, _) do
    case Session.logged_in?(conn) do
      false -> conn |> send_resp(401, "Unauthorized") |> halt
      _ -> conn
    end
  end
end

Finally, we make these helpers available in your views:

# web/web.ex

def view do
  quote do
    # snip
    import PhoenixApp.Session
  end
end

This gives you the ability to call logged_in?(@conn) and current_user(@conn) from within your views, should you desire to.

Step 6: Fetching additional information from the backend

Let’s enhance our Session module with the capability to fetch additional information from another resource.

In this case, we’ll model a call to an external User API that fetches extended data about the User, potentially including some sensitive information (which is why we didn’t want to serialize it into the session).

# web/models/user.ex
defmodule PhoenixApp.User do
  # Gets some user identity information like email, avatar image.
  # For this example, we'll use a random user generator.
  #
  # This example hits an API, but this could just as easily be something that hits
  # the database, or Redis, or some cache.
  def fetch(user_id) do
    %{ body: body } = HTTPotion.get("https://randomuser.me/api?seed=#{user_id}")
    [result | _ ] = body |> Poison.decode! |> Map.get("results")
    result
  end
end

Now our Session can be extended to return the proper User, which may provide more utility to us as we implement our Phoenix feature.

defmodule PhoenixApp.Session do
  use PhoenixApp.Web, :controller
  alias PhoenixApp.User

  def current_user(conn) do
    case load_user(conn) do
      # Changed current_user/1 to now return a User or a nil.
      {:ok, user_id} -> user_id |> User.fetch
      {:error, :not_found} -> nil
    end
  end

  # snip
end

Here are the two apps in action:

Flipping between the two apps, logged in and out.

Heroku deployment gotchas

If you are deploying this to Heroku with the popular Heroku Elixir buildpack, be aware that any environment variables required at build time must also be listed in the elixir_buildpack.config file in your repository:

# elixir_buildpack.config
config_vars_to_export=(SECRET_KEY_BASE SESSION_ENCRYPTED_COOKIE_SALT SESSION_ENCRYPTED_SIGNED_COOKIE_SALT DOMAIN)

Caveats and considerations

CSRF incompatibilities

At the time of this writing, Phoenix and Rails overwrite each others’ session CSRF tokens with incompatible token schemes. This means that you are not able to make remote POST or PUT requests across the apps with CSRF protection turned on. Our current approach will work best with a read-only API, at the moment.

Cookies themselves have their own strengths and drawbacks. Be judicious about the amount of data you store in a session (hint: only the bare minimum, and nothing sensitive).

The OWASP guidelines also provide some general security practices around cookie session storage.

Moving beyond session sharing

Even though this scheme may work in the short run, coupling the apps at this level will eventually cause headaches, since both become tied to intricate session implementation details. If you want to keep scaling out your Phoenix app ecosystem, look into the following authentication patterns, both of which move your system toward a microservices architecture.

1) Develop an API gateway whose purpose is to be the browser’s buffer to your internal service architecture. This one gateway is responsible for identity access and control, decrypting session data and proxying requests to an umbrella of internal services (which may be Rails or Phoenix). Internal services may receive user identities in unencrypted form.

2) Consider adopting JWTs (JSON Web Tokens) across your apps, in which all session and authorization claims are stored in the token itself and cryptographically signed. This scheme may still rely on cookies (you may store the token in a cookie, or pass it around in an HTTP header). The benefit of this scheme is that your app(s) can verify identity and authentication claims on their own without calling out to a third party. The drawback is the difficulty of revoking or expiring sessions.
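To sketch the second option with the ruby-jwt gem (the claim names and secret handling here are purely illustrative):

require "jwt"

secret = ENV.fetch("JWT_SECRET")

# Issue a signed token containing the session/authorization claims...
token = JWT.encode({ user_id: 42, exp: Time.now.to_i + 3600 }, secret, "HS256")

# ...and verify it later; decode raises if the signature or expiry check fails.
payload, _header = JWT.decode(token, secret, true, { algorithm: "HS256" })
payload["user_id"] # => 42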

Each of these approaches is not without overhead and complexity; be sure to do your homework before you proceed.

Conclusion

That’s it! I hope I’ve illustrated a quick and easy way to get a working Phoenix app sharing sessions with Rails app(s), should you decide to prototype one in your existing system. I’ve also pushed up a sample app if you want to cross-reference the code. Good luck!