DocuSign Dev Blog

Brought to you by the development teams at DocuSign

The New DocuSign Experience, all in JavaScript

by Ben Buckman

Screenshot of The New DocuSign Experience

We recently launched The New DocuSign Experience, a reimagining of our primary web application used by 45 million users.

The app is built in Node.js and uses the REST API as its data store. We’re proud to have built the entire app on an open-source stack, and plan to share a lot of our code in the coming months.

We aimed to follow best practices throughout: DRY, modular code; automated deployment, provisioning, and continuous integration; (a goal of) 100% automated front-end testing; two-week sprints and releases, with scrums combining dev+QA+UX+product; utilizing the latest browser capabilities while degrading gracefully for older browsers; and using Git-flow and GitHub-style workflows.

The project began in June 2012 with one developer and a vision of building the next-generation web app as a front-end layer on top of the API. After a period of prototyping and laying the groundwork, the project (codenamed “Martini”) kicked into high gear around a year ago. The team now consists of twelve developers, including three dedicated to QA automation; a team of designers; two product managers; and all the other amazing support (ops, program management, facilities) that keeps the servers humming and the stars aligned.

We launched a public beta in September, and became DocuSign Beta; now we’re official, but we still affectionately call the project and team Martini.

Ubiquitous JavaScript

Martini is built on an all-JavaScript (actually CoffeeScript) stack. On the backend that includes Node.js, Express, Request, and Winston, among many other libraries. Ubiquitous JS means we can share code and patterns between the front- and back-ends: Backbone.js handles models, views, and routing client-side, as well as parallel models server-side. All our coffee/js files use the same module pattern, bundled by Stitch for the browser. Templates on both sides are in Jade. Stylus preprocesses our CSS. We use Redis for sessions and caching. The app uses the DocuSign REST API (a highly scalable, carrier-grade system built on the .NET stack) as its data store, so we don’t need a full database.
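The shared-module idea can be sketched as follows. This is a minimal stand-in, not actual Martini code: "Envelope" and its fields are illustrative, and in the real app this would be a Backbone model. The point is that one CommonJS module is require()'d directly in Node and bundled for the browser by Stitch, so the same business logic runs on both sides of the wire:

```javascript
// models/envelope.js — illustrative shared module (hypothetical names).
function Envelope(attrs) {
  this.attrs = attrs || {};
}

// The same check runs server-side and client-side.
Envelope.prototype.isComplete = function () {
  return this.attrs.status === 'completed';
};

// Node picks this up via require(); Stitch wraps the same file
// in a CommonJS shim for the browser.
module.exports = Envelope;
```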

To support the entire new experience, our API grew from serving a relatively narrow set of third-party use cases to supporting every customer-facing application function. Not all of these API features are public yet, but they will be soon.

Building a Redis Sentinel Client for Node.js

by Ben Buckman

We use Redis for sessions and for a short-lived data cache in our node.js application. Like any component in the system, there’s a potential risk of failure, and graceful failover to a “slave” instance is a way to mitigate the impact. We use Redis Sentinel to help manage this failover process.

As the docs describe,

Redis Sentinel is a distributed system; this means that usually you want to run multiple Sentinel processes across your infrastructure, and these processes will use agreement protocols in order to understand if a master is down and to perform the failover.

Essentially, each node server has its own sentinel corresponding to each redis cluster [master and slave(s)] that it connects to. We have one redis cluster, so for N node servers, there are N sentinels. (This isn’t the only way to do it - there could be only one sentinel, or any other configuration really, but the 1:1 ratio seems to be the simplest.) Each sentinel is connected to the master and slaves to monitor their availability, as well as to the other sentinels. If the master goes down, the sentinels establish a “quorum” and agree on which slave to promote to master. They communicate this through their own pub/sub channels.
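A minimal sentinel configuration for this kind of setup might look like the following. The values are illustrative, and `mymaster` is Sentinel's conventional example name; the quorum of 2 means two sentinels must agree before the master is considered objectively down:

```
# sentinel.conf (illustrative values)
# Monitor the master at 127.0.0.1:6379; require 2 sentinels to agree
# before declaring it objectively down.
sentinel monitor mymaster 127.0.0.1 6379 2
# Consider the master down after 5 seconds without a valid reply.
sentinel down-after-milliseconds mymaster 5000
# Resync only 1 slave at a time against a newly promoted master.
sentinel parallel-syncs mymaster 1
```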

The sentinel is not a proxy - the connection to the sentinel doesn’t replace the connection to the master - it’s a separate instance with the sole purpose of managing master/slave availability. So the app connects to the sentinel in parallel with the master connection, and listens to the chatter on the sentinel channels to know when a failover occurred. It then has to manage the reconnection to the new master on its own.
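That chatter can be sketched in code. On failover, Sentinel publishes a `+switch-master` message whose payload is `<master-name> <old-ip> <old-port> <new-ip> <new-port>`; the parsing helper below is our own illustration, not part of node_redis:

```javascript
// parseSwitchMaster is a hypothetical helper, not a node_redis API.
// It unpacks Sentinel's "+switch-master" payload:
//   "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
function parseSwitchMaster(message) {
  var parts = message.split(' ');
  return {
    masterName: parts[0],
    oldHost: parts[1],
    oldPort: parseInt(parts[2], 10),
    newHost: parts[3],
    newPort: parseInt(parts[4], 10)
  };
}

// Sketch of how the app would use it (assumes a Sentinel on 26379):
// var sentinel = redis.createClient(26379, 'localhost');
// sentinel.on('pmessage', function (pattern, channel, message) {
//   if (channel === '+switch-master') {
//     var target = parseSwitchMaster(message);
//     // ...reconnect the data client to target.newHost:target.newPort
//   }
// });
// sentinel.psubscribe('*');
```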

Redis Sentinel Client flow diagram

We’re using the standard node_redis library, which is robust, easy to use, and works “out of the box” for things like sessions. But a year ago, when Sentinel started to gain adoption, the best approach for adding Sentinel awareness to node_redis clients wasn’t clear, so a thread started on GitHub to figure it out.

One straightforward approach was for the application to hold two connections, one to the sentinel and one to the master, and when the sentinel reported a failover, to reconnect to the new master. But with the way node_redis works, any data in transit during the failover is lost. Also, with this approach the code listening to the Sentinel’s pub/sub chatter lived in the application, and wasn’t as encapsulated as we thought it should be.

So we decided to create a middle tier, a redis sentinel client, that would handle all this automatically.
