The Future of Deployment

Posted on March 18, 2014

Over the past few years we have witnessed an explosion of new technologies and services for building, testing, and deploying Internet applications. The deployment landscape is growing increasingly confusing, changing so rapidly that a hot new project appears almost every week. A sample includes:

  • Infrastructural primitives for compute, storage, and networking such as AWS, OpenStack, and Google Cloud Services,
  • VM automation technologies like Vagrant,
  • Automated machine configuration frameworks such as Puppet, Chef, Ansible, and Salt,
  • Execution and orchestration frameworks like Hadoop, Docker, Mesos, and Flynn,
  • Automated testing and integration systems such as Jenkins and Travis CI,
  • Monitoring and metrics systems like New Relic, Sentry, Boundary, Runscope, Nagios, Graphite, and Sensu.

We’re surrounded by change, and making technological and process decisions is incredibly difficult. This post explores a few trends that may give us some insight into what the future holds.

Common Development Environments

[Figure: Common deployment pipeline]

First, while the development and deployment of Internet applications is a complex process that differs for every organization, there is commonality in the pipeline through which software moves: (1) code is typically written in a Local Development environment, (2) it moves into a Staging environment that facilitates additional testing and integration, and (3) it is finally deployed to a Production environment.
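
The three-stage pipeline above can be sketched in a few lines. This is a minimal illustration, not a real tool; the function name and promotion logic are invented for the example, with environment names taken from the text.

```python
# Environments in promotion order, as described in the text.
PIPELINE = ["local", "staging", "production"]

def next_environment(current):
    """Return the environment code is promoted to next, or None at the end."""
    index = PIPELINE.index(current)
    return PIPELINE[index + 1] if index + 1 < len(PIPELINE) else None

print(next_environment("local"))       # staging
print(next_environment("staging"))     # production
print(next_environment("production"))  # None
```

The point of modeling the pipeline explicitly is that every promotion step becomes a place to hang automation: tests, approvals, and rollbacks.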

Common Requirements

[Figure: Common environmental requirements]

Second, within each environment there are common requirements. A production environment typically provides components to support logging, monitoring, orchestration, configuration, and application execution. These components are ideally mirrored in a staging environment that allows testing before deployment, but they are often not all present within the local environment of an individual developer.

Below is an example of commonly used software for implementing different components of the development, deployment, and hosting stack within the three environments.

[Figure: Example Tools]

Standardization and automation

One takeaway from the example above is the huge number of moving parts. For example, the software used to manage the local development environment differs greatly from that used in the staging and production environments. This complexity, and the vast number of different software pieces that must be integrated together, presents a real challenge. Software companies large and small spend significant resources building and maintaining their application development and management stacks, which inevitably become out of date shortly after they are conceived and often before they are rolled out.

So how can we ease the pain? One of the best ways to solve a hard problem is to simplify. Perhaps we could reduce complexity by removing one or more of the components, like logging, monitoring, or orchestration? While a noble goal, there don’t appear to be many shortcuts. Running large-scale Internet applications is an inherently complex job, and it would be challenging to create a scalable and reliable system without any one of the components described above.

A second option is to hide the complexity behind layers of abstraction that can be decoupled and could potentially be run as third-party services. This is the direction the industry appears to be headed. The problem is that few standards exist, and today each component has its own management nuances.

The future of deployment

Let’s pull back for a minute and imagine a world in which our deployment pipeline is standardized in a way that supports full automation. What is the input to this system, what is the output, and who is the customer?

The biggest challenge in defining such an interface is that there are generally two constituents with very different goals: application developers and infrastructure operators. Generally speaking, application developers seek to deploy and iterate quickly, while operators seek stability and predictability. Deployment management systems of the future must meet the requirements of both. Let’s explore what such a system might look like.

Ideal developer interface

First, let’s think about the deployment system from the perspective of an application developer trying to quickly iterate on customer-facing features.

  • Single interface integrated into source control – First, the primary interface to code is the source control system, and today that is often git. An ideal system would simply be an extension of the existing source control infrastructure. Heroku got this right years ago when they extended git to make deployment as easy as git push app. Today we see other components, such as continuous integration with Travis CI, being integrated into git and GitHub. This trend will likely continue, and hopefully we will see interfaces for logging, monitoring, orchestration, configuration, and the execution engine designed for developers and extended from the source management system.
  • Identical local, staging, and production environments – Second, a huge pain point for developers is the difference between local, staging, and production environments. A single uniform stack that provides the same testing, logging, monitoring, orchestration, configuration, and execution engine on a laptop as on the staging and production servers creates a predictable substrate upon which to deploy. This is done well by many opinionated hosting environments such as Google App Engine, and will hopefully improve more generally with frameworks such as Docker.
  • Single-button deployment – Application developers tasked with deploying new functionality or shoring up existing features can see huge benefits from the ability to independently and autonomously push features to customers in production. With appropriate controls for code review, testing, and security policy, a new deployment system should give developers the ability to control the movement of application code from local development machines to staging and on to production. Etsy’s Deployinator is an example of how this process can work. Integration such as this enables cross-layer automation, allowing high-level application metrics to control roll-backs and other low-level operations.
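
A gated "single-button" promotion could look something like the sketch below. The gate names (code review, tests, security policy) come from the text; everything else, including the record format and function names, is hypothetical and not drawn from any real deployment tool.

```python
# Gates a release must clear before promotion, per the controls named above.
GATES = ["review_approved", "tests_passed", "security_cleared"]

def can_deploy(release):
    """A release may be promoted only when every gate has passed."""
    return all(release.get(gate, False) for gate in GATES)

def deploy(release, target):
    """Promote a release to the target environment, or refuse with a reason."""
    if not can_deploy(release):
        return f"blocked: {release['version']} -> {target}"
    return f"deployed: {release['version']} -> {target}"

release = {"version": "v42", "review_approved": True,
           "tests_passed": True, "security_cleared": True}
print(deploy(release, "production"))  # deployed: v42 -> production
```

The design point is that the developer presses one button; the policy lives in the gates, not in a runbook.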

Ideal operator interface

The perspective of the operator can differ greatly from that of an application developer, focusing instead on achieving long-term stability and predictability.

  • Fine-grained visibility – Application and infrastructure metrics drive a significant part of the operational workflow, from high-level end-to-end metrics such as user request latency down to detailed machine-level metrics such as TCP/UDP performance. Visibility data drives alerting and warning systems that can identify problems and stop issues before they become customer-facing. Visibility data also drives security processes and infrastructure cost auditing and management. A major challenge in collecting and storing this data is that many metrics are needed in real time and must be robust to major infrastructure partitions and failures.
  • Ultimate control – One of the biggest causes of outages is a change to software or infrastructure with unexpected consequences. A classic example is the addition of a new column to a large database table that locks the table and hangs server processes, knocking a site offline (been there, done that). While testing with a representative dataset will detect many problems before they reach production, some will inevitably sneak by. During such events, the ability to quickly and verifiably freeze all changes to the system and roll back to a known good state is an essential part of restoring service.
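
The "roll back to a known good state" control above can be reduced to a simple question: given a deployment history, which version do we restore? A minimal sketch follows; the record format is invented for illustration.

```python
def last_known_good(history):
    """Walk the deployment history newest-first and return the first
    version that was marked healthy, or None if there is none."""
    for record in reversed(history):
        if record["healthy"]:
            return record["version"]
    return None

history = [
    {"version": "v40", "healthy": True},
    {"version": "v41", "healthy": True},
    {"version": "v42", "healthy": False},  # e.g. the bad schema migration
]
print(last_known_good(history))  # v41
```

In a real system the "healthy" flag would be derived from the visibility data described in the previous bullet, which is exactly the cross-layer automation the developer section argued for.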

Unified deployment and tomorrow

[Figure: Unified Deployment]

The next few years will likely be even more confusing and complex. We will see projects like Docker and Mesos mature and gain adoption, and we will witness the birth of many new infrastructure management and automation tools. My hope is that the industry will start thinking more holistically about the entire stack from the perspective of the user rather than the infrastructure.

Amazon AWS is fantastic from an operator’s perspective because of the incredible amount of control it provides; however, logging into AWS as a developer and seeing the literally thousands of knobs and dials can be a daunting experience. In contrast, Google App Engine is incredibly easy to use as a developer but provides almost no visibility or control for operators.

I think that the right way forward is a notion of unified deployment — providing identical interfaces for local, stage, and production environments while meeting the needs of both application developers and infrastructure operators.

My personal belief is that source control systems such as GitHub are the right starting point. Source control is the dominant interface for the developer, and the ideas we discussed can be integrated into or extended from the source control platform. That doesn’t mean cluttering git with a million knobs, but rather starting from the notion that a repository should be a deployable entity that can be pushed to an environment such as local, staging, or production for execution, supported by robust testing and by metrics and monitoring. Source control is the entry point to a deployment pipeline that needs common standards from the broader community. Long live git push.
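
One way to picture "a repository as a deployable entity" is a git remote per environment, so pushing is the entire deployment interface. The sketch below is purely illustrative: the remote URLs, environment names, and helper function are all invented.

```python
# Hypothetical mapping from environment name to a deploy remote.
REMOTES = {
    "staging": "deploy@staging.example.com:app.git",
    "production": "deploy@prod.example.com:app.git",
}

def push_command(environment):
    """Build the git command that deploys the repository to an environment."""
    if environment not in REMOTES:
        raise ValueError(f"unknown environment: {environment}")
    return f"git push {environment} main"

print(push_command("staging"))  # git push staging main
```

Each remote would point at infrastructure that receives the push, runs the tests, and executes the app, which is the Heroku-style workflow the post argues should become a community standard.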

Thank you to the reviewers of this article.