Clouds Watching Clouds
Posted on July 01, 2014

Cloud services built to audit other cloud services have recently been gaining momentum. Cloudability is a service that plugs into your existing Amazon Web Services (AWS) account and gives you recommendations on how to lower your AWS bills. Other services monitor your AWS footprint, looking for potential security weaknesses and anomalous behavior.

Several years ago we witnessed a first generation of cloud services like RightScale and Cloudkick that augmented other cloud services by providing a better management interface. The need for such tools was reduced when cloud providers such as Amazon released more robust administrative portals.

As cloud services have proliferated in number and complexity, we appear to be seeing a second generation of cloud-on-cloud services. As of this post, there are at least 29 distinct AWS services. Each service has its own metering, management, and security models and APIs. Even services we imagine to be trivially simple, like file storage with Dropbox, become complex when we dig into user access rules and externally shared documents. With this complexity comes the need to ensure compliance with organizational policies such as security, data retention, performance, and cost management.

[Figure: AWS Services]

Cloud services that audit other cloud services can increase trust and enhance predictability for customers. This, in turn, helps businesses move more complex and business-critical workflows to the cloud. Providers that expose APIs enabling 3rd party audits can make a compelling case to customers that their services are more trustworthy than services that don’t provide such visibility.

A key enabler of this audit model is the availability of read-only API credentials that can be delegated to a 3rd party to enumerate configuration and logs without risk of service-impacting side effects. Amazon’s Identity and Access Management (IAM) is a good example of an API that enables such delegated access, and I hope more cloud vendors adopt similar approaches.
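To make this concrete, here is a minimal sketch (rendered as a Python dict) of the kind of read-only policy document a customer might delegate to an auditing service. The action list is an illustrative assumption, not any vendor’s actual required permission set.

    import json

    # A sketch of a delegated, read-only audit policy. The services and
    # wildcard actions below are illustrative; a real integration would
    # publish the exact permissions it needs.
    READ_ONLY_AUDIT_POLICY = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",    # enumerate instances, volumes, security groups
                "s3:List*",         # enumerate buckets and objects
                "iam:Get*",         # inspect users, roles, and policies
                "iam:List*",
                "cloudwatch:Get*",  # pull usage and performance metrics
                "cloudwatch:List*",
            ],
            "Resource": "*",
        }],
    }

    # Print the policy document, ready to attach to a dedicated audit user.
    print(json.dumps(READ_ONLY_AUDIT_POLICY, indent=2))

Because every action is a read (Describe/Get/List), the auditor can enumerate configuration without any possibility of service-impacting side effects.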

I am also hopeful we’ll see more services that audit and provide intelligent recommendations for other services. There is inherent value in having an independent 3rd party attest to the correctness, security and efficiency of another service.

Have at it:
$ whois
Domain is available for purchase

The Future of Deployment
Posted on March 18, 2014

Over the past few years we have witnessed an explosion of new technologies and services for building, testing, and deploying Internet applications. The deployment landscape is growing increasingly confusing and changing so rapidly that there is a hot new project almost every week. A sample includes:

  • Infrastructural primitives for compute, storage, and networking such as AWS, OpenStack, and Google Cloud Services,
  • VM automation technologies like Vagrant,
  • Automated machine configuration frameworks such as Puppet, Chef, Ansible, and Salt,
  • Execution and orchestration frameworks like Hadoop, Docker, Mesos, and Flynn,
  • Automated testing and integration systems such as Jenkins and Travis CI,
  • Monitoring and metrics systems like New Relic, Sentry, Boundary, Runscope, Nagios, Graphite, and Sensu.

We’re surrounded by change, and making technological and process decisions is incredibly difficult. This post explores a few trends that may give us some insight into what the future holds.

[Figure: Common deployment pipeline]

Common Development Environments

First, while the development and deployment of Internet applications is a complex process that differs for every organization, there is commonality in the pipeline through which software moves: (1) code is typically written in a Local Development environment, (2) it moves into a Staging environment that facilitates additional testing and integration, and (3) it is finally deployed to a Production environment.

[Figure: Common environmental requirements]

Common Requirements

Second, within each environment there are common requirements. A production environment typically provides components to support logging, monitoring, orchestration, configuration, and application execution. These components are ideally mirrored in a staging environment that allows testing before deployment, but they are often not all present within the local environment of an individual developer.

Below is an example of commonly used software for implementing different components of the development, deployment and hosting stack within the three environments.

[Figure: Example Tools]

Standardization and automation

One takeaway from the example above is the huge number of moving parts. For example, the software used to manage the local development environment differs greatly from that used in the staging and production environments. This complexity and the vast number of different software pieces that must be integrated together present a real challenge. Software companies large and small spend significant resources building and maintaining their application development and management stacks, which inevitably become out of date shortly after they are conceived and often before they are rolled out.

So how can we ease the pain? One of the best ways to solve a hard problem is to simplify. Perhaps we could reduce complexity by removing one or more of the components, like logging, monitoring, or orchestration? While a noble goal, there don’t appear to be many shortcuts. Running large-scale Internet applications is an inherently complex job, and it would be challenging to create a scalable and reliable system without one of the components described above.

A second option is to try to hide the complexity behind layers of abstraction that can be decoupled and could potentially be run as third-party services. This is the direction the industry appears to be headed. The problem is that few standards exist, and today each component has its own management nuances.

The future of deployment

Let’s pull back for a minute and imagine a world in which our deployment pipeline is standardized in a way that supports full automation. What is the input to this system, what is the output, and who is the customer?

The biggest challenge in defining such an interface is that there are generally two constituents with very different goals: application developers and infrastructure operators. Generally speaking, application developers seek to deploy and iterate quickly while operators seek stability and predictability. Deployment management systems of the future must meet the requirements of both. Let’s explore what such a system might look like.

Ideal developer interface

First, let’s think about the deployment system from the perspective of an application developer trying to quickly iterate on customer-facing features.

  • Single interface integrated into source control – First, the primary interface to code is the source control system, and today that is often git. An ideal system would simply be an extension of the existing source control infrastructure. Heroku got this right years ago when they extended git to make deployment as easy as git push app; a minimal sketch of this hook-based model follows this list. Today we see other components such as continuous integration with Travis CI being integrated into git and GitHub. This trend will likely continue, and hopefully we will see interfaces for logging, monitoring, orchestration, configuration, and the execution engine designed for developers and extended from the source management system.
  • Identical local, staging and production environments – Second, a huge pain point for developers is the difference between local, staging, and production environments. A single uniform stack that provides the same testing, logging, monitoring, orchestration, configuration, and execution engine on a laptop as on the staging and production servers creates a predictable substrate upon which to deploy. This is done well by many opinionated hosting environments such as Google App Engine and will hopefully improve more generally with frameworks such as Docker.
  • Single button deployment – Application developers tasked with deploying new functionality or shoring up existing features can see huge benefits from the ability to independently and autonomously push features to customers in production. With appropriate controls for code review, testing, and security policy, a new deployment system should provide developers the ability to control the movement of application code from local development machines to staging and on to production. Etsy’s Deployinator is an example of how this process can work. Integration such as this enables cross-layer automation, allowing high-level application metrics to control roll-backs and other low-level operations.
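Here is the hook sketch promised above: a git post-receive hook that turns git push into a deployment trigger. git really does feed each updated ref to this hook on stdin, but the branch-to-environment mapping and the deploy.sh script it shells out to are hypothetical placeholders for whatever tooling actually performs the deploy.

    #!/usr/bin/env python
    # Minimal sketch of a git post-receive hook for push-based deployment.
    # git invokes this hook with one "<old-rev> <new-rev> <ref-name>" line
    # per updated ref on stdin.
    import subprocess
    import sys

    # Hypothetical mapping from pushed branches to target environments.
    DEPLOY_TARGETS = {
        "refs/heads/staging": "staging",
        "refs/heads/production": "production",
    }

    for line in sys.stdin:
        old_rev, new_rev, ref_name = line.split()
        target = DEPLOY_TARGETS.get(ref_name)
        if target:
            print("Deploying %s to %s..." % (new_rev[:8], target))
            # deploy.sh is a stand-in for the real deploy mechanism
            # (a Chef/Puppet run, a Docker image build, an orchestrator call).
            subprocess.check_call(["./deploy.sh", target, new_rev])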

Ideal operator interface

The perspective of the operator can differ greatly from that of an application developer, focusing instead on long-term stability and predictability.

  • Fine-grained visibility – Application and infrastructure metrics drive a significant part of the operational workflow. This ranges from high-level end-to-end metrics such as user request latency down to detailed machine-level metrics such as TCP/UDP performance. Visibility data drives alerting and warning systems that can identify problems and stop them before they become customer-facing issues. Visibility data also drives security processes and infrastructure cost auditing and management. A major challenge in collecting and storing this data is that many metrics are needed in realtime and must be robust to major infrastructure partitions and failures.
  • Ultimate control – One of the biggest causes of outages is changes to software or infrastructure that have unexpected consequences. A classic example is the addition of a new column to a large database table that locks the table and hangs server processes, knocking a site offline (been there, done that). While testing with a representative dataset will detect many problems before they reach production, some will inevitably sneak by. During such events, the ability to quickly and verifiably freeze all changes to the system and roll back to a known good state is an essential part of restoring service; a minimal sketch of such a control follows this list.
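Below is the promised sketch of the two operator controls just described: a global change freeze and a rollback to the last known-good revision. The file paths and the deploy.sh command are assumptions carried over from the developer-side sketch above.

    #!/usr/bin/env python
    # Sketch of operator-side controls: a global deploy freeze plus a
    # rollback to the last revision that passed post-deploy health checks.
    import os
    import subprocess
    import sys

    FREEZE_FLAG = "/etc/deploy/frozen"          # touch this file to freeze all changes
    KNOWN_GOOD = "/etc/deploy/last_known_good"  # written after each healthy deploy

    def deploy(revision):
        """Refuse any change while the system is frozen."""
        if os.path.exists(FREEZE_FLAG):
            sys.exit("Deploys are frozen; remove %s to resume." % FREEZE_FLAG)
        subprocess.check_call(["./deploy.sh", "production", revision])
        # Record the revision only after the deploy (and its health
        # checks) succeed, so rollback always has a safe target.
        with open(KNOWN_GOOD, "w") as f:
            f.write(revision)

    def rollback():
        """Verifiably return production to the last known good state."""
        with open(KNOWN_GOOD) as f:
            subprocess.check_call(["./deploy.sh", "production", f.read().strip()])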

Unified deployment and tomorrow

[Figure: Unified Deployment]

The next few years will likely be even more confusing and complex. We will see projects like Docker and Mesos mature and gain adoption and we will witness the birth of many new infrastructure management and automation tools. My hope is that the industry will start thinking more holistically about the entire stack from the perspective of the user rather than infrastructure.

Amazon AWS is fantastic from an operator’s perspective because of the incredible amount of control it provides; however, logging into AWS as a developer and seeing the literally thousands of knobs and dials can be a daunting experience. In contrast, Google App Engine is incredibly easy to use as a developer but provides almost no visibility or control for operators.

I think that the right way forward is a notion of unified deployment — providing identical interfaces for local, staging, and production environments while meeting the needs of both application developers and infrastructure operators.

My personal belief is that source control systems such as GitHub are the right starting point. Source control is the dominant interface for the developer, and the ideas we discussed can be integrated into or extended from the source control platform. That doesn’t mean cluttering git with a million knobs but rather starting from the notion that a repository should be a deployable entity that can be pushed to an environment such as local, staging, or production for execution, supported by robust testing and by metrics and monitoring. Source control is the entry point to a deployment pipeline that needs common standards from the broader community. Long live git push.

Thank you to the reviewers of this article.

Charging Bankruptcy and the Case for Energy Harvesting
Posted on June 20, 2013

How many phones, tablets, laptops, and eReaders do you plug in before bed? Two, five, ten? Each night, we perform a carefully choreographed routine of plugging in tiny connectors after searching through couches and bags to find phones and devices. Failure to perform this dance has dire consequences. It could mean a day of crouching near outlets to juice up dying devices or, even worse, the dreaded no-battery shutoff.

The problem is even worse for the early adopter crowd whose collection of Fitbits, Pebble watches, Google Glasses, and extra laptops can easily total more than ten devices that need charging at least once per week. We are approaching charging bankruptcy.

Almost every day we hear of a new battery-powered device with fantastic promises, like activity tracking for dogs, but at some point will we simply be unable to handle yet another device in our lives? Even though we may desperately want Google Monocle, the thought of charging yet another device every night may cause us to abandon the purchase. The charging problem has already become an important impediment to the nascent hardware renaissance. What would happen in a world in which there are 20, 50, or even hundreds of connected personal sensors and actuators that need power?[1] We can’t continue to plug in each device, so we must find alternatives.

One piece of technology receiving recent attention is wireless charging, or inductive charging. Inductive wireless chargers promise to cut the final wire and free us from the pain of wired charging. While wireless chargers are indeed a significant improvement, they really only solve the ‘final inch’ problem. That is, users will still need to perform the interruptive and time-consuming tasks of (1) remembering to charge each device and (2) finding the devices and bringing them to the induction charger. With current technology like that from WiTricity or PowerMat, devices need to be brought within several millimeters of the charger. While this might be a step in the right direction, this technology certainly won’t help us move from having three battery-powered devices to 20 or 100 unless we end up building induction coils into every chair, table, and wall in our homes and offices.

To realize a world in which a hundred tiny discrete devices work for us 24 hours a day, we need a new approach. We can’t assume that humans will remember where all their devices are located or remember to periodically charge them. Nor can we assume devices will always be placed within inches of a battery or inductive charger. Devices must become power self-sufficient.

That means that either devices must come from the factory with enough energy to last the lifetime of the device, or they must dynamically harvest enough energy from the surrounding environment. Both approaches are still far from being widely available or utilized. Wider adoption will require new ways of looking at power and energy consumption. For example, we often focus on the power consumption of computation rather than the power consumption of communication. This is crucial considering that the latter can be five orders of magnitude greater than the former. Professor Prabal Dutta, an expert in ultra-low-power systems, wrote that

“For the same chunk of energy a mote [wireless sensor] could perform 100,000 operations on its CPU but only transmit one bit of information to the outside world.”
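To see what that ratio implies, here is a back-of-envelope sketch. The 100,000:1 figure comes from the quote above; the per-operation energy and the coin-cell capacity are rough assumed values for illustration only.

    # Back-of-envelope energy budget for a mote, using the quoted
    # 100,000:1 communication-to-computation ratio. The absolute numbers
    # below are assumed for illustration.
    energy_per_op_j = 1e-9                    # assume ~1 nJ per CPU operation
    energy_per_bit_j = 1e5 * energy_per_op_j  # the quoted ratio: 100,000 ops per bit

    battery_j = 2000.0  # a CR2032 coin cell stores very roughly 2,000 J

    print("CPU operations per coin cell: %.1e" % (battery_j / energy_per_op_j))
    print("Bits transmitted per coin cell: %.1e" % (battery_j / energy_per_bit_j))
    # The asymmetry is the point: compute locally, transmit sparingly.

Under these assumed numbers, a single coin cell funds trillions of operations but only a few megabytes of radio traffic, which is why communication, not computation, dominates the design of self-sufficient devices.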

Developing the technology that supports the construction of self-sustaining systems that do useful work on 100 or 1,000 times less power will require advances in ultra-low-power materials science, ultra-low-power electronic devices, ultra-low-power communications, and updated tooling that makes energy consumption a fine-grained metric that is tracked and optimized at all levels of hardware and software. An example of this evolution is the energy-centric toolchain provided by Energy Micro, which can track per-function realtime energy consumption in a processor.

I am personally fascinated by the concept of completely self-sufficient systems powered by energy harvested from the environment. We already build multi-million dollar satellite platforms that last decades and are powered only by the sun. I believe that simple versions of similar technology, designed to be completely self-sufficient for years and available for a few dollars, could radically transform homes, offices, and the lives of rural communities around the world. This is exciting. We know the problem, we have many of the pieces of a solution, and now we just need to start putting them together.

[1] Not everyone agrees this will be a problem. One could argue that most of the sensors and actuators we’ll ever need can be easily and effectively integrated into our existing phones and tablets. There may never be a need for a world in which we have 20 or more personal devices. For example, why buy a separate Fitbit when we could just reuse the accelerometers in our existing mobile phones?

Life on Air - David Attenborough
Posted on May 14, 2013

The Life on Earth series starring Sir David Attenborough and broadcast on PBS was for many their first real exposure to a comprehensive natural history of Earth and its vast plant and animal life. Attenborough’s infectious curiosity for nature, his clear explanations of geology, botany, and zoology, and his ability to rapidly take you to the most exotic corners of the globe did more to help many of us understand humanity’s place in the universe than any textbook or lecture.

It was with many fond childhood memories that I recently sat down to read the autobiography of this khaki-wearing British scientist, first published by BBC Books in 2002. The story begins with several vignettes demonstrating Attenborough’s early curiosity for the natural world. He relates stories of fossil hunting in rural England with the same indomitable positive outlook and curiosity that characterizes his global adventures. This drive to explore and explain the world was eventually too strong to keep him behind a desk in his early bureaucratic role as controller of BBC2. Attenborough came across as a rare leader possessing not only an incredible dedication to scientific and natural exploration but also the ability to motivate people on a grand scale.

While it appears Attenborough could have continued in a comfortable management role as a television executive, he followed his passion to be outdoors and to write and narrate. It was there he could exercise his greatest gift: the ability to synthesize the essence of animals, plants, and natural phenomena into engaging, accessible narratives. Attenborough is a deeply moral person who cares about the subjects in front of and behind the camera. His presentation is genuine, honest, and intensely positive. Clearly uncomfortable situations that easily could have been described as woeful personal suffering — the imagery of a damp night spent by Attenborough in a South American mud hut where he slept mere inches from a moving wall of cockroaches comes to mind — are instead transformed into positive scenes of scientific endeavor. It is, oh so British.

The book is one of immense positive energy, and I very much enjoyed the light narrative style and anecdotes Attenborough shares from a lifetime of travel. It provides a unique window into the early years of television programming and stories of adventure from before the days of globalization. This is not a tale of overcoming a disadvantaged background; Attenborough grew up relatively privileged and clearly benefited from his education at Cambridge University. However, there is no doubting the impact Attenborough has made on the world through his nature programs and tireless advocacy for environmental efforts such as global warming awareness. This is a story of how one person can change the world with genuine passion, honesty, and ceaseless energy.

Sir Attenborough has been a personal hero of mine since childhood. His approach to public scientific discourse, which manages to both entertain and inform, is unfortunately a rarity these days. The productions from big media and even PBS lack the same inspirational personas and content. Fortunately, social media is surfacing new heroes such as Chris Hadfield, the International Space Station astronaut and scientist whose enthusiastic science experiments (and rendition of the David Bowie song Space Oddity) are now inspiring millions. Here’s to Sir Attenborough and the next generation of inspirational science heroes like Commander Hadfield.

The Next Wave Will Be Physical: A Future You Can Touch
Posted on August 10, 2012

In the past, making complex physical goods for which you didn’t have the recipe could be challenging or impossible. How would you make your own barrel from scratch?

Hundreds of years ago you might seek out a master craftsman who may or may not have told you the secrets of his or her trade. More recently, the best way to answer such a question was to go to your local library and find a book on the subject. However, a reference on the topic might not even exist for more esoteric goods.

Within the last few years, the ability of the Internet to facilitate global communication has supported the rapid growth of maker/doer communities around the world. Want to make a barrel today? No problem, just check out the 37,000 videos on YouTube!

Just as the ability of the Internet to facilitate communication has supported the exponential growth of complex software through open source communities, we appear to be at the start of a similar revolution for the production of physical goods.

Increasingly popular communities of makers have grown up around topics such as DIY electronics. These are commercial efforts focused on education, community, and sharing. For example, SparkFun and Adafruit have huge educational efforts that support their main business of selling electronics gear.

Just as the software industry is moving toward a model where the tools are open but the operation of those tools is the product (see cloud companies like Twitter), I hope there is a similar approach for the world of physical goods. A compelling model has yet to emerge, but the dream of finding the product you want on Thingiverse, sending it to your 3D printer, and having it produced in minutes is certainly exciting.

The challenge is that centralized manufacturing will likely have a fundamental cost and complexity advantage for the foreseeable future — 3D-printing the latest iPhone with a 16-layer PCB and a custom 32nm chip in the comfort of your living room would certainly be cool. Fingers crossed for that desktop atom-by-atom builder! In the near term, manufacturers will have an incentive to keep their recipes hidden since only they can produce the cheapest and most advanced parts.

That said, the rate at which we share information on how to build stuff is getting faster and faster. Making simple products yourself will get simpler and simpler as designs, parts, and manufacturing techniques become more accessible.

Investors looking to capitalize on this trend might look toward the successes of Red Hat and GitHub: bet on organizations that lead the communities supporting individuals and small companies manufacturing physical stuff. The long tail will lead the adoption curve as more and more people and companies become their own builders.

Here’s to a future you can touch. Maybe it’s time to get that CNC machine?

People, Process, Product: Where to Spend Time Growing Engineering
Posted on May 09, 2012

Over the past four years I’ve found that one of the most challenging aspects of growing an engineering team is determining where I should personally spend time. New engineers need to be hired, source control and bug tracking tools need to be improved, features need to be designed and coded ASAP, and so on. It can be easy to focus on short-term features and forget to invest time growing the team and processes.

A simple framework that has been critical in helping me prioritize my time is focused on people, process, and product. It is similar to frameworks in project management, and for lack of a better title I’ll call it The Three P’s of Engineering Management.

  • People – are we attracting and retaining the best people for the organization?
  • Process – are we building an engineering process that provides high quality, velocity and efficiency?
  • Product – are we designing, architecting and building highly scalable, available, and maintainable systems?

Prioritizing between different projects can also be difficult. As a general rule, I try to spend about 50% of my time recruiting and building the team, with the remainder distributed roughly evenly across the other two themes. The amount of hiring can vary quite a bit as a startup goes through growth spurts, but in general spending half my time on hiring has felt like the right investment.

An important outcome of using this framework is that I’ve felt able to continually make progress across a wide range of projects.

Gamification of Cooking
Posted on January 22, 2012

A lot has been said about gamification, the process of using game mechanics to engage customers and audiences. Seth Priebatsch’s TED talk titled “The game layer on top of the world” provides an energetic overview of the idea.

One of the core components of many games, especially RPGs, is the idea of progressive skill enhancement. For example, the popular RPG title Skyrim has a wide variety of skills that can be incrementally improved as you progress through the game. In Skyrim’s Shield skill tree, each node represents an enhancement that can be “purchased” using a finite set of skill points acquired in the game.

The process of learning to cook in the real world seems like an excellent candidate for gamification. Imagine a mobile game that presents a series of “quests” targeted at helping you learn to cook. As you complete cooking tasks, you unlock more complex recipes, and as you advance through quests you could specialize in certain areas such as desserts or vegetarian cooking.
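As a sketch of the mechanic, the snippet below models a tiny cooking skill tree with prerequisites and skill-point costs, in the style of the Skyrim tree described above. The recipes, costs, and prerequisites are all invented for illustration.

    # A minimal sketch of the quest/skill-tree mechanic applied to cooking.
    # Recipe names, point costs, and prerequisites are invented examples.
    RECIPES = {
        "boil an egg":   {"cost": 1, "requires": []},
        "omelette":      {"cost": 2, "requires": ["boil an egg"]},
        "hollandaise":   {"cost": 3, "requires": ["omelette"]},
        "eggs benedict": {"cost": 3, "requires": ["hollandaise"]},
    }

    def available_quests(completed, points):
        """Quests the player can attempt next: affordable, not yet done,
        and with every prerequisite already completed."""
        return [
            name for name, node in RECIPES.items()
            if name not in completed
            and node["cost"] <= points
            and all(req in completed for req in node["requires"])
        ]

    # Example: with the first two quests done and 3 points banked,
    # the next unlockable quest is hollandaise.
    print(available_quests(completed={"boil an egg", "omelette"}, points=3))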

There are a wide range of great mobile cooking apps, but they are mostly search-based. They have a searchable index of recipes with user-contributed comments and ratings but little higher-level organization. I think there is an opportunity to provide a more structured learning process for cooking and to expand the social aspects of cooking. For example, when you finish your recipe, why not provide a simple mechanism to take photos and share your progress in the cooking game with friends and family?

Copyright © 2015