27 May 2012

Continuous delivery/deployment

Have been reading the Continuous Delivery book (http://continuousdelivery.com/). Really like the ideas:


Infrastructure as code

The desired state of your infrastructure should be specified through version-controlled configuration.

You should always know the actual state of your infrastructure through your instrumentation and monitoring.

Automated deployment of infrastructure includes (a rough sketch follows this list):
  • operating system configuration
  • middleware stack and its configuration: app servers, messaging systems, databases
  • network infrastructure: routers, firewalls, switches, DNS, DHCP, DMZs
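
As a rough illustration of the "desired state in version control" idea (my own sketch, not from the book; the names DesiredState and plan are made up), the desired configuration can be plain data that a tool diffs against the actual state and converges idempotently:

    # Hypothetical sketch: desired infrastructure state kept as version-controlled
    # data, with an idempotent "plan and apply" step. Structure and names are
    # illustrative, not taken from the book or from any particular tool.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DesiredState:
        packages: frozenset      # OS packages that must be installed
        services: frozenset      # middleware services that must be running
        open_ports: frozenset    # firewall ports that must be open

    def plan(desired: DesiredState, actual: DesiredState) -> list:
        """Compute the steps needed to converge actual state to desired state."""
        steps = []
        steps += [("install", p) for p in desired.packages - actual.packages]
        steps += [("start", s) for s in desired.services - actual.services]
        steps += [("open", port) for port in desired.open_ports - actual.open_ports]
        return steps

    if __name__ == "__main__":
        desired = DesiredState(frozenset({"openjdk-7", "nginx"}),
                               frozenset({"nginx", "activemq"}),
                               frozenset({80, 443}))
        actual = DesiredState(frozenset({"openjdk-7"}),
                              frozenset({"nginx"}),
                              frozenset({80}))
        for step in plan(desired, actual):
            print(step)   # a real tool would execute each step idempotently

Because the desired state lives in version control, a change to the infrastructure is reviewed and deployed the same way as a change to the application code.
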
Acceptance tests

Keeping the complex acceptance tests running will take time from your development team. However, this cost is in the form of an investment which, in our experience, is repaid many times over in reduced maintenance costs, the protection that allows you to make wide-ranging changes to your application, and significantly higher quality. This follows our general principle of bringing the pain forward in the process. We know from experience that without excellent automated acceptance test coverage, one of three things happens:
  • Either a lot of time is spent trying to find and fix bugs at the end of the process when you thought you were done, or 
  • You spend a great deal of time and money on manual acceptance and regression testing, or 
  • You end up releasing poor-quality software.
It is difficult to enumerate the reasons that make a project warrant such an investment in automated acceptance testing. For the type of projects that we usually get involved in, our default is that automated acceptance testing and an implementation of the deployment pipeline are usually a sensible starting point. For projects of extremely short duration with a small team, maybe four or fewer developers, it may be overkill—you might instead run a few end-to-end tests as part of a single-stage CI process. But for anything larger than that, the focus on business value that automated acceptance testing gives to developers is so valuable that it is worth the costs. It bears repeating that large projects start out as small projects, and by the time a project gets large, it is invariably too late to retro-fit a comprehensive set of automated acceptance tests without a Herculean level of effort. We recommend that the use of automated acceptance tests created, owned, and maintained by the delivery team should be the default position for all of your projects.
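
As a toy illustration of what such a test can look like (my own sketch, not from the book; the driver and its methods are hypothetical), an automated acceptance test is written in business terms against a thin driver over the application's public interface, and in a real pipeline it would run against a deployed, production-like environment:

    # Illustrative acceptance test. The driver below is an in-memory stand-in;
    # in practice it would wrap the real application's public interface.
    import unittest

    class OrderSystemDriver:
        """Hypothetical driver layer hiding UI/HTTP details from the test."""
        def __init__(self):
            self._orders = {}
            self._next_id = 1

        def place_order(self, sku, quantity):
            order_id = self._next_id
            self._next_id += 1
            self._orders[order_id] = {"sku": sku, "quantity": quantity, "status": "ACCEPTED"}
            return order_id

        def order_status(self, order_id):
            return self._orders[order_id]["status"]

    class PlaceOrderAcceptanceTest(unittest.TestCase):
        def test_customer_can_place_an_order_and_see_it_accepted(self):
            app = OrderSystemDriver()
            order_id = app.place_order("SKU-42", 3)
            self.assertEqual(app.order_status(order_id), "ACCEPTED")

    if __name__ == "__main__":
        unittest.main()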


Capacity Testing System

The capacity test system is usually the closest analog to your expected production system. As such, it is a very valuable resource. Further, if you follow our advice and design your capacity tests as a series of composable, scenario-based tests, what you really have is a sophisticated simulation of your production system. This is an invaluable resource for a whole variety of reasons. We have discussed already why scenario-based capacity testing is of importance, but given the much more common approach of benchmarking specific, technically focused interactions, it is worth reiterating. Scenario-based testing provides a simulation of real interactions with the system. By organizing collections of these scenarios into complex composites, you can effectively carry out experiments with as much diagnostic instrumentation as you wish in a production-like system. We have used this facility to help us perform a wide variety of activities:
  • Reproducing complex production defects
  • Detecting and debugging memory leaks
  • Longevity testing
  • Evaluating the impact of garbage collection
  • Tuning garbage collection
  • Tuning application configuration parameters
  • Tuning third-party application configuration, such as operating system, application server, and database configuration
  • Simulating pathological, worst-day scenarios
  • Evaluating different solutions to complex problems
  • Simulating integration failures
  • Measuring the scalability of the application over a series of runs with different hardware configurations
  • Load-testing communications with external systems, even though our capacity tests were originally intended to run against stubbed interfaces
  • Rehearsing rollback from complex deployments
  • Selectively failing parts of the application to evaluate graceful degradation of service
  • Performing real-world capacity benchmarks in temporarily available production hardware so that we could calculate more accurate scaling factors for a longer-term, lower-specification capacity test environment
This is not a complete list, but each of these scenarios comes from a real project.
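
A small sketch of what "composable, scenario-based" capacity tests could look like (my own illustration; the scenario functions and weights are made up): each scenario simulates one kind of real interaction, and composites mix them so that experiments like the ones above can be run against a production-like environment:

    # Illustrative sketch of composable capacity-test scenarios. The sleeps
    # stand in for driving the real application; names are hypothetical.
    import time

    def browse_catalogue():
        time.sleep(0.01)      # stand-in for a typical read-mostly interaction

    def place_order():
        time.sleep(0.02)      # stand-in for a heavier write interaction

    def run_composite(scenarios, iterations):
        """Run a weighted mix of scenarios and report elapsed time per scenario."""
        timings = {}
        for name, scenario, weight in scenarios:
            start = time.time()
            for _ in range(iterations * weight):
                scenario()
            timings[name] = time.time() - start
        return timings

    if __name__ == "__main__":
        worst_day_mix = [
            ("browse", browse_catalogue, 8),   # mostly browsing...
            ("order", place_order, 2),         # ...with a steady stream of orders
        ]
        print(run_composite(worst_day_mix, iterations=10))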

Release Strategy

The most important part of creating a release strategy is for the application’s stakeholders to meet up during the project planning process. The point of their discussions should be working out a common understanding concerning the deployment and maintenance of the application throughout its lifecycle. This shared understanding is then captured as the release strategy. This document will be updated and maintained by the stakeholders throughout the application’s life. When creating the first version of your release strategy at the beginning of the project, you should consider including the following:
  • Parties in charge of deployments to each environment, as well as in charge of the release.
  • An asset and configuration management strategy.
  • A description of the technology used for deployment. This should be agreed upon by both the operations and development teams.
  • A plan for implementing the deployment pipeline.
  • An enumeration of the environments available for acceptance, capacity, integration, and user acceptance testing, and the process by which builds will be moved through these environments.
  • A description of the processes to be followed for deployment into testing and production environments, such as change requests to be opened and approvals that need to be granted.
  • Requirements for monitoring the application, including any APIs or services the application should use to notify the operations team of its state.
  • A discussion of the method by which the application’s deploy-time and runtime configuration will be managed, and how this relates to the automated deployment process.
  • A description of the integration with any external systems. At what stage and how are they tested as part of a release? How do the operations personnel communicate with the provider in the event of a problem?
  • Details of logging so that operations personnel can determine the application’s state and identify any error conditions.
  • A disaster recovery plan so that the application’s state can be recovered following a disaster.
  • The service-level agreements for the software, which will determine whether the application will require techniques like failover and other high-availability strategies.
  • Production sizing and capacity planning: How much data will your live application create? How many log files or databases will you need? How much bandwidth and disk space will you need? What latency are clients expecting?
  • An archiving strategy so that production data that is no longer needed can be kept for auditing or support purposes.
  • How the initial deployment to production works.
  • How fixing defects and applying patches to the production environment will be handled.
  • How upgrades to the production environment will be handled, including data migration.
  • How application support will be managed.

Release Plan
  • The steps required to deploy the application for the first time
  • How to smoke-test the application and any services it uses as part of the deployment process (a sketch of such a check follows this list)
  • The steps required to back out the deployment should it go wrong
  • The steps required to back up and restore the application’s state
  • The steps required to upgrade the application without destroying the application’s state
  • The steps to restart or redeploy the application should it fail
  • The location of the logs and a description of the information they contain
  • The methods of monitoring the application
  • The steps to perform any data migrations that are necessary as part of the release
  • An issue log of problems from previous deployments, and their solutions
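
A minimal sketch of the smoke-test step mentioned above (my own example; the URLs and endpoints are made up): after deployment, hit the application and the services it depends on, and fail the deployment if anything does not respond:

    # Hypothetical post-deployment smoke test. A non-zero exit code fails the
    # deployment step in the pipeline. URLs are illustrative only.
    import sys
    import urllib.request

    CHECKS = {
        "application": "http://app.example.com/health",
        "search service": "http://search.example.com/ping",
        "database proxy": "http://dbproxy.example.com/status",
    }

    def smoke_test(checks, timeout=5):
        failures = []
        for name, url in checks.items():
            try:
                with urllib.request.urlopen(url, timeout=timeout) as response:
                    if response.status != 200:
                        failures.append("%s returned %s" % (name, response.status))
            except OSError as error:
                failures.append("%s unreachable: %s" % (name, error))
        return failures

    if __name__ == "__main__":
        problems = smoke_test(CHECKS)
        for problem in problems:
            print("SMOKE TEST FAILED:", problem)
        sys.exit(1 if problems else 0)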
...

There are several methods of performing a rollback that we will discuss here.

The more advanced techniques—blue-green deployments and canary releasing—can also be used to perform zero-downtime releases and rollbacks. Before we start, there are two important constraints. The first is your data. If your release process makes changes to your data, it can be hard to roll back. Another constraint is the other systems you integrate with. With releases involving more than one system (known as orchestrated releases), the rollback process becomes more complex too.

There are two general principles you should follow when creating a plan for rolling back a release. The first is to ensure that the state of your production system, including databases and state held on the filesystem, is backed up before doing a release. The second is to practice your rollback plan, including restoring from the backup or migrating the database back before every release to make sure it works.
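
A sketch of the first principle (my own illustration; the paths and directory layout are hypothetical): take a timestamped copy of the application's state before every release, and keep the restore step as code so it can actually be rehearsed:

    # Illustrative backup/restore helpers for the "back up before every
    # release" principle. Paths are made-up examples.
    import shutil
    import time
    from pathlib import Path

    def back_up_state(data_dir: Path, backup_root: Path) -> Path:
        """Copy the application's state to a timestamped backup directory."""
        stamp = time.strftime("%Y%m%d-%H%M%S")
        target = backup_root / ("pre-release-" + stamp)
        shutil.copytree(data_dir, target)
        return target

    def restore_state(backup_dir: Path, data_dir: Path) -> None:
        """The rollback step that should be rehearsed before every release."""
        if data_dir.exists():
            shutil.rmtree(data_dir)
        shutil.copytree(backup_dir, data_dir)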

...

One way to approach this problem is to put the application into read-only mode shortly before switchover. You can then take a copy of the green database, restore it into the blue database, perform the migration, and then switch over to the blue system. If everything checks out, you can put the application back into read-write mode. If something goes wrong, you can simply switch back to the green database. If this happens before the application goes back into read-write mode, nothing more needs to be done. If your application has written data you want to keep to the new database, you will need to find a way to take the new records and migrate them back to the green database before you try the release again. Alternatively, you could find a way to feed transactions to both the new and old databases from the new version of the application.
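
The read-only switchover described above can be expressed as a short orchestration script. This sketch is my own; the helper callables (set_read_only, copy_green_to_blue, and so on) are hypothetical stand-ins for the real operations:

    # Sketch of a blue-green switchover with a database migration. Green stays
    # untouched, so switching back remains safe until read-write mode resumes.
    def blue_green_switchover(set_read_only, copy_green_to_blue, migrate_blue,
                              route_traffic_to, smoke_test_blue):
        set_read_only(True)          # stop writes shortly before switchover
        copy_green_to_blue()         # restore green's data into the blue database
        migrate_blue()               # run the schema/data migration on blue
        route_traffic_to("blue")     # switch users over to the new version
        if smoke_test_blue():
            set_read_only(False)     # everything checks out: accept writes again
            return "released"
        route_traffic_to("green")    # something is wrong: fall back to green,
        set_read_only(False)         # which has not been modified
        return "rolled back"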

Continuous Deployment

Continuous deployment isn’t for everyone. Sometimes, you don’t want to release new features into production immediately. In companies with constraints on compliance, approvals are required for deployments to production. Product companies usually have to support every release they put out. However, it certainly has the potential to work in a great many places.

The intuitive objection to continuous deployment is that it is too risky. But, as we have said before, more frequent releases lead to lower risk in putting out any particular release. This is true because the amount of change between releases goes down. So, if you release every change, the amount of risk is limited just to the risk inherent in that one change. Continuous deployment is a great way to reduce the risk of any particular release.

Perhaps most importantly, continuous deployment forces you to do the right thing (as Fitz points out in his blog post). You can’t do it without automating your entire build, deploy, test, and release process. You can’t do it without a comprehensive, reliable set of automated tests. You can’t do it without writing system tests that run against a production-like environment. That’s why, even if you can’t actually release every set of changes that passes all your tests, you should aim to create a process that would let you do so if you choose to.

Your authors were really delighted to see the continuous deployment article cause such a stir in the software development community. It reinforces what we’ve been saying about the release process for years. Deployment pipelines are all about creating a repeatable, reliable, automated system for getting changes into production as fast as possible. It is about creating the highest quality software using the highest quality process, massively reducing the risks of the release process along the way. Continuous deployment takes this approach to its logical conclusion. It should be taken seriously, because it represents a paradigm shift in the way software is delivered. Even if you have good reasons for not releasing every change you make—and there are fewer such reasons than you might think—you should behave as if you were going to do so.

17 May 2012

Growing a Team By Investing in Tools

Where many teams struggle, however, is in finding the right balance between building out the product and “sharpening the saw” by improving their tools and automating processes. With the same people attempting to do both jobs, that contention is unavoidable.

The solution, then, is to build a second team whose job is entirely focused on building, maintaining, configuring, and improving the tools that the product team uses. Since the majority of tools used by a project are fairly orthogonal to the project itself – and often shared with other product teams – the tools team can be much more independent, reducing the communication overhead. Not only do the product team now have more time to devote to improving the product, they are also continuously made more productive by the work done by the tools team.

10 May 2012

Engineering Management - Process

A process created and implemented by the individual contributors themselves is much more tuned to optimize the real work being done. A process designed by managers at best approximates the actual flow of work that it needs to govern, optimize, or codify. This is the source of many clueless and inefficient processes.

People feel greater ownership over a process they create themselves, and are thus more empowered to adjust it in the future as circumstances inevitably evolve, rather than allowing it to calcify. Process imposed externally ("from above") is more difficult to break down and tends to be unnecessarily enshrined, producing extra organizational inertia.