Tuesday, May 23, 2017

Cloud computing, agile software development and DevOps are a holy trinity

Agile methodologies, DevOps and Cloud technologies are sometimes referred to as the holy trinity of the modern software development world. One of the promises of the Agile approach was to make software release cycles shorter. Gone are the days when software firms could ship a major release every two years (with perhaps one or two service packs in between). Competitive pressure requires a much higher delivery frequency. Product owners started adopting hypothesis-driven approaches to quickly validate assumptions and close feedback loops.

To support such fast-paced environments, organizations needed to reduce friction between different teams as much as possible. Around 2009, DevOps was born. It is a way of thinking, a philosophy of sorts. It brings different teams together into a cohesive unit focused on end-to-end delivery. By eliminating manual steps and automating as much as possible, DevOps has shifted the focus from the traditional questions of stability and efficiency towards agility and unlocking teams’ innovation potential.

To be successful in innovation, organizations have to rely on the creative thinking and engineering talent of their staff as the key inputs. Humans are good at thinking, and engineers enjoy solving new problems. All mundane, repetitive tasks can and should be automated. These tasks are still a necessary part of the overall software development cycle, but it has become obvious that human talent is wasted on them. Ask QA engineers how they feel about performing manual regression tests. Automating the repetitive tasks is one of the DevOps deliverables: organizations gain pace, reduce the risk of human error and allow their engineers to focus on the important things.

A/B testing, hypothesis validation, faster feedback loops… They all require quick provisioning and elasticity. In the traditional IT world, it could take weeks to provision a new server. Software delivery teams couldn’t and didn’t want to wait that long. And what if the server is needed only for a couple of weeks to quickly validate an idea?!

This is how the cloud was born. Public cloud offerings are becoming mainstream, and more and more companies find it tempting to rent compute resources (OPEX) instead of investing in their own hardware and data centers (CAPEX).

The public cloud model is perfect for a start-up. More mature businesses with an existing hardware footprint often choose the hybrid model in order to protect their existing investment. This model offers the best of both worlds: legacy applications continue running in the existing datacenters (private cloud), with the ability to overflow into the public cloud. The public cloud’s elasticity can provide immediate access to the required resources where it makes sense.

There are several reasons why a company might decide to go down this path. It could be a hard-to-maintain legacy application that would be nearly impossible to migrate. Or the company might be unwilling to move sensitive data or mission-critical applications, as the risk of doing so might outweigh the perceived benefits of the public cloud. Private cloud or datacenter infrastructure can continue running static workloads (the ones that don’t require the adaptive capabilities of the public cloud), providing additional security benefits or meeting regulatory requirements where sensitive data cannot be stored in certain geographies.

Modern software development practices push us to break legacy monoliths into independent microservices, and some of this newly written code can be easily moved to the cloud.

It is a mistake to think that DevOps is just "IT for the Cloud". In fact, DevOps really shines for companies considering the hybrid cloud approach.

What are the key DevOps considerations when running in a hybrid cloud environment?


Define your technology stack and stick to it


Avoid creating a "zoo of different technologies". I often use this term to describe a situation where different teams select similar but different technologies (e.g. MSMQ, SQS and Redis as queuing mechanisms) that provide roughly the same functionality. Identify the key “building blocks” your application(s) consist of and socialize them with everyone on the team. For example, if the front-end team has decided to use Angular, then it should remain your weapon of choice for some time. New JavaScript frameworks get released every six months; it would be a mistake to chase “the latest and greatest” flavor of the day.

The "sticking to it" part doesn’t mean stagnation. But every proposed change should be justified – what will the new technology give us compared to what we already have. If you decide to go ahead, then aim to replace existing technology, not to run the new one in parallel. A short overlapping period is fine but running 2 similar technologies for a long time is what creates a hard to support "zoo" and eventually results in increased technical debt.

Seek feature parity between the on-prem and cloud-based deployments


When running in hybrid mode it might be tempting to choose cloud-native technologies in the cloud part of your infrastructure, for example SQS for simple queuing. In most cases this would be a mistake: you break the parity principle, meaning that you now must support two different technologies. The two parts won’t look the same, you will have to deploy them differently, and testing will most likely differ too. Consequently, it is better to select technologies that can be deployed into both parts of the hybrid solution.
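To illustrate the parity principle, here is a minimal sketch assuming Redis has been chosen as the queuing technology on both sides (the hostnames are hypothetical): the application code is identical everywhere, and only configuration differs between the on-prem and cloud deployments.

    import os
    import redis

    # The same code path serves both halves of the hybrid deployment;
    # only the endpoint changes between on-prem and cloud.
    queue_host = os.environ.get("QUEUE_HOST", "redis.onprem.example.com")

    r = redis.Redis(host=queue_host, port=6379)
    r.lpush("orders", '{"order_id": 1}')    # enqueue
    item = r.brpop("orders", timeout=5)     # dequeue (blocks up to 5 seconds)
    if item:
        print(item[1].decode())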

Abstract the underlying components and infrastructure from your application


If the temptation to use cloud-native technologies is still too strong, then at least make an effort to architecturally abstract these pieces into a separate layer. Avoid making direct calls to these components from all around the application. Instead, implement a thin abstraction layer as a single entry point and provide this functionality as an easy-to-consume library/module. By doing this you will make it easier to upgrade or replace the underlying technology later on. It will also reduce your reliance on a particular cloud provider (“vendor lock-in”), making your overall solution more portable.
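As a rough sketch of such a layer (the class and method names are my own, not from any particular library), the application depends only on a small interface, while the SQS- and Redis-specific details stay hidden behind it:

    from abc import ABC, abstractmethod
    from typing import Optional

    class MessageQueue(ABC):
        """Single entry point for queuing; the rest of the application
        never talks to SQS or Redis directly."""

        @abstractmethod
        def send(self, body: str) -> None: ...

        @abstractmethod
        def receive(self) -> Optional[str]: ...

    class SqsQueue(MessageQueue):
        def __init__(self, queue_url: str):
            import boto3  # the cloud-only dependency stays inside this layer
            self._sqs = boto3.client("sqs")
            self._url = queue_url

        def send(self, body: str) -> None:
            self._sqs.send_message(QueueUrl=self._url, MessageBody=body)

        def receive(self) -> Optional[str]:
            # (a production version would also delete received messages)
            resp = self._sqs.receive_message(QueueUrl=self._url,
                                             MaxNumberOfMessages=1)
            msgs = resp.get("Messages", [])
            return msgs[0]["Body"] if msgs else None

    class RedisQueue(MessageQueue):
        def __init__(self, host: str, name: str):
            import redis
            self._redis = redis.Redis(host=host)
            self._name = name

        def send(self, body: str) -> None:
            self._redis.lpush(self._name, body)

        def receive(self) -> Optional[str]:
            item = self._redis.brpop(self._name, timeout=5)
            return item[1].decode() if item else None

Swapping the underlying technology later becomes a one-line change wherever the queue is constructed, rather than a hunt through the whole codebase.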

Avoid manual changes at all costs


Consistency is key to operating large, complex, fast-paced environments at scale. You can’t afford to have unique snowflakes in your environment. The "cattle, not pets" mantra means all of our individual assets should look the same, and any changes to the setup or configuration should go via the orchestration layer. When troubleshooting production issues, it might sometimes be tempting to log in to a server and quickly make changes manually. This is driven by a false sense of convenience: if making changes automatically sounds slow or inconvenient, it’s a sign that your automation/orchestration toolset needs to be improved. This leads us to the last point.
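A minimal sketch of "changes via code, not via SSH" (the file name and settings below are hypothetical): the desired state is declared in code, and an idempotent function converges the machine towards it, so running it twice is always safe.

    import json
    import pathlib

    DESIRED = {"max_connections": 500, "log_level": "info"}

    def ensure_config(path: pathlib.Path, desired: dict) -> bool:
        """Converge a config file to the desired state; return True if changed."""
        current = json.loads(path.read_text()) if path.exists() else {}
        if current == desired:
            return False  # already converged -- idempotent, safe to re-run
        path.write_text(json.dumps(desired, indent=2))
        return True

    changed = ensure_config(pathlib.Path("app_config.json"), DESIRED)
    print("converged" if changed else "already up to date")

Real-world tools such as Ansible, Chef, Puppet or Terraform apply the same converge-to-desired-state idea at fleet scale.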

Select the right automation platform


It is important to select the right tools to automate and manage your infrastructure. These tools should be able to control both the on-prem and the cloud parts, providing a seamless, "single pane of glass" view of your infrastructure. They should speak the "native language" of each cloud platform and be convenient enough for the DevOps team to use them as the primary (and only!) way of maintaining the environment.
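As a closing sketch of the "single pane of glass" idea (the classes below are illustrative, not a real tool), a single entry point dispatches to whichever backend owns the target environment:

    from abc import ABC, abstractmethod

    class Provisioner(ABC):
        @abstractmethod
        def create_vm(self, name: str) -> None: ...

    class OnPremProvisioner(Provisioner):
        def create_vm(self, name: str) -> None:
            # a real tool would call the on-prem virtualization API here
            print(f"provisioning {name} on-prem")

    class CloudProvisioner(Provisioner):
        def create_vm(self, name: str) -> None:
            # a real tool would call the cloud provider's native API here
            print(f"provisioning {name} in the public cloud")

    BACKENDS = {"onprem": OnPremProvisioner(), "cloud": CloudProvisioner()}

    def provision(target: str, name: str) -> None:
        """One command, one view, regardless of where the asset lives."""
        BACKENDS[target].create_vm(name)

    provision("onprem", "web-01")
    provision("cloud", "web-02")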

Keywords: Agile, DevOps, Cloud, "hybrid cloud", "public cloud"
