The phrase “software is eating the world” has become one of the most popular catchphrases among IT professionals. While it’s often thrown around casually, there is a substantial body of data behind this seemingly simple claim. Software development, and in particular enabling the developer in the enterprise data center, has become the foundation of the broader business and organizational movement toward a DevOps-centric model. And when you look at the numbers, the gap between businesses that follow these strategies and those that don’t is striking.
Why DevOps is important
Over the past few years, Puppet Labs has conducted annual surveys of organizations covering all kinds of metrics critical to both IT and business operational efficiency. Its conclusions are staggering. Want to recover from failures 168 times faster? Would you like to release a product 200 times faster? The data is clear: foster a DevOps-enabled culture.
There are solid business reasons to head down this path as well. Wouldn’t you like to work in an organization that’s twice as likely to exceed profitability and productivity expectations? How about having an organization with 50 percent higher market capitalization growth?
Forget about technology for a second, because in the end, technology is only a business enabler. The potential for organizational success here is remarkable. As with many changes, people looking to enable this process within an organization often search for a quick answer. Realistically, however, this isn’t a change that happens overnight.
As IT professionals in an ever-changing landscape, we’re balancing legacy gear and implementation processes against a new wave of business application innovation, one that demands faster turnaround and entirely new tooling to support infrastructure and service releases. Business leaders are at the mercy of the “I want it now” generation. Generally, these customers don’t care what’s under the technological covers; all they really care about is whether they can access the shiny application they downloaded from the app store on their touchscreen device. The pitch for DevOps is a direct result of this service-driven, time-to-market mentality, and it is being pushed by people who are primarily interested in direct business outcomes. In most cases, these projects either roll downhill from leadership or bubble up from an internal champion as part of an innovation project. Either way, as these ideas and challenges develop, it’s time to build a strategy around the ultimate “how” for an organization. As with any strategy, it’s fundamental to understand what goes into putting these motions into practice.
How feedback loops help us better understand change
A core tenet of any DevOps methodology is measurement. Figuring out what to measure is unique to each organization’s product set and usually involves looking at key performance indicators (KPIs) tied to an organizational initiative. Application response time, database transaction time, and order processing time are all good examples of metrics that could serve as KPIs for application effectiveness. By monitoring these metrics, organizations can evaluate change through simple and effective feedback loops. It doesn’t really matter whether the change is to application code or to infrastructure (a firewall rule, a network port, etc.); as long as the new application feature or production environment change was released while maintaining or improving the KPI, it’s easy to demonstrate change effectiveness. This change-and-verify mentality ensures that quality improves, while the work done to keep the lights on has built-in checks so that reliability standards aren’t impacted.
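As a minimal sketch of this change-and-verify loop, the script below gates a hypothetical deployment on a response-time KPI. The threshold, the `measure_response_ms` probe, and its fixed return value are all assumptions for illustration, not a real monitoring tool:

```shell
#!/bin/sh
# Hypothetical KPI gate: keep a change only if response time stays under budget.
THRESHOLD_MS=250

measure_response_ms() {
  # Stand-in for a real probe (e.g., an HTTP timing check against the app);
  # returns a fixed value here for the sake of the sketch.
  echo 180
}

latency=$(measure_response_ms)
if [ "$latency" -le "$THRESHOLD_MS" ]; then
  echo "KPI OK: ${latency}ms <= ${THRESHOLD_MS}ms - keep the change"
else
  echo "KPI degraded: ${latency}ms > ${THRESHOLD_MS}ms - roll back"
fi
```

In a real pipeline, the probe would query a monitoring system and the “roll back” branch would trigger the revert workflow described later in this post.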
By enabling both the feedback loop and also the method of change, IT organizations can enable an environment with built-in capabilities for greater experimentation, as well as ensure changes are seen less as a science project and more as a surgical procedure. With greater experimentation in an organization, there is more chance to develop ideas or services that can win in the marketplace. How do we set the stage for this process? Code control.
Code Control: Ensure deployments are consistent and repeatable
Monitoring changes in all environments is key. Where we’re really starting to see traction in organizations, though, is in automating deployments. In a world where infrastructure aspires to live as code, something needs to house that code. Much as humans need the base layers of Maslow’s hierarchy of needs (even below the jokingly added battery and wifi tiers), code needs a place to live at the physiological layer. Shelter for code in this hierarchy is a simple version control system. To reach the desired state of automated deployments, establishing a single repository for code is the first step, ensuring that organizations can roll back if a KPI indicates a problem during deployment. This basic tenet of checking code into a repository and releasing production code from it is referred to as repository code control.
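A minimal sketch of that first step, creating a single repository and checking infrastructure code into it, might look like the following (the file name, its contents, and the placeholder identity are illustrative):

```shell
#!/bin/sh
# Sketch: stand up a single version-controlled repository for infrastructure code.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ops@example.com"   # placeholder identity for the example
git config user.name  "Ops Example"

# Check an infrastructure definition into the repository.
echo "port: 8080" > service-config.yml
git add service-config.yml
git commit -q -m "Initial service configuration"

git log --oneline   # one commit: our first point-in-time reference
```

From here, every change to `service-config.yml` goes through the repository, which is what makes automated rollback possible at all.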
Git: Today’s code control system
Code control is as old as the hills. From file cabinets and punch card boxes to floppy disks and FTP servers and on to SVN, the basic tenet of code control has been to keep code safe and secure while developers make changes. Earlier version control systems only allowed a developer to check out a piece of code (much like you’d check out a library book) while a change was worked on.
With all technology, however, innovation is relentless, and developers quickly realized that the next step would be multiple people collaborating on the same areas of code simultaneously. Following a falling-out with the company that had been providing free access to the Linux community for distributed kernel development, the Linux kernel team decided to create its own version control system, applying lessons learned from the previous tool. This new system, named Git, allowed multiple developers to work on different branches of code simultaneously. This distributed version of code control quickly became mainstream and is now considered the de facto choice for people learning to code. Later, GitHub was founded as a social place to share these repositories, and Git/GitHub sparked a frenzy in social source control.
Because Git allows distributed work on the same set of source code, multiple areas of code can be updated and improved collaboratively at the same time. And because it uses snapshotting, Git truly changes the way we can look at storing our infrastructure code. The concept of a snapshot is powerful: when a developer is happy with where the code has arrived, they simply commit it, which creates a snapshot. This snapshot, or commit, becomes a point-in-time reference for the state of that section of code. Using other tooling in our DevOps toolkit, we can set up automated testing of each commit to get immediate feedback on the validity of our code (more on that in a later post).
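To illustrate commits as point-in-time snapshots, this sketch makes two commits in a throwaway repository and then compares them; the file name and values are hypothetical:

```shell
#!/bin/sh
# Sketch: each commit is a snapshot we can inspect or compare later.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # placeholder identity for the example
git config user.name  "Dev Example"

echo "timeout: 30" > app.conf
git add app.conf
git commit -q -m "Baseline configuration"

echo "timeout: 60" > app.conf
git commit -aqm "Raise timeout to 60s"

git log --oneline        # two snapshots, newest first
git diff HEAD~1 HEAD     # exactly what changed between the two snapshots
```

`git diff` between any two commits shows precisely what a change introduced, which is what makes the feedback loop concrete rather than guesswork.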
Note: Just because we’ve committed code doesn’t mean we’ve sent our changes into production. Using Git’s branching feature, we can work on test or development branches of our code. By branching, we can meet requirements for release management, code review, and code segregation, all of which are table stakes for showing that the system is enterprise-ready. When our branch has been tested, a pull request (essentially kicking off the promote-to-production workflow) can be issued to our production release engineer, who has the segregated responsibility of merging our branch into production and applying the change.
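The branch, test, and merge flow described above can be sketched locally as follows. Branch names and files here are illustrative, and in practice the merge step would follow a reviewed pull request on a shared server rather than happen on one machine:

```shell
#!/bin/sh
# Sketch: develop on a branch, then merge to the production branch after review.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # placeholder identity for the example
git config user.name  "Dev Example"
git checkout -qb main                     # name the production branch explicitly

echo "replicas: 2" > deploy.yml
git add deploy.yml
git commit -qm "Production baseline"

# Develop the change on its own branch, keeping production isolated.
git checkout -qb feature/scale-up
echo "replicas: 4" > deploy.yml
git commit -aqm "Scale service to 4 replicas"

# After review (the pull request step), the release engineer merges to production.
git checkout -q main
git merge -q --no-ff -m "Merge feature/scale-up" feature/scale-up
git log --oneline
```

The `--no-ff` merge preserves a dedicated merge commit, which keeps the promote-to-production step visible in history for auditing.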
Leveraging code control’s features to recover from disaster
Sometimes even the most thoroughly tested changes go wrong when they’re released to production. If a merge into production and the subsequent deployment goes wrong, we can quickly back out to a previous snapshot in our version control system. This revert completely restores the previous code snapshot, and with it the previous state of our environment. The trick is to recover quickly and exactly in the event of a failure. A feedback mechanism, as discussed earlier, to identify when things have gone wrong, plus a source of truth for previous versions of the codebase (infrastructure or application) to roll back to, gives us exactly what’s needed to revert changes or code releases.
Infrastructure as code: Taking the next step
Once you’re comfortable with the code deployment process developers have used for years, it’s natural to start controlling configurations on systems, applications, firewalls, and networks with tools that link to version control. By enabling version control here, you limit organizational risk from environment changes, because the rollback procedure is as simple as rolling back your code changes. Application deployment fails in production? Revert to the last working commit. It’s that easy.
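As an illustrative sketch of making those rollback targets unambiguous, known-good releases of an infrastructure file can be tagged so that recovery restores a named state rather than a guessed-at commit; the tag name and firewall rules file here are hypothetical:

```shell
#!/bin/sh
# Sketch: tag known-good infrastructure releases, then restore from the tag.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ops@example.com"   # placeholder identity for the example
git config user.name  "Ops Example"

echo "rule: allow 443" > firewall.rules
git add firewall.rules
git commit -qm "Release 1.0 firewall rules"
git tag v1.0                              # marker for the known-good release

echo "rule: allow 443 8443" > firewall.rules
git commit -aqm "Open port 8443"

# Deployment fails: restore the file exactly as it was at the tagged release.
git checkout -q v1.0 -- firewall.rules
cat firewall.rules
```

Tags give operations teams a shared vocabulary for rollback (“go back to v1.0”) instead of hunting through commit hashes during an incident.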
The foundation for an environment that is controlled by code and can tie into the capabilities needed to enable DevOps is a system for code control and KPI monitoring. If you haven’t considered what platform to use for code control in your environment today, or don’t have a strategy for using code to drive change, it’s time to start the conversation.
Schedule a time to meet with us in the AHEAD Lab and Briefing Center today!