What Is DevOps?
DevOps is: a tool? A culture? An ethos? A behaviour? A buzzword? Getting one person to do two jobs?
I’ve heard DevOps described as all of the above at one stage or another. Maybe it depends on which side of the DevOps fence you sit on, if indeed you have to sit on one side or the other.
DevOps is an abbreviation of Development and Operations, traditionally two separate departments or teams. It’s actually even more confusing for someone who’s been around for some time. When I first started, I was a computer operator in the Operations team, distinct from the programmers and analysts. Operations in this regard means pretty much anyone who looks after production systems, including system administrators.
Of course, programmers are developers now and sysadmins are in the Operations team, so as far as people are concerned it’s a coming together of the Dev and Ops teams. It seems commonplace to merge the team functions under the development team, rather than bringing developers into the Ops team. Coming from an Operations background myself, I’m finding that Operations teams are more and more being line-managed by Development Managers.
But, on a human level, are DevOps people developers who can do sysadmin tasks, or sysadmins who like to code a bit? I’ve heard it said that DevOps personnel are actually either bad SAs who like to write code or bad developers who like to manage systems. This might be true in some cases. On the whole, what I’m seeing is production systems staying pretty much as they were: SAs not letting dev types anywhere near production servers. Where it’s all change is in the development and testing environments.
Traditionally, larger companies had a separate testing team. So devs would write code, SAs would deploy it and the testing team would ensure it all worked and was suitable for production use. This was done partly because certain industries, such as banking, were required to separate these functions: the maker and the checker needed to be different people. In other words, I couldn’t produce a new piece of code and then test it, document it and move it into production myself. Someone else needed to test it.
SAs have always helped developers with test environments. Back in the day a developer might come and see you, or ring you, and ask for a machine on which they could develop code or run some tests. Especially if they were developing a new app that needed the latest and greatest OS, one that hadn’t been certified for production use, they’d need something set up in a lab environment. Later, of course, it was a VM they needed. This was easy: you didn’t need to find an available physical server, you could just provision a VM, install the OS and get them up and running. Moving forward in time, an SA would be able to automatically provision a VM, install the OS and whatever else was needed.
Now, into the world of DevOps, where a developer can provision their own virtual machine or container for use. They don’t need to wait for anyone else. They probably still need to raise some kind of “paperwork”, albeit electronic, but they could have an environment up and running in a matter of minutes rather than days.
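As a rough sketch of what that self-service provisioning can look like (the image and container names here are hypothetical placeholders, and the Docker commands are printed as a dry run so the sketch stands alone without a Docker daemon):

```shell
# Hypothetical self-service dev environment via the plain Docker CLI.
# IMAGE and NAME are placeholders; a real developer would pick whatever
# their app needs.
IMAGE="ubuntu:22.04"
NAME="dev-env-example"

# Printed rather than executed; drop the echoes to actually provision.
echo "docker run -d --name ${NAME} ${IMAGE} sleep infinity"
echo "docker exec -it ${NAME} /bin/bash"
```

Compare that with raising a ticket and waiting days for an SA to rack a physical box: the environment exists the moment the command returns.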
The traditional SA role has transmogrified into one where they’re doing less and less “Linux” type stuff and more with automation tools such as Puppet, Chef and Ansible. There might come a point where SAs have been so deskilled on the OS that the only choice larger companies will have is to lean more and more on vendors; we see this happening already.
Ethos / Behaviour / Culture
As I alluded to earlier, I’ve been in IT a while, over 25 years in fact. Once you’ve been through the cycle a few times, you get a unique view of the way the industry goes. We went from using mainframes, before the advent of Ethernet and TCP/IP, when terminals were connected via racks of serial boards in cabinets, multiplexed to and from the mainframe. Next we went down the line of distributed systems: we’d still have the mainframe, but certain apps were placed on distributed Unix systems. These were networked together with massive yellow Ethernet cables, and servers were attached to the cable using transceivers (bee stings / vampire taps). This cable wasn’t very bendy and was much thicker than the Ethernet cables we use today. It was laid under the datacentre floor, and server connections had to be at least a metre (maybe two) apart. This was known as thick Ethernet (10BASE5).
When we got these servers up and running in a distributed fashion, we then had to increase their size, until the Unix servers were almost the size of the mainframe. Then a “new” idea came about: consolidate all of these servers into bigger servers, less is more. Once this was settled, it was back out to being distributed: commodity hardware, grids and so on. Now we’re into the world of clouds, and seem to have left “big iron” forever…
I digress, but many “new ideas” have actually been around for some time. Take containers on Linux servers, Docker for example. Linux containers had been available for years (LXC) before Docker, and had been around even longer on Solaris servers. Take an old idea, re-package it, re-brand it and sell it on. You could argue AWS is doing that today with a lot of their offerings. If you study for AWS certifications, it’s like taking an AWS sales course. If you’ve been in IT for a while, you know all the technology; you just need to learn AWS’s terminology and how much they charge for services.
I’m sorry, another digression (is that even a word?). Back to my point about culture.
The DevOps culture is one of collaboration. But hang on, didn’t we always collaborate? Every organisation I’ve ever been in has always collaborated, especially the Dev and Ops teams. Surely not more rebranding? Of course not; it’s right to re-examine collaboration, especially with the extra tooling available. I mean, why call someone on the phone when you can redirect them to a wiki page that’s been written on the subject? All joking aside, this is actually a very useful way of sharing information: tools that allow for better collaboration. Just try to keep the two-hour conference calls with 50 people on them to a minimum (you know who you are…).
Tools (Automate, Automate, Automate)
“Automate” is the current keyword surrounding DevOps. Of course, any good SA (and some bad ones) will automate everything they can. No one really likes doing the boring stuff, so if you see an issue, fix it and try to surround it with automation. However, in the DevOps arena it’s about automating the development lifecycle: getting a product from development, through testing and ultimately into production. The goal is to automate this entire process. Tools like Jenkins, Travis CI and Bamboo offer a way to do this, or at least help you achieve it. Imagine the scenario where you’re a developer with code in GitHub, which you and some colleagues are working on feverishly. Jenkins (for example) can be set up to grab the source from GitHub, build the app, run unit tests on it and, on completion, ultimately move the app to production, if that’s what you want. This process is commonly known as a CI/CD pipeline: Continuous Integration, Continuous Deployment (or maybe even Delivery).
This CI/CD process is probably only really usable on modern apps, ones that have been developed to run in this type of environment.
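The stages above can be sketched as a toy shell script. This is not how Jenkins or Travis CI actually implement it (they drive the stages from their own configuration), and the stage names and the commands in the comments are placeholders; it just shows the flow: each stage runs in order, and a failure anywhere stops the pipeline before anything reaches production.

```shell
# Toy sketch of a CI/CD pipeline: run each stage in sequence,
# aborting on the first failure.
set -e                          # any failing stage stops the pipeline

stages="checkout build unit-test deploy"
completed=""

for stage in $stages; do
    echo "[pipeline] running ${stage}"
    # A real pipeline would run commands here, e.g. 'git clone' for
    # checkout, 'make' for build, 'make test' for unit-test and a
    # deployment script for deploy -- all hypothetical in this sketch.
    completed="${completed}${stage} "
done

echo "[pipeline] finished: ${completed}"
```

A dedicated CI tool adds the parts the sketch leaves out: triggering on each push to GitHub, recording results, and gating the deploy stage on the tests passing.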
To help, or in fact allow, this continual development of an app, Agile methodologies are used. An app goes through a series of continual iterations. This is best explained with an example. We all know (and love?) MS Word. The development of Word would have been done following the Waterfall methodology, a non-iterative (sequential) design method. Microsoft would have sat down and gathered requirements, then designed the Word app, including all its features (spell checker, ruler, formatting, text selection, fonts etc.) from day one. They would then have implemented the software and run tests and verification, after which came ongoing maintenance. It works well: you have a completed app from day one, and any new features and revisions wait for the next development lifecycle, say in 12 months.
If a company today wanted to develop a competitor to Word, they would use the Agile methodology, which of course isn’t a methodology at all (yawn). So, as well as surrounding yourself with Scrum masters, yellow Post-it notes and stand-up meetings (try that in a global organisation), you would create the app differently. Rather than designing the app from start to finish, you’d gather requirements and start coding. You’d code, test and release, and the end user would use the first release. This wouldn’t be a fully feature-rich app, but it would be out there. Then constant iterations would be run, adding functionality as you went through additional releases. The advantage this gives is that additional features can be added during the development process; you don’t need to wait for V2.0, or even V1.5. It’s a continual thing. This working model is made possible by the automation I mentioned earlier; without it, it would all take too long. So the users of this new Word competitor would get continual improvements and enhancements, rather than a brave new Word app that someone forgot to add a spell checker to, followed by a year’s wait to get it incorporated into the next major release – imagine!
Obviously it’s not as simple as that; a lot of work needs to be done first to get the CI/CD pipeline automated. But it’s made easier with the tools available: you don’t need to be the guy (or gal) who writes 75 shell scripts to scp files around from test to prod, as products are available to help.
Maybe the DevOps movement is nothing more than taking Operations’ obsession with automation and moving it into the development world. Operations were always good at automating things, but in the past it was all very ad hoc and locally configured. Now large development houses have got involved and seen big revenue streams in developing tools to handle all the automation, even in production environments, with the likes of Puppet, Chef and Ansible. These tools are also being integrated with tools like Jenkins to enable automation of the CI/CD pipeline.
I’m not sure if it’s a realistic concern or not, but could the traditional SA role be being dumbed down? Instead of knowing network configuration, kernel parameters, using LVM and so on, are SAs being morphed into DevOps guys who just know how to use apps? When everything is automated, who will fix system issues? There will, of course, be self-healing systems, monitored to death. Naturally there’s a concern that any non-believers are beginning to sound like an ageing mainframe systems programmer from the 1990s, foreseeing doom and damnation with the advent of *NIX systems.
Still, if you can’t beat them, join them! DevOps is dead, long live DevOps….