DevOps is a relatively new term, so it is not surprising that there is confusion and ambiguity about its role and purpose.
The traditional approach to development, maintenance and deployment assumes clear divisions between developers, testing/QA and operations. Once the developers finish coding, the code is passed to the testing team to test the application; once testing is done and the developers have finalized the bug fixes, it is passed to operations for deployment and pushing the application to production.
This separation of responsibilities built ‘walls’ that often isolated groups which, in effect, worked against each other: the deployment team versus the development team, the development team versus the testing team, and so on. This setup also made the whole process long and inefficient, with most of these steps performed manually.
Today we are witnessing a range of changes within the industry.
Management has changed, moving towards scrum/agile methodologies. Development today is test-driven and relies on automation and continuous integration (CI) as never before; deployments are mostly in the cloud, which allows the infrastructure to be scaled in seconds.
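To illustrate the test-driven style mentioned above, here is a minimal sketch. The `slugify` function and its tests are invented for this example; in a CI setup, tests like these would be written first and run automatically on every push.

```python
# A minimal test-driven sketch. slugify() and its tests are invented
# for illustration; a CI server would run the tests on every commit.
import re

def slugify(title):
    """Turn an article title into a URL-friendly slug."""
    slug = title.lower().strip()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # non-alphanumerics become dashes
    return slug.strip("-")

# Tests written before (or alongside) the implementation:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  DevOps 101 ") == "devops-101"
```

The point is not the function itself but the workflow: the tests encode the requirements, and the pipeline refuses to ship code that fails them.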
Most businesses today are trying to cut time-to-market and increasingly make smaller, incremental releases as opposed to large deployments planned months in advance. Yet attempting to plan for and manage every risk in software development while making frequent releases is impossible, because technologies, setups, environments and other requirements are constantly changing. With this in mind, advance planning is more akin to shooting at a moving target. Instead, we focus on change itself: we build the system to anticipate and accommodate change, constantly monitoring and improving whilst minimizing costs and increasing efficiency.
Traditional network engineers and maintenance developers cannot support such an approach, and this is why we now have DevOps.
The introduction of DevOps has had two major impacts: cultural and technical changes.
It is always easier to change a technology stack or tools than mindsets and the way people approach or envisage problems. DevOps represents a cultural framework wherein change, deployments and deliveries happen all the time: the system only works if changes are frequent.
Today, DevOps teams usually do not make a fuss about, or even announce, ‘going live’; rather, releasing is a continuous process that happens every day, and continuous delivery/deployment is part of this culture.
DevOps treats infrastructure (servers, networks, storage, etc.) as code: it is automated and ‘coded’. A traditional network/system engineer would install and configure the servers, routers, switches and other equipment in the server room, deploy according to the deployment plan, report on it afterwards, and so on. DevOps engineers instead build automated scripts that template the infrastructure and operations. Once executed, these scripts ‘create’ networks, application servers, database servers, firewalls, VPNs and so on. The script then pulls the code, runs the tests, applies the necessary migrations and adjustments, and finally starts the platform. The next update happens automatically once new code is pushed to Git.
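The idea above can be sketched as a toy infrastructure-as-code pipeline. Everything here is hypothetical and simplified for illustration: the resource names are invented, and the functions only log what a real tool (such as Terraform or Ansible) would actually do against cloud APIs.

```python
# Toy infrastructure-as-code sketch. Resource names are invented and
# provision()/deploy() only log actions; a real tool would call cloud
# APIs and be idempotent (creating only what does not yet exist).

INFRASTRUCTURE = [
    {"type": "network",  "name": "app-net", "cidr": "10.0.0.0/24"},
    {"type": "server",   "name": "app-1",   "network": "app-net"},
    {"type": "server",   "name": "db-1",    "network": "app-net"},
    {"type": "firewall", "name": "fw-1",    "allow": [80, 443]},
]

def provision(template):
    """Walk the declarative template and 'create' each resource in order."""
    return [f"created {r['type']} {r['name']}" for r in template]

def deploy(commit):
    """The steps the text describes: pull code, test, migrate, run."""
    return [
        f"pulled {commit}",
        "tests passed",
        "migrations applied",
        "platform running",
    ]

if __name__ == "__main__":
    # A push to Git would trigger this whole sequence automatically.
    for line in provision(INFRASTRUCTURE) + deploy("a1b2c3d"):
        print(line)
```

The key design point is that the infrastructure is a declarative description, not a sequence of manual steps: re-running the same template should always converge on the same environment.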
DevOps has introduced a range of tools that enable automation and faster, more integrated deliveries. DevOps engineers also monitor platforms and manage and optimize cloud hosting costs.
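As a minimal sketch of the monitoring side, here is a toy health check. The metric names and thresholds are invented for illustration; real platforms would use dedicated tooling such as Prometheus or CloudWatch, with alerts feeding into on-call rotations.

```python
# Toy monitoring check, for illustration only. Metric names and
# thresholds are invented; real systems use dedicated monitoring tools.
def check_health(metrics, cpu_limit=0.8, mem_limit=0.9):
    """Return a list of alerts for any metric above its threshold."""
    alerts = []
    if metrics.get("cpu", 0.0) > cpu_limit:
        alerts.append("high CPU")
    if metrics.get("memory", 0.0) > mem_limit:
        alerts.append("high memory")
    return alerts

print(check_health({"cpu": 0.95, "memory": 0.5}))  # → ['high CPU']
```

The same kind of check, pointed at utilization and billing metrics, is what makes it possible to spot over-provisioned (and over-priced) cloud resources early.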