DevOps - Workforce of the Future

Symphony
November 6th, 2017

As a relatively new term, it is not surprising that there is confusion and ambiguity about the role and purpose of DevOps.

The traditional approach to development, maintenance and deployment assumes clear divisions between development, testing/QA and operations. Once the developers are done with coding, the code is passed on to the testing team to test the application; once testing is done and bug fixing is finalized by the developers, the application is passed on to operations for deployment and pushing to production.

This separation of responsibilities resulted in the building of 'walls' that quite often isolated groups which, in effect, worked against each other: the deployment team versus the development team, the development team versus the testing team, and so on. This setup also made the whole process very long and inefficient, with most of these steps performed manually.

Today we are witnessing a range of changes within the industry.


Management has changed, moving towards scrum/agile methodologies. Today, development is test-driven and uses automation and continuous integration (CI) as never before; deployments are mostly in the cloud, which allows the infrastructure to be scaled in seconds.

Most businesses today are trying to cut their time to market and are increasingly making smaller, incremental releases as opposed to large deployments planned months in advance. Yet attempting to plan and manage all of the risks involved in software development whilst making frequent releases is impossible, because technologies, setup, environment and other requirements are constantly changing. With this in mind, advance planning is more akin to shooting at a moving target. Instead, we focus on change itself: we build the system to accommodate and anticipate change, constantly monitoring and improving it whilst minimizing costs and increasing efficiency.[1]

Traditional network engineers or maintenance developers cannot support such an approach, and this is why we now have DevOps.

The introduction of DevOps has had two major impacts: cultural and technical changes.

Cultural Changes

It is always easier to change a technology stack or tools than mindsets and the way people approach or envisage different problems. DevOps represents a cultural framework in which change, deployments and deliveries happen all the time: the system can only work if changes are frequent.[2]

Today, DevOps engineers are usually not the ones creating a buzz around, or even mentioning, 'go live'; rather, delivery is a continuous process that happens every day, and continuous delivery/deployment is part of this culture.

Technical Changes

DevOps engineers see infrastructure (servers, networks, storage, etc.) as code, and they automate and 'code' it. A traditional network/system engineer would install and configure the servers, routers, switches and other equipment in the server room, deploy according to the deployment plan, report afterwards and so on, whereas DevOps engineers build automated scripts that template our infrastructure and operations. Once executed, such a script 'creates' networks, application servers, database servers, firewalls, VPNs, etc. After this, the script pulls the code, executes the tests, makes the necessary migrations and adjustments and finally runs the platform. The next update happens automatically once the code is pushed to git.
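As a minimal sketch of what 'infrastructure as code' means in practice, the example below uses Python with boto3 (the AWS SDK) to 'create' an application server from a script rather than by hand; the AMI ID, key pair and tags are hypothetical placeholders, and in reality a dedicated tool such as Terraform or Ansible would more likely be used.

    # Minimal infrastructure-as-code sketch using boto3, the AWS SDK for Python.
    # The AMI ID and key pair name below are illustrative placeholders.
    import boto3

    ec2 = boto3.resource("ec2", region_name="eu-west-1")

    # Describe the server in code instead of configuring a machine by hand;
    # running the script "creates" the application server.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="deploy-key",             # placeholder key pair
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "app-server"}],
        }],
    )
    print("Provisioned:", instances[0].id)

Because the whole environment is described this way, the infrastructure evolves by rerunning or extending the script rather than through manual changes in a server room.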

DevOps engineers have introduced various tools that enable automation and integrated, faster deliveries. They also monitor platforms and take care of optimizing cloud hosting costs.
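As a trivial sketch of the monitoring side, assuming a hypothetical /health endpoint, a check like the one below could run on a schedule and raise an alert when the platform stops responding; real setups rely on tools such as Nagios or the ELK stack instead.

    # Minimal platform health-check sketch; the URL is a hypothetical placeholder.
    import urllib.request

    HEALTH_URL = "https://platform.example.com/health"  # placeholder endpoint

    def check_health(url: str) -> bool:
        """Return True if the platform answers with HTTP 200 within 5 seconds."""
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.getcode() == 200
        except OSError:
            return False

    if __name__ == "__main__":
        if not check_health(HEALTH_URL):
            # In a real setup this would page on-call staff or post to chat.
            print("ALERT: platform health check failed")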

The magic recipe for a great product is to have all of the required ingredients in balance and mixed well together. One of the main ingredients is the DevOps sauce, which makes sure that everything is bound together, properly baked and served to perfection. We believe that having a good recipe and good ingredients is not the only concern: you also need to make the baking and serving processes flawless.

Symphony and DevOps

As with software engineering, Symphony follows cutting-edge trends within the DevOps industry. Our DevOps engineers are highly skilled, with years of experience across a range of tools and technologies (AWS services such as Elastic Beanstalk and CodeDeploy, Ansible, Nagios, ELK, Docker, Chef, Puppet, Jenkins, CircleCI, Terraform, Codacy, Bitbucket/GitHub and so on).

Our DevOps teams are not isolated. They are members of the engineering teams and work together with software engineers and QA engineers towards the same goal. They participate fully in the development process and have the same influence over decisions as any other team member.

A typical Symphony DevOps engineer is:

- a team player;
- someone who keeps their life in the cloud;
- connected to no fewer than six machines at any one time;
- a security freak;
- passionate about the fact that they actually code the infrastructure/hardware resources;
- someone who tries to automate everything in their life (literally everything that happens more than once).

Notes:

[1] Reaction to a cost increase or an inefficiency is almost immediate, due to the fact that we have short cycles and intensive communication.

[2] Facebook.com, for example, makes a minor release every business day and a major release on Tuesdays. Flickr has 10+ deployments each business day!

Author: Dino Osmanovic, CTO

Contact us if you have any questions about our company or products.

We will try to provide an answer within a few days.