DevOps, DataOps in 2020 – Tectonic Shift

2020 is a remarkable year because of how things are going in the DevOps and DataOps fields. Also let me mention the DataOps challenges I listed a year ago here.

To see where we are now, I remind you of DORA’s State of DevOps 2019 report (get your copy here if you haven’t done so yet) – there were a few other similar studies, and they generally outline the same trend: the share of high performing software companies is in the range of ~5-15%, while low performing companies are at ~20%. What is stunning is that there is an orders-of-magnitude difference between low and high performers.

As software becomes the core of every modern organization, this now translates into a life or death situation for businesses. Medium performers are not spared either, as in many aspects they are also far behind high performers. A business can only sustain itself if it manages to embrace high performing software practices, starting with the mindset. This is what I’d like to call the “Tectonic DevOps Shift” (many people refer to this process as the 4th Industrial Revolution).

It is important to note that high performers are also “not done yet”, meaning that there is still large room for improvement for most of them, and it will take some time for everybody to settle on best practices. But it looks like we’re getting close to at least a good understanding of what those best practices are.

So here are my thoughts on what’s most important to embrace at this time and the immediate trends in DevOps and DataOps that I see (this will be a bit technical in parts):

1. Continuous Integration (CI) must be fully containerized

Essentially, the whole build process must be described in a single Dockerfile and run with a single docker build command – thanks to multi-stage Docker builds. For example, below is a sketch of how a build-stage Maven Dockerfile could look. This simplifies CI scripts to the bare minimum and makes moving between CI platforms trivial.
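
A minimal sketch of what such a multi-stage Dockerfile might look like for a Maven project (the image tags, paths and jar name here are illustrative, not taken from any particular project):

```dockerfile
# Build stage: compile and package entirely inside the container,
# so CI only needs to run `docker build`.
FROM maven:3.6-jdk-11 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -B dependency:go-offline   # pre-fetch dependencies for layer caching
COPY src ./src
RUN mvn -B package -DskipTests

# Runtime stage: copy only the built artifact into a slim image.
FROM openjdk:11-jre-slim
COPY --from=build /app/target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

With this in place, the CI script shrinks to roughly a `docker build` plus a push of the resulting image – which is exactly what makes switching CI platforms trivial.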

2. Continuous Delivery (CD) must be fully containerized as well 

This is similar to the previous point. However, I made it a separate point, since this could be done later (or not at all for older projects nearing retirement). Essentially, if your CI is containerized, you can still build your artifacts inside containers and then extract them to legacy non-containerized environments.
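
As a hedged illustration (the stage name, image tag and artifact path follow the hypothetical Dockerfile sketched above), the extraction step can be as simple as:

```sh
# Build only up to the build stage, then copy the artifact out of the image
# so a legacy, non-containerized environment can consume it as before.
docker build --target build -t myapp-build .
docker create --name myapp-tmp myapp-build
docker cp myapp-tmp:/app/target/app.jar ./artifacts/app.jar
docker rm myapp-tmp
```

From there the jar is shipped with whatever legacy deployment mechanism is already in place.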

3. Container orchestration as a key DevOps platform

The widespread adoption of Kubernetes, the popularity of Docker Compose for POCs and demos, and the use of Docker Swarm for smaller projects allow for quick switching between various hosting options and cloud providers once containerization is fully adopted. Arguably, this field still has a lot of room to grow due to the complexity of existing tools and the frequently messy mix of code and YAML – but the trend is here to stay and this should be sorted out eventually.
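
For the POC/demo end of that spectrum, a Compose file barely longer than this (service names and images are made up for illustration) is often all that is needed:

```yaml
# docker-compose.yml – a throwaway demo stack reusing the same Dockerfile as CI
version: "3.7"
services:
  app:
    build: .                 # the same single Dockerfile described in point 1
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example   # fine for a local demo, never for real secrets
```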

4. Security is bigger than ever

The focus on cyber-security is very strong, and the DevSecOps field is a huge part of DevOps these days. To me, the main pain point is still how we move secrets to environments and manage those secrets. A lot of improvement has been made recently with tools such as HashiCorp Vault and others, but there is still visible room for improvement.
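
As a sketch of the Vault-style flow (the paths and field names are illustrative and assume a KV secrets engine mounted at `secret/`):

```sh
# Store the secret once, centrally...
vault kv put secret/myapp/db password='s3cr3t'

# ...then inject it into the environment at deploy time, instead of baking it
# into images or scattering it across CI variables.
export DB_PASSWORD="$(vault kv get -field=password secret/myapp/db)"
```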

On the other hand, with quantum computing on the horizon there is an emerging demand to switch to quantum-safe encryption. Most people understandably prefer to ignore this challenge for now, but it is a huge and growing problem.

5. Templating 

Standard Dockerfiles, standard Terraform scripts, standard cookbooks and playbooks – templates make it faster and easier to solve typical problems. Several organizations are already working to make this easier and more organized, but there is still room to grow.
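
A small example of what I mean, using Docker build arguments (the defaults here are illustrative) – one template Dockerfile can serve many similar services:

```dockerfile
# Template Dockerfile: per-service details are passed in as build args.
ARG BASE_IMAGE=openjdk:11-jre-slim
ARG JAR_FILE=target/app.jar

FROM ${BASE_IMAGE}
ARG JAR_FILE                      # re-declare to use the value inside the stage
COPY ${JAR_FILE} /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Each service then builds with its own `--build-arg` values instead of copy-pasting yet another Dockerfile; the same idea applies to Terraform modules, Chef cookbooks and Ansible playbooks.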

6. DataOps – Data is like software but more fluid

The main requirement for DataOps projects is an additional level of agility on top of traditional DevOps pipelining. Essentially, the idea above about the project build process living in a single Dockerfile serves that purpose of agility. As data is fluid, there should be an expectation of updating pipelines quickly and efficiently.

In this sense, I hear many consultants saying that it takes a large effort to build pipelines, for the sake of ease of use and savings in the end. To me that doesn’t cut it any more – it’s missing the step of building the pipelines themselves for agility, meaning that if you need to change a pipeline later, you should be able to do so quickly. Achieving that level of excellence brings you closer to what a successful DataOps practice needs.

7. Embracing adult organizations – moving to remote asynchronous workforce

I have a strong belief that elementary-school-like organizations, which believe that everybody must be in the same office to perform, cannot really win the market when facing strong adult competitors. In this sense, adult organizations need to build trust among their people so they can perform asynchronously and remotely, which leads to huge advantages in many senses. This process involves some degree of learning curve, but in my experience a remote workforce is already able to perform better in many instances.

8. Breaking silos by embracing transparency and clear communication lines 

This builds on the previous point about adult organizations – if we already have an adult organization, this now allows for transparency and a greater level of responsibility and commitment from every member of the organization. This finally leads to developers caring about the end-user experience and not only about the binary marking of a ticket as “fixed”. I highly recommend the “CEOs Should Tell It Like It Is” post by Ben Horowitz, which further describes this idea.

9. Embracing DevOps and DataOps as core

Since terms like Agile, DevOps and DataOps are so broad, some organizations trying to get better at releasing software, and at business in general, are lost when it comes to what to do and where to start. There are two extremes that I see: the 1st – everything we do must be Agile – which sometimes leads to comic situations where people lose common sense in the process; the 2nd – DevOps is something about infrastructure, CI/CD, security (essentially, technical stuff), and it’s not our core, so we’d better hire consultants to do all that and focus on our core.

Both of those extremes usually don’t work well. My take – always start with common sense; DevOps starts with mindset, and mindset is the core of any business. Good consultants can quickly bring you up to speed and show you best practices and the state of the art, but they can’t run the business for you.

At the same time, it is OK to outsource certain technical aspects (e.g., infrastructure monitoring and parts of incident management), as long as the core thinking about how everything is wired together stays in-house. Also, my advice is: if you don’t know where to start, start with lead times – measuring the time between a feature being dev complete and it being deployed to production. If you have instances where such time is measured in months, your organization is in big need of change.
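
A crude way to get started with that measurement, assuming you can pull two timestamps (say, the merge commit time and the production deployment time) from your existing tooling – the values below are made up:

```sh
# Lead time in days between "dev complete" and "deployed to production".
DEV_COMPLETE="2020-01-10 14:00:00 UTC"     # e.g. merge commit timestamp
DEPLOYED_TO_PROD="2020-03-05 09:30:00 UTC" # e.g. production deploy timestamp

# GNU date converts the timestamps to epoch seconds for simple arithmetic.
echo $(( ( $(date -d "$DEPLOYED_TO_PROD" +%s) - $(date -d "$DEV_COMPLETE" +%s) ) / 86400 )) days
```

Even this back-of-the-envelope number, tracked over a handful of recent features, tells you whether you are looking at days or at months.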

Summary thoughts

In summary, I’d like to point out that DevOps is half culture and half technical stuff – and that is roughly how my points above got split. I’d also say that culture is arguably more important, since the right culture eventually leads to the right technical decisions, but the opposite is not always true. So always think about culture and embrace that DevOps is at the core of any software organization – which is essentially any organization out there these days 😉
