DevOps Trends for 2021

Traditionally, at year end I like to note the most significant trends I see unfolding in the DevOps world. I did not know the level of disruption that Covid would cause when I put “Remote Asynchronous Workforce” on last year’s list. As we now know, it actually turned out to be the #1 trend of 2020, which to me was the brightest spot of an otherwise grim year.

Now, heading into 2021, here are the trends that I would like to highlight:

  1. Bundles and Software Bills of Materials. The industry is actively looking to automate push-button deployments while preserving transparency and traceability of software components. On the one hand, we want a black box containing the whole suite of what we are deploying. On the other hand, we want to be able to quickly replace a component whose security or stability has been compromised. That is why bundles are so important. Read more about bundles in my recent post here.
  2. Complexity vs Simplicity paradox. Business wants simplicity and quick roll-outs of ad-hoc software components. To achieve that, they are actively looking at so-called “no-code” or “low-code” tools. Similarly, in the DevOps world, “Serverless” or “No-Ops” tools are becoming increasingly popular. At the same time, this approach creates a huge scalability gap between initial prototyping and production deployments. This problem is only starting to unfold, and I expect we will be hearing more about it in 2021.
  3. Vertical integration in DevOps vs “Best Tool for the Job”. The philosophy of DevOps tooling is currently split between companies like GitLab, who are pushing for a single suite of tools to handle all your DevOps needs, and many others who build specialized tools that do a better job in their niches. I wrote a post about this some time ago here. What I currently observe is an increased marketing effort from companies advocating vertical integration to gain market share. I believe that at the end of the day this is harmful to overall software quality, and that the “Best Tool for the Job” approach will be the preferred one. But in any case it will be interesting to observe the community’s reaction to this marketing push.
  4. Widening gap between high performers and low performers. I am not sure if this will happen in 2021 or a little later, but there is a growing sense of urgency for low performers to do better or be wiped out by the competition. Essentially, the trend we are waiting for is a mass migration of low performers (in DevOps terms) to modern tooling and modern philosophies. This creates a massive challenge and a massive opportunity at the same time for DevOps practitioners.
  5. Kanban. Kanban is often more appropriate than Scrum for DevOps-oriented organizations. Still, the adoption level of Kanban remains below that of Scrum. However, I see signs that this is changing. People enjoy Kanban boards and gradually move to adopt wider Kanban principles (starting with things like WIP limits).
  6. CI is the new Version Control – CD will come next. These days it is hard to imagine a software project without version control, and starting one is as simple as typing “git init”. Now, with a Dockerfile, the Continuous Integration piece is not much harder. So essentially, every piece of software now relies on working Version Control and its Continuous Integration pipeline. These are pretty much the two things you do right away when you start a software project, and I believe this trend will solidify in 2021. Continuous Delivery will come later, but it will come inevitably. See my post on best CI/CD practices for more details on the current state of things.

I believe these are the most important trends happening right now in the DevOps world. Happy 2021 – I truly hope it will be much happier than 2020!

Reliza Hub: Individual vs Bundled deployments

I am pretty excited to announce a new feature we just added to Reliza Hub. As requested, it is now possible to choose a deployment strategy per instance – that is, to use either an Individual or a Bundled deployment strategy.

You can switch between deployment types on the Instance screen, after expanding instance settings:

Switch between deployment types on Reliza Hub

Deployment types function as follows:

  • Individual deployment type – the latest approved project release per instance will be retrieved by the getlatestrelease command of the Reliza Go Client, regardless of which bundles, if any, are assigned to this instance.
  • Bundle deployment type – when you specify an instance id on the getlatestrelease command or use the getmyrelease command of the Reliza Go Client, only approved project releases from the bundles assigned to the instance in the “Product Releases” section will be returned.

Therefore, if you use “Individual” deployment, your project components are deployed individually as they become ready and approved, regardless of what is happening with the other projects. Note that you can still pin all project releases to be taken from a specific release branch.

If you use “Bundle” deployment, only releases that you bundled into specific product releases are deployed together. So your per-project development may advance, but if project releases are not part of bundles, they will not get deployed.
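To illustrate the difference, here is a toy sketch of the two selection strategies in Python. The names and data structures are purely illustrative – this is not the actual Reliza Hub data model or API.

```python
# Toy data: approved releases per project, ordered oldest-first.
approved = {
    "backend":  ["1.0.0", "1.1.0", "1.2.0"],
    "frontend": ["2.3.0", "2.4.0"],
}

# Bundles (product releases) pin specific project releases together.
bundles = [{"backend": "1.1.0", "frontend": "2.3.0"}]

def individual(project: str) -> str:
    # Individual: latest approved release, regardless of bundles.
    return approved[project][-1]

def bundled(project: str) -> str:
    # Bundle: only releases that are part of the latest assigned bundle.
    return bundles[-1][project]

print(individual("backend"))  # 1.2.0 - newest approved backend release
print(bundled("backend"))     # 1.1.0 - backend release pinned in the bundle
```

Note how the backend may advance to 1.2.0 individually, while the bundled strategy keeps serving 1.1.0 until a new product release includes a newer backend.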

I hope this makes sense – feel free to ask me questions about it. I will also try to prepare a video describing this functionality a little later.

DevOps Bundles – New Name of The Game

The way we deploy software for the Web has come a long way: from bare metal to VPCs, to on-demand VMs with SOA, to microservices. The latest big game in the DevOps world was container orchestration, which is largely being won by Kubernetes and its ecosystem.

What is coming next? I am starting to believe that the next big thing for DevOps is how we approach bundles. The problem statement here is: “How do we replicate deployments between environments?” For example, how do we test exactly what we have in production? Or how can we explain to a customer what exactly we are going to deploy?

This problem grew exponentially with the push to microservices. That is because monoliths are more or less trivial to track, but microservices are not. Another idea that contributes to the importance of bundles (and that also seems to be on everyone’s mind these days) is spawning quick and lean integration environments on demand.

For example, during a CI/CD run we want to have our full stack running to do integration tests. Or we want to allow developers or QA to quickly spawn a temporary environment for testing. The goal here is to allow such environments to be created in minutes rather than hours, and to allow their destruction just as easily. At the same time, those environments should closely represent production environments so that our tests are valid. If achieved, this would be efficient in terms of both cloud compute costs and developer time.

Here come bundles! In a nutshell, bundles are objects that define what we spawn and how the stack we spawn relates to what goes to production later. Let’s discuss the existing cloud native options for bundles and where they may evolve:

  1. GitOps – keeping infrastructure configuration in Git is essential and has been considered a best practice for quite some time. However, it is usually not enough in isolation to properly define a full bundle. One issue is that deployment on top of a GitOps commit point may still be complex. Another is that modifications are hard – dealing with many Git branches, for example, may be problematic without additional tools.
  2. Helm charts – Helm is a great tool that can be used on top of GitOps and generally allows for bundling cloud native applications well. The main issue with Helm is that there is no good way to manage different versions of components and configuration parameters between environments. Again, we either have to deal with excessive Git branching or create multiple values files, and maintaining all those differences and version changes is manual.
  3. Cloud Native Application Bundles (CNAB) – an opinionated spec for packaging application bundles. I find the idea great, but the specification is a little heavy for my taste and tooling is lacking. For that reason I prefer the CycloneDX SBOM spec, whose new version 1.2 actually allows for bundling cloud native packages.
  4. HashiCorp recently announced Waypoint, which is positioned as a platform-independent bundle manager. Note that this product is too fresh to draw any conclusions, but it uses HCL (like Terraform) and so far looks highly opinionated to me.

What are we doing at Reliza with all this? Recently, we introduced functionality to download instance or revision specs from Reliza Hub in CycloneDX JSON format. Such CycloneDX exports can later be parsed by the Reliza Go Client (a CLI tool) to update specification files such as docker-compose or Kubernetes YAML files.

Therefore, we are trying to use CycloneDX as a medium for bundling deployment data and to extend existing tooling (such as Helm or compose files) to automatically pick up bundle configuration. This solves the problem of maintaining such files manually across multiple version control branches.
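For a flavor of what this looks like, here is a minimal Python sketch that reads the component list out of a CycloneDX-style JSON BOM. The BOM shown is simplified (real exports carry many more fields, and in our tooling the parsing is done by the Reliza Go Client, not Python), and the component names are made up for the example.

```python
import json

# Minimal CycloneDX-style BOM: components carry a name and a version.
bom_json = """{
  "bomFormat": "CycloneDX",
  "specVersion": "1.2",
  "components": [
    {"type": "container", "name": "myapp-backend", "version": "1.2.0"},
    {"type": "container", "name": "myapp-frontend", "version": "2.4.0"}
  ]
}"""

bom = json.loads(bom_json)

# Map component name -> version; this is the data that would be
# substituted into a docker-compose or Kubernetes spec.
versions = {c["name"]: c["version"] for c in bom["components"]}
print(versions)
```

The key point is that the BOM acts as a single source of truth for component versions, so the deployment spec files never need to be edited by hand.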

Export as CycloneDX BOM option on Reliza Hub

These are early stages for cloud native bundles; a lot of work is happening, and updates should be published soon. If you have thoughts, ideas, or stories about what worked or did not work for you in the past, I would love to hear and discuss those experiences.

Hiring Lead Developers – Seeing The Project End State

I was recently reflecting on hiring and team management decisions. Particularly, how can you tell that a developer would make a good lead?

My current thinking on this is as follows. The most important differentiator of whether a developer will make a good lead is the ability to see the project end state. While junior and intermediate developers are sort of coasting along, senior and lead developers are the ones who can formulate what exactly they are building.

The tricky part is understanding which bells and whistles will actually make it in the near future and which ones will not. In other words, the key is to formulate the bare minimum for the project that more or less fits business requirements and is at the same time achievable by the team. Developers who have a good sense of this will likely make good leads.

It is interesting that lead developers may not be the best in terms of actual coding or technology expertise. So it is quite normal for teams to have a management path for developers (becoming a lead) and what I call an architect path (becoming a technology expert). I know the terminology may be confusing, as the terms lead and architect are frequently mixed up in the industry.

Essentially, lead developers are the ones who envision the end state and can schedule work from the perspective of that end state. Therefore, lead developers are good at prioritizing work – simply because they know what they are building.

So how can we interview lead developer candidates? I would say, describe a hypothetical project (based on some specific project that you know really well) and ask what absolutely needs to be done to launch it. Ask about everything from recommended technology to desired team size and expected time frame. The answer you are looking for is a very concise description of the specific steps needed. Certain questions posed by a candidate are also a good indicator – e.g., trying to gauge performance characteristics or the friendliness of early-stage users.

Analyzing the candidate’s answer, you need to match the steps and technologies proposed to the actual requirements as stated. A potential red flag for me would be proposing heavy stacks for early-stage startup projects at the prototyping phase (think Kafka for a blog platform). Similarly, proposing light stacks for difficult problems (think WordPress for an ML solution), or mismatches in team size and time estimates, should also be considered red flags. So what you are looking for is an answer that fits the problem, along with supporting questions that help the candidate refine and better understand the problem. A good answer shows that the candidate thinks about the end state of your project and thus would make a good lead.

Running k3s on Windows with WSL2

The original instructions were for microk8s, but I had glitches with its operation on Windows, so I replaced microk8s with k3s and it worked.
The installation goes as follows:
1. Install WSL2 – instructions here: (note that you need the specific versions of Windows 10 referenced in the document). One of the key bonuses right away with WSL2 – you can now have full Docker running on Windows Home!

2. One thing I recommend right away after installing WSL2 – set some sane memory and CPU limits for it, otherwise it quickly becomes a resource hog. You can read how here –

3. Set up your favorite Ubuntu 18.04 from the Microsoft Store. Here comes a surprise – it ships with systemd disabled and snaps not working.

4. To get systemd working I tried a bunch of articles, and finally this one worked – (I would appreciate feedback on this one, since I had a few other things on my system by the time I tried it – but hopefully it works right away).

5. Finally, install k3s as usual, as described at .

6. Note that to reach the k3s cluster from your browser or PowerShell on the host, you will need the IP of the Ubuntu VM, which can be found, for example, by running ifconfig.

List All Docker Containers with IP Addresses

A solution for one container can be found in this Stack Overflow answer:

The solution for all containers is below:

docker ps -q | xargs docker inspect -f '{{range .NetworkSettings.Networks}}{{$.Name}}{{" "}}{{.IPAddress}}{{end}}'

Versioning Feature and Release Branches

For some time I was thinking about how to solve a use case where we track both feature and release branches in Reliza Hub.

The problem encountered is related to versioning. Here it is in a nutshell: if all branches share the same versioning schema, then releases overlap, and by simply looking at versions we don’t know which is which. If we only dealt with release branches, the solution would be what I presented earlier in this blog – namely, separating branch versions by Minor in the SemVer case or by Month in the CalVer case.

However, feature branches complicate this – there are just too many of them for Minor separation. These days a frequently used pattern is to create a feature branch per task ticket or per pull request, and then merge it into master (or, in the case of urgent fixes, also into a release branch). The conventional name for a feature branch is just the ticket it references, i.e. TICKET-134 or PR-12. So we end up with lots of such feature branches with unclear versioning.

The way we started solving this with Reliza products is by adding a dedicated versioning component, Branch, so that our default versioning schema for feature branches becomes Branch.Micro. A sample release version for our feature branch could be TICKET-134.5 – meaning the 6th release of the TICKET-134 branch (with Micro starting at 0).

There are of course variants; you could do something like YYYY.0M.Branch.Micro – this way you can also track the creation month of the branch. The main benefit of this approach is that feature branch releases do not mix with release branch releases, so we won’t accidentally pull them into non-test environments.
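The two schemas above are simple enough to sketch in a few lines of Python. The function name and signature here are illustrative, not part of any Reliza tool:

```python
from datetime import date

def feature_branch_version(branch: str, micro: int, created: date = None) -> str:
    """Build a feature-branch version string.

    Uses the Branch.Micro schema by default, or YYYY.0M.Branch.Micro
    (zero-padded month, CalVer-style) when the branch creation date
    is supplied.
    """
    if created is not None:
        return f"{created.year}.{created.month:02d}.{branch}.{micro}"
    return f"{branch}.{micro}"

print(feature_branch_version("TICKET-134", 5))               # TICKET-134.5
print(feature_branch_version("PR-12", 0, date(2020, 6, 1)))  # 2020.06.PR-12.0
```

Either way, the branch name embedded in the version makes feature-branch releases immediately distinguishable from release-branch versions like 1.2.3 or 2020.06.1.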

Finally, we are going to introduce branch approvals, so that we can explicitly indicate which branch should be used per environment.

The resulting release selection structure on Reliza Hub is going to look as follows:

  1. Identify which environment our instance belongs to
  2. Identify the project branch currently approved for this environment
  3. Identify the most recent release from this branch approved for this environment – this is the release version returned to the UI and to the integrated CD system.

This internally creates a 2-step selection process instead of the 1-step process we used previously. The 1-step process required clients to explicitly specify the desired branch, which was not obvious in many cases (as shown above). The 2-step selection adds logic complexity to Reliza Hub but removes complexity from the clients, who simply ask for the latest approved release per environment and get it, without worrying about branches.
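The selection steps can be sketched with a toy in-memory model. All names and data structures here are illustrative – this is not the actual Reliza Hub schema:

```python
# Step 1 input: instance -> environment.
instance_env = {"instance-42": "PROD"}

# Step 2 input: environment -> currently approved branch.
branch_approvals = {
    "PROD": "release/1.1",
    "STAGING": "release/1.2",
}

# Releases ordered oldest-first; "approved" lists environments.
releases = [
    {"branch": "release/1.1", "version": "1.1.2", "approved": {"STAGING", "PROD"}},
    {"branch": "release/1.1", "version": "1.1.3", "approved": {"STAGING", "PROD"}},
    {"branch": "release/1.2", "version": "1.2.1", "approved": {"STAGING"}},
]

def latest_release(instance_id: str) -> str:
    env = instance_env[instance_id]    # step 1: resolve environment
    branch = branch_approvals[env]     # step 2: resolve approved branch
    candidates = [r["version"] for r in releases
                  if r["branch"] == branch and env in r["approved"]]
    return candidates[-1]              # step 3: most recent approved release

print(latest_release("instance-42"))
```

Note that the caller supplies only an instance id – branch resolution happens entirely on the server side, which is exactly the complexity shift from clients described above.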

We are currently actively working on implementing the above logic, and it should hopefully be live by the end of June 2020. Meanwhile, I would appreciate any comments or feedback via DM on my LinkedIn.

7 Best Practices of Modern CI/CD

This is a summary of my research into modern CI/CD practices while working on Reliza Hub. The list is rather opinionated, but I try to explain why I hold specific opinions. Finally, I’m making this an ordered list, but it is not actually sorted by importance.

So, let’s start:

1. Separate CI and CD – Use asynchronous pipelines

Too many times I see convoluted multistage CI+CD pipelines, all in one, with a bunch of tests and approvals. Such super-pipelines hit multiple stages and culminate in production deployments.

While this is still much better than a manual, no-CI/CD approach, it is not great either. The key problem is pipelines stuck in the middle, waiting for approvals or tests to complete. This quickly creates an unmanageable backlog of work-in-progress pipeline runs. Different versions are now tied together, and it is not clear which knot to untie first.

Separating CI and CD means that you have small pipelines that build artifacts. Then you have other small pipelines that perform tests, and yet others that do deployments.

Sometimes you can still mix things. For example, you may mix some quick unit tests into your build CI. But you definitely shouldn’t include your production-grade load testing in there.

As a result of the separation, you get multiple small testing, approval, and deployment components. They need to run independently and asynchronously – meaning that you also need a system of record to store results and progress (such as Reliza Hub).
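To make the asynchronous model concrete, here is a minimal sketch of such a system of record. It is purely illustrative – the class and method names are invented for this example and do not reflect Reliza Hub’s actual API:

```python
from collections import defaultdict

class ReleaseLedger:
    """Toy system of record: each small pipeline posts its result here
    instead of blocking inside one long super-pipeline."""

    def __init__(self):
        # version -> {stage: status}
        self.events = defaultdict(dict)

    def record(self, version: str, stage: str, status: str) -> None:
        self.events[version][stage] = status

    def deployable(self, version: str) -> bool:
        # The CD pipeline queries the ledger instead of waiting in-line
        # for builds, tests, and approvals to finish.
        e = self.events[version]
        return (e.get("build") == "ok"
                and e.get("tests") == "ok"
                and e.get("approval") == "granted")

ledger = ReleaseLedger()
ledger.record("1.4.0", "build", "ok")        # posted by the build pipeline
ledger.record("1.4.0", "tests", "ok")        # posted later by the test pipeline
ledger.record("1.4.0", "approval", "granted")
print(ledger.deployable("1.4.0"))  # True
```

Because every stage only writes its own result, no pipeline run ever sits half-finished waiting on another – the deploy decision is a simple query against accumulated state.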
