Jenkins Builds with Kaniko and Reliza: Tutorial

Here I would like to present a complete tutorial on how we integrate Jenkins kaniko builds with Reliza Hub. As a base I will use my toy project – the Mafia game. Specifically, we will use the UI project – Mafia Vue – to do the builds.

Quick summary of what we are going to do in the course of this tutorial:

  • Deploy Jenkins from scratch on Kubernetes
  • Configure Reliza Integration Plugin on Jenkins
  • Configure Kaniko Docker Image Build on Jenkins with Reliza Integration Plugin – all in declarative manner
  • Detailed walk-through of the sample Jenkinsfile I used

If you have questions during this tutorial, feel free to reach out to me via the DevOps and DataOps Community on Discord.

Note that if you are on Windows, I recommend using either cygwin or WSL for shell commands, since PowerShell may not work as expected.

Deploying Jenkins on Kubernetes

Let’s start with deploying Jenkins on Kubernetes. First of all, note that you don’t need a sophisticated Kubernetes cluster here. A single node k3s cluster would work perfectly fine for the scope of this tutorial (and you can even have one on Windows).

Once your Kubernetes cluster is ready, you would also need to install helm. If you used Rancher Desktop to setup your Kubernetes cluster, you already have helm, otherwise use one of the options to install helm listed on the official website. Note that for helm to function properly, KUBECONFIG environment variable must be configured correctly – i.e., here is how to set it for microk8s.
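As a quick sketch (the path here is an assumption – adjust it to wherever your cluster's kubeconfig actually lives), pointing helm at the right cluster is just a matter of exporting the variable:

```shell
# Point helm/kubectl at your cluster's config file (path is illustrative).
# For microk8s, `microk8s config` prints the contents to put in this file.
export KUBECONFIG="$HOME/.kube/config"
```

After this, helm commands will talk to whichever cluster the exported config points to.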

Once helm is ready, deploying Jenkins is fairly straightforward:

kubectl create ns jenkins
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins jenkins/jenkins --version 3.10.2 --set 'controller.installPlugins={kubernetes:1.31.1,workflow-aggregator:2.6,git:4.10.1,configuration-as-code:1.55,reliza-integration:0.1.17,kubernetes-client-api:5.10.1-171.vaa0774fb8c20}' -n jenkins

Note that while the last instruction could be as simple as helm install jenkins jenkins/jenkins -n jenkins, I added a few parameters to avoid possible conflicts with Jenkins plugin versions and also to install the Reliza Integration plugin that we will use in the course of this tutorial. For more options when installing Jenkins, check out the official Jenkins instructions here.

After the Jenkins installation is complete, helm will print a command to retrieve the admin password, which you should use. Conveniently, helm also prints an option to port-forward the Jenkins installation so you can access the UI. Specifically:

kubectl --namespace jenkins --address=0.0.0.0 port-forward svc/jenkins 8080:8080

Note that I have added the --address=0.0.0.0 flag here to make port-forwarding listen on all interfaces, which is required, for example, on k3s running via WSL. Be careful with this option – in particular, do not use it on cloud installations for security reasons.

Note that if your cluster has an ingress controller, another option to expose Jenkins would be to use an Ingress resource. For example, we could add a simple ingress like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
  namespace: jenkins
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins
            port:
              number: 8080

You should now be able to access the Jenkins UI in the browser and log in using the username admin and the password obtained from the helm status instructions.

Fork GitHub Repository

Once Jenkins is deployed, go ahead and fork the Mafia Card Shuffle UI project on GitHub:


This is the project we will be building on Jenkins over the course of this tutorial.

Set up organization and projects on Reliza Hub

This step is similar to the one from my complete Helm CD tutorial – so simply refer to step 2 in that tutorial.

Set up Reliza Integration Plugin

If you used the helm Jenkins installation I described above, you already have Reliza Integration installed. Otherwise, log in to Jenkins and install the Reliza Integration Plugin. For this, from the Jenkins Dashboard click on ‘Manage Jenkins’ -> ‘Manage Plugins’. Select the ‘Available’ tab and search for ‘Reliza’. Once found, check ‘Reliza Integration’ and click ‘Install without restart’.

Next, for the purposes of this tutorial, create an Organization Read-Write key on Reliza Hub. For this, in Reliza Hub, go to the Settings page, click on the plus circle icon under Programmatic Access and select Org-wide Read-Write as the key type.

Once Reliza Hub generates a key, you need to register it on Jenkins. For this, in Jenkins, go to Dashboard -> Manage Jenkins -> Manage Credentials. Click on the Jenkins (global) store. Click on Global credentials (unrestricted). Click on ‘Add Credentials’ on the left.

Select Kind to be ‘Username with password’ and Scope to be ‘Global’. Insert the API ID from Reliza Hub into the Username field and the API Key into the Password field. For the ID field, enter ‘RELIZA_API’.

Set up Docker Registry

While you can use any Docker registry, the easiest one to use for this tutorial is the one built into Reliza Hub. To set it up, follow the instructions from step 3 of my Helm CD tutorial here.

Notice that Reliza Hub creates both a public and a private registry for your organization. For this tutorial I will use the public one, so my registry URI looks like:

To set Docker registry credentials on Jenkins, we first need to create a config.json file as follows (don’t forget to replace the docker-server URI with yours and also set proper docker-username, docker-password and docker-email fields):

kubectl create secret docker-registry docker-credentials \
    --docker-server="" \
    --docker-username="" \
    --docker-password="" \
    --docker-email="" --dry-run=client -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d > config.json
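For reference, the generated config.json follows the standard Docker client auth shape – roughly like the sketch below (all values are placeholders, not real credentials; the auth field is the base64 encoding of username:password):

```json
{
  "auths": {
    "<your-registry-uri>": {
      "username": "<docker-username>",
      "password": "<docker-password>",
      "email": "<docker-email>",
      "auth": "<base64 of username:password>"
    }
  }
}
```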

Once the config.json file is generated, we need to set it as a secret on Jenkins, which can be done as follows:

  1. From the Jenkins Dashboard, click ‘Manage Jenkins’, then click ‘Manage Credentials’
  2. Navigate to global domain (for the purposes of this tutorial)
  3. Click ‘Add Credentials’ and choose:
    • Kind: Secret File
    • Scope: select proper scope
    • File: select config.json file on your local filesystem
    • ID: for the example Jenkinsfile used in this tutorial, input “docker-credentials”

Note that I’m using here slightly modified instructions from .

Set up Jenkins build with kaniko

Our project already has a Jenkinsfile compatible with this tutorial. Later on I will describe in detail what is going on inside this Jenkinsfile; for now we just want to make the build work. There are only 2 things you need to change in the Jenkinsfile in your forked repository:

  1. In the environment variable section, change the IMAGE_NAMESPACE variable to the actual registry URI you obtained from Reliza Hub
  2. In the withReliza directive, change projectId to the UUID of the Mafia Vue project on Reliza Hub for your organization

Now we are ready to set up a build job on Jenkins. In the Jenkins Dashboard click ‘New Item’. Enter a desired name, for example ‘Mafia Vue’, select ‘Pipeline’ as the type and click ‘OK’.

Optionally, in build triggers choose Poll SCM and set the schedule. For example, you can set ‘* * * * *’ in the schedule – essentially, this would poll SCM every minute – or you can use something like ‘H * * * *’ to poll every hour. Alternatively, it is also possible to configure a Git webhook, but describing this would be out of scope for the current tutorial. This step is optional, because you can still build manually even if no triggers are set.

Then, in the Pipeline section select ‘Pipeline script from SCM’. Select Git as your SCM and enter the URL of your forked repository. You may also need to add credentials if your repository is not public.

Choose the branches you would like to build – in my case I will leave the field at its default value to only build master, and will also uncheck the ‘Lightweight checkout’ box for the proper Git processing required by Reliza Hub.

The script path should stay at the default ‘Jenkinsfile’ – unless you chose to use some other location for the file.

Finally, save the pipeline.

You can now click ‘Build Now’ on the pipeline screen – this should build your image and push it to the Docker registry.

You can verify that the image was built by checking your project in Reliza Hub and observing the new release and its components. You may also pull the image to your local machine using docker pull.

Congratulations, the build is now working! In the next section I am not going to ask you to do any more work – rather, I will present a detailed description of how this Jenkinsfile works.

Detailed Walkthrough of our Jenkinsfile

While this section is largely optional, its aim is to describe everything we are doing in the course of the Jenkinsfile.

I will describe a specific revision of the Jenkinsfile here, in case there are further modifications and the line numbers shift. If that happens, I may update this tutorial at some point. But even if it is not updated, you can always check the latest state yourself.

Let us start with the first 4 lines, which contain the opening statements telling Jenkins that we are using a pipeline and that it should run on a Kubernetes agent:

pipeline {
    agent {
        kubernetes {
            defaultContainer 'alpine'

Notice that we also set the defaultContainer directive here – meaning that if a container is not set explicitly, commands will run in the alpine container.

The next line opens a yaml block with the yaml """ statement, which ends with another """ mark on line 33.

Within the yaml block goes the actual Kubernetes pod definition, which defines the pod that is going to run our build:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: shared-data
      mountPath: /shared-data
  - name: alpine
    image: alpine
    imagePullPolicy: Always
    command:
    - /bin/cat
    tty: true
    volumeMounts:
    - name: shared-data
      mountPath: /shared-data

Notice here that we call our pod kaniko, since kaniko is the tool of choice we will be using to build our container images. The reason I’m picking kaniko is that it does not require any special privileges to run, unlike plain docker or buildah. While buildah has some other advantages, such as support for multi-architecture builds, that is not required for the purposes of this tutorial – I may discuss it at some later time.

Our pod has 2 containers. The first is called kaniko and runs off the kaniko debug image. The reason I use the debug image is that it allows shell access inside the container – which is needed in the Jenkins context.

The second container is called alpine and runs off the latest image of Alpine Linux. We will use this container for all non-kaniko operations, such as Git commands.

I am also adding a volume named shared-data, which acts as shared storage between the two containers – so that both containers can write to it and read from it.

With that, we close our yaml block and add 2 closing curly braces to also close the agent section of our Jenkinsfile. Our agent is now fully defined.

Next goes the environment block, where I define the environment variables used in the actual build:

    environment {
        IMAGE_NAMESPACE = '<registry URI from Reliza Hub>'
        IMAGE_NAME = 'mafia-vue'
        RELIZA_API = credentials('RELIZA_API')
    }
This part should be fairly self-explanatory. The IMAGE_NAMESPACE variable is set to the root of our organization’s container registry. IMAGE_NAME is the actual image that we are building. RELIZA_API is resolved to the organization read-write key on Reliza Hub that we previously stored in Jenkins credentials under the ‘RELIZA_API’ ID.

Then goes the main part of our script – stages. Here we use only one stage, called ‘Build with Kaniko’ (line 42).

stage('Build with Kaniko') {
    steps {
        script {
            sh 'apk add git'
            env.COMMIT_TIME = sh(script: 'git log -1 --date=iso-strict --pretty="%ad"', returnStdout: true).trim()

Notice that Jenkins requires us to define a steps block, within which we define our script block.

The first thing we do here is add git to the Alpine Linux container, and then use it to compute the COMMIT_TIME environment variable for our latest commit.
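To see what this command yields outside of a Jenkins agent, here is a throwaway-repo sketch of the same git invocation (the temp repo and demo identity are purely illustrative):

```shell
# Build a throwaway repo and run the same command the pipeline uses
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
# Same invocation as in the Jenkinsfile:
COMMIT_TIME=$(git log -1 --date=iso-strict --pretty="%ad")
echo "$COMMIT_TIME"   # an ISO 8601 timestamp, e.g. 2021-12-01T12:00:00+00:00
```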

Next, on line 47, we open the withReliza wrapper:

withReliza(projectId: '28c3735d-a810-4a3f-9e3a-2a43932589b1') {

Once opened, this wrapper uses our RELIZA_API credentials to request the latest successful release for our checked-out branch and the supplied projectId from Reliza Hub. If such a release exists, the wrapper populates the LATEST_COMMIT environment variable with the corresponding commit hash. It also populates the VERSION environment variable with the next version obtained from Reliza Hub, and DOCKER_VERSION with a version string that is safe for tagging container images.

With the newly obtained LATEST_COMMIT details, we now go into an if block on lines 48-52, which uses this data to resolve Git changes:

if (env.LATEST_COMMIT) {
    env.COMMIT_LIST = getCommitListWithLatest()
} else {
    env.COMMIT_LIST = getCommitListNoLatest()
}

This if block relates to 2 helper functions defined at the very bottom of our Jenkinsfile, on lines 82-92:

String getCommitListNoLatest() {
  if (env.GIT_PREVIOUS_SUCCESSFUL_COMMIT) {
    return sh(script: 'git log $GIT_PREVIOUS_SUCCESSFUL_COMMIT..$GIT_COMMIT --date=iso-strict --pretty="%H|||%ad|||%s" -- ./ | base64 -w 0', returnStdout: true).trim()
  } else {
    return sh(script: 'git log -1 --date=iso-strict --pretty="%H|||%ad|||%s" -- ./ | base64 -w 0', returnStdout: true).trim()
  }
}

String getCommitListWithLatest() {
  return sh(script: 'git log $LATEST_COMMIT..$GIT_COMMIT --date=iso-strict --pretty="%H|||%ad|||%s" -- ./ | base64 -w 0', returnStdout: true).trim()
}

The first helper function – getCommitListNoLatest() – is used when no latest successful release was returned by Reliza Hub. It then tries to make use of Jenkins’ native GIT_PREVIOUS_SUCCESSFUL_COMMIT environment variable. If that is found, we take the log as the diff between the previous successful commit known to Jenkins and the current one. If not, we only include the last commit known to Git.

The second helper function – getCommitListWithLatest() – assumes that we got the latest commit from Reliza Hub, and uses it to compute the list of new commits going into the present build.
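To make the %H|||%ad|||%s format tangible, here is a small sketch that produces and decodes such a commit list in a throwaway repo (the repo and identities are illustrative):

```shell
# Two-commit throwaway repo
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "second"
# Encode the list the same way the helper functions do
COMMIT_LIST=$(git log --date=iso-strict --pretty="%H|||%ad|||%s" | base64 -w 0)
# Decoding shows one line per commit: <hash>|||<iso date>|||<subject>
echo "$COMMIT_LIST" | base64 -d
```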

Once all commit differences are resolved, our script reaches line 53, which has another if statement:

if (!env.LATEST_COMMIT || env.COMMIT_LIST) {

Essentially, we only proceed with the build if either LATEST_COMMIT is not found on Reliza Hub, or we have some COMMIT_LIST difference relative to that LATEST_COMMIT. Otherwise, we treat the build as a repeated one and skip it. Note that this logic is particularly useful in monorepos; I will discuss it in more detail in subsequent posts on the subject.
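The same gating condition can be sketched in shell terms (the function name and sample values below are made up purely to show the two ‘proceed’ cases and the one ‘skip’ case):

```shell
# Mirrors the Jenkinsfile condition: build if no prior release exists,
# or if there are new commits since the last known one.
should_build() {
  LATEST_COMMIT="$1"; COMMIT_LIST="$2"
  if [ -z "$LATEST_COMMIT" ] || [ -n "$COMMIT_LIST" ]; then
    echo "build"
  else
    echo "skip"
  fi
}
should_build ""      ""          # no release known yet -> build
should_build "abc12" "newstuff"  # new commits found    -> build
should_build "abc12" ""          # nothing new          -> skip
```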

With that we go into the actual build section of our script:

try {
    container(name: 'kaniko', shell: '/busybox/sh') {
        withCredentials([file(credentialsId: 'docker-credentials', variable: 'DOCKER_CONFIG_JSON')]) {
            withEnv(['PATH+EXTRA=/busybox']) {
                sh '''#!/busybox/sh
                    cp $DOCKER_CONFIG_JSON /kaniko/.docker/config.json
                    /kaniko/executor --context `pwd` --destination "$IMAGE_NAMESPACE/$IMAGE_NAME:latest" --digest-file=/shared-data/termination-log --build-arg CI_ENV=Jenkins --build-arg GIT_COMMIT=$GIT_COMMIT --build-arg GIT_BRANCH=$GIT_BRANCH --build-arg VERSION=$VERSION --cache=true
                '''
            }
        }
    }
    env.SHA_256 = sh(script: 'cat /shared-data/termination-log', returnStdout: true).trim()
    echo "SHA 256 digest of our container = ${env.SHA_256}"
} catch (Exception e) {
    env.STATUS = 'rejected'
    echo 'FAILED BUILD: ' + e.toString()
    currentBuild.result = 'FAILURE'
}

A few things are happening here. First, we wrap the section in a try-catch block, because we want to report the build status to Reliza Hub even if the build fails.

Then, we enter the kaniko container to execute the actual build. Notice that we first use the withCredentials wrapper to access our docker-credentials. Once inside the container shell, we copy the credentials file to the location expected by kaniko.

Next we run the actual kaniko build, setting the required build arguments and using the cache to speed up subsequent builds – in particular, caching the npm install step is frequently very useful. One extra flag we add is --digest-file, to extract the sha256 digest of the resulting image, which will be sent to Reliza Hub as part of the build metadata.

Notice that we place the digest file in the shared-data location so we can later extract it from the alpine container, as the kaniko container is not suitable for this.
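As a sketch of that extraction step (the path and digest value below are illustrative stand-ins for what kaniko writes on the shared volume), the digest file is just a one-line sha256 reference read back into a variable:

```shell
# Simulate the digest file kaniko writes to the shared volume
# (directory and digest value are illustrative)
mkdir -p /tmp/shared-data
echo "sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" \
    > /tmp/shared-data/termination-log
# Read it back, as the Jenkinsfile does with env.SHA_256
SHA_256=$(cat /tmp/shared-data/termination-log)
echo "SHA 256 digest of our container = $SHA_256"
```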

Finally, we have a catch block that prints out the error and marks the build as failed on both Jenkins and Reliza Hub.

Then, on line 72, we ship the build metadata to Reliza Hub:

addRelizaRelease(artId: "$IMAGE_NAMESPACE/$IMAGE_NAME", artType: "Docker", useCommitList: 'true')

Next Steps

We covered all the steps needed to build a container image with kaniko on Jenkins using Reliza Hub as metadata storage and container image registry.

If you would like to practice this on a project which does not yet have a sample Jenkinsfile, feel free to use the back-end microservice of the Mafia game – . You should notice that the process is very similar.

Later, I am going to add 3 more tutorials on the subject:

  • Sample Jenkinsfile for packaging and storing helm chart with Reliza Hub
  • Sample Jenkinsfile for multiple container builds with monorepos using kaniko and Reliza Hub
  • Building multi-architecture images with buildah

Meanwhile, if you have any questions, you can always find me on the DevOps Community Discord.
