This guide will get you started with Jenkins, an open-source automation tool written in Java. It's a popular choice among continuous integration and continuous delivery (CI/CD) tools. We will explore what Jenkins can be used for, its features, and the terms associated with a pipeline.
By Graham Beer

Remind me again: What is CI/CD?

CI/CD refers to continuous integration and continuous delivery. Sometimes you will also hear the CD expanded as continuous deployment.

Continuous integration is a practice whereby developers make small changes to code that invoke an automated process to build, test, and package their applications in a safe, repeatable way.

Continuous delivery gives us a way to deliver our code using an automated, consistent approach to our environments, such as Development, QA, and Production.

Lastly, there is continuous deployment. This practice extends continuous delivery: changes that pass automated testing are promoted to production environments automatically.

What is Jenkins?

Jenkins is a tool DevOps engineers use to build and test software projects, providing CI/CD orchestration. With Jenkins, you have a centralized platform on which to build a complete pipeline covering all stages of CI/CD.

Jenkins is extended through plugins, which add functionality and make it easier to build a pipeline. The plugin index at https://plugins.jenkins.io/ lists more than 1,800 plugins. The range of available plugins is impressive, from cloud providers to iOS development, and plugins can also help with integrations, such as pulling your code from GitHub or CodeCommit.

As Jenkins is built on Java, you need to install a Java Development Kit (JDK) to create your own plugin. This tutorial provides more information on this process.

Jenkins architecture

There are two primary components in the Jenkins architecture: the Jenkins controller and the Jenkins agent. Agents take pressure off the controller by distributing the build load across machines, which helps prevent overloading the Jenkins controller.

The Jenkins controller is the central place where builds are scheduled and executed, either on the controller itself or on the agents. All results are gathered from the agents and can be viewed on the controller. From the controller, you can monitor and control the agents, including performing tasks such as powering them on and off as required.

The Jenkins controller assigns jobs to the Jenkins agent, which is where the build runs. The Jenkins agent then reports the results of the build back to the Jenkins controller. A Jenkins agent can also be referred to as a node.

You can configure a node on any server OS, such as Windows, macOS, or Linux, as only a Java runtime is required. The node receives its instructions from the Jenkins controller and, when assigned a task, runs the build and returns the results.

Jenkins job types

A Jenkins job, sometimes known as a Jenkins project, is a way to build a sequential set of tasks. This could include pulling code from a Git repository, building it, and deploying it to a cloud provider. Jenkins has a few ways to build projects. From the Jenkins console, once you have clicked the New Item tab, you will be presented with several job types (see screenshot below).

New item job types from Jenkins

Let's take a look at what some of these jobs are used for.

The Jenkins Freestyle Project is a repeatable build job, script, or pipeline that contains steps and post-build actions. With the Freestyle Project, you can set build triggers, such as an update to a Git repository, and apply project-based security to your project.

The Maven Project job type helps build Maven projects, although you are required to use the Maven Integration plugin.

The Pipeline job, often referred to as Pipeline as Code, is used to define your pipeline in code and store it in a file called a Jenkinsfile. Defining your pipeline in a Jenkinsfile means that you can keep your projects under source control. I will offer a bit more detail on this later in this guide.

The External Job is used if you need to monitor the execution of a process outside of Jenkins, such as on a remote machine. This lets you use the Jenkins dashboard to track jobs run by another automation system.

The Multi-configuration project is a powerful feature that lets you run the same build job with different configurations. This allows you to test an application in different environments, such as Production, QA, or Development, and against different databases.

Jenkins also lets you organize jobs into a hierarchy using folders. With folders, you can manage permissions on a per-folder basis.

The Multibranch Pipeline project extends the Pipeline job, letting you use different Jenkinsfiles for different branches within the same project. For branches in your source control that contain a Jenkinsfile, Jenkins can automatically discover, manage, and execute pipelines.
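A branch's Jenkinsfile can also gate individual stages on the branch name using the declarative when directive. The following sketch (the stage names and branch name are illustrative) runs a deployment stage only on the main branch, while every discovered branch still builds:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Runs for every branch Jenkins discovers
                echo 'Building every branch'
            }
        }
        stage('Deploy') {
            // Only run this stage when the branch being built is 'main'
            when { branch 'main' }
            steps {
                echo 'Deploying from main'
            }
        }
    }
}
```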

The Ivy Project allows Jenkins to automatically configure project dependencies using the Ivy dependency management system.

This concludes our summary of the types of Jenkins jobs. We will now take a deeper look into the concept of Pipeline as Code with a Jenkinsfile.

Using a Jenkinsfile

You can define a Jenkins pipeline using a text file named Jenkinsfile. The Jenkinsfile is written in a domain-specific language (DSL) based on Groovy. Groovy is similar to Java but is more of a dynamic scripting language.

The advantage of defining your pipeline as code is that you gain the benefits of source control: an audit trail of your pipeline that provides a single source of truth, along with code reviews with your peers.

Pipeline syntax

The pipeline supports two types of syntax: declarative and scripted.
The declarative syntax is simpler. It provides a more structured approach than the scripted pipeline, which can make pipeline code easier to write and read. A declarative pipeline must always be enclosed in a pipeline block.

Here is a simple outline of what a declarative Jenkinsfile would look like:

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}

In the example above, the pipeline begins with the agent directive, which is required for all pipelines. The agent tells Jenkins where and how to execute the pipeline. Once an executor is ready, the steps begin to run. The agent also allocates a workspace for the pipeline, which contains the files checked out from source control and any additional working files.
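The agent directive is not limited to any. As a sketch (the label and image names here are illustrative, and the container form assumes the Docker Pipeline plugin is installed), you can pin the pipeline to a labeled node and run an individual stage inside a container:

```groovy
pipeline {
    // Run on any node carrying the label 'linux' (example label)
    agent { label 'linux' }

    stages {
        stage('Build') {
            // This stage overrides the pipeline-level agent
            // and runs inside a Docker container instead
            agent { docker { image 'maven:3-jdk-11' } }
            steps {
                sh 'mvn -v'
            }
        }
    }
}
```

Stage-level agents are useful when only one part of a pipeline needs a special toolchain, while the rest can run anywhere.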

You should see that the pipeline has multiple stages, each containing steps. These stages allow you to build, test, and deploy your application. Each stage runs and then, if successful, hands off to the next. If a step fails, pipeline execution halts and the build is marked as failed.

Timeouts and retries

A powerful feature of writing your pipeline as code is being able to solve problems such as retries and timeouts. You can set the number of retries and exit if a step takes too long.

Here is a simple example to demonstrate these features:

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                retry(3) {
                    sh './shell.sh'
                }

                timeout(time: 3, unit: 'MINUTES') {
                    sh './checker.sh'
                }
            }
        }
    }
} 

If we want to retry our deployment 5 times but never want to spend more than 3 minutes in total before failing the stage, we can define the steps with the following block:

steps {
    timeout(time: 3, unit: 'MINUTES') {
        retry(5) {
            sh './checker.sh'
        }
    }
}

Finishing up

Jenkins provides the post section, which runs at the end of the pipeline and can be used for any clean-up activities. It also provides conditions that run based on the outcome of the pipeline. This example shows the post conditions available for use:

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'echo "Fail!"; exit 1'
            }
        }
    }
    post {
        always {
            echo 'This will always run'
        }
        success {
            echo 'This will run only if successful'
        }
        failure {
            echo 'This will run only if failed'
        }
        unstable {
            echo 'This will run only if the run was marked as unstable'
        }
        changed {
            echo 'This will run only if the state of the Pipeline has changed'
            
        }
    }
}

To conclude this part on pipeline syntax, let's briefly take a look at the scripted syntax.

The scripted pipeline syntax is a general-purpose DSL built with Groovy. A large part of the Groovy language is available in the scripted pipeline, which gives us extra flexibility when constructing our pipelines.

Being able to script in our pipeline means that we can use if/else conditions for flow control.
This example shows how we would use the if/else condition in our pipeline:

node {
    stage('Branch') {
        if (env.BRANCH_NAME == 'main') {
            echo 'This is the main branch'
        } else {
            echo "I'm on the wrong branch!"
        }
    }
}

We can also make use of Groovy's exception handling with try/catch/finally blocks. When a step fails, we can control how we handle the exception.

node {
    stage('Demo') {
        try {
            sh 'exit 1'
        }
        catch (exc) {
            echo 'We have a problem!'
            throw exc
        }
    }
}

Which is better?

The scripted pipeline and the declarative pipeline are fundamentally the same. The main differences between the two are syntax and flexibility. The declarative pipeline was created to offer a simpler way to define a pipeline. The scripted pipeline, with its Groovy scripting abilities, has a steeper learning curve but does have fewer limitations in what you can create.

Environment variables

Jenkins can use environment variables, which can be set for the whole pipeline or per stage. Variables set at the top level of the pipeline are available to all stages, whereas variables set within a stage are local to that stage. Environment variables are exposed through the env variable, which can be used anywhere in a Jenkinsfile. You can view all accessible environment variables at ${YOUR_JENKINS_URL}/pipeline-syntax/globals#env.
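The two scopes can be sketched as follows (the variable names and values here are made up for illustration; BUILD_NUMBER is one of the variables Jenkins itself exposes through env):

```groovy
pipeline {
    agent any
    environment {
        // Top-level: available to every stage in this pipeline
        APP_NAME = 'demo-app'
    }
    stages {
        stage('Build') {
            environment {
                // Stage-level: local to the Build stage only
                BUILD_MODE = 'release'
            }
            steps {
                // Both scopes, plus Jenkins' built-ins, are available here
                echo "Building ${APP_NAME} in ${BUILD_MODE} mode, build #${env.BUILD_NUMBER}"
            }
        }
    }
}
```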

Summary

Jenkins is one of the most popular and commonly used tools for continuous integration and deployment pipelines. It provides a powerful way to automate day-to-day tasks and deploy production applications.

Jenkins is highly configurable and has a vast number of plugins available to make building your pipeline easier. There is a lot of flexibility around what you can do with Jenkins, from simple projects to pipelines.


Declaring your pipelines as code in a Jenkinsfile gives you a way to secure and source-control your pipelines.
