Introducing CloudBees Jenkins Enterprise

Introduction

CloudBees Jenkins Enterprise (CJE) is CloudBees’ commercial version of Jenkins. CJE provides a number of plugins that help organizations address three main problem areas, described below. Additionally, with CloudBees Jenkins Enterprise, Jenkins users are provided with a stable release and frequent patches.

If you upgrade from CloudBees Jenkins Enterprise based on Jenkins 1.x to CloudBees Jenkins Enterprise based on Jenkins 2 (specifically, CloudBees Jenkins Enterprise 2.7.19.1 or higher), you can take advantage of the CloudBees Assurance Program, which is enabled by default through the Beekeeper Upgrade Utility ("Beekeeper"). Beekeeper monitors plugin versions and compares the configuration of your instance against the sets of plugins that CloudBees engineers have identified as verified, trusted, or unverified, providing "envelope enforcement" to ensure the stability and security of your Jenkins instance.

Note

If you are testing functionality known to require plugin versions newer than those currently identified as verified or trusted, you have the option of disabling the Beekeeper Upgrade Utility for the duration of your tests.

For more information about the Beekeeper Upgrade Utility, see the CloudBees Jenkins Enterprise Install Guide.

See release-notes.cloudbees.com for the change log.

CloudBees Jenkins Enterprise helps Jenkins users in three main areas, which can be thought of as scale, efficiency, and security:

Large Install Plugins

CloudBees Jenkins Enterprise helps administrators manage large numbers of jobs, projects and teams. The plugins that help with large installation problems are:

  • Folders Plugin

    Folders let you create hierarchies or custom taxonomies to better manage large numbers of jobs.

  • Templates Plugin

    Capture the “sameness” of configuration in one place and propagate it to the jobs that share it.

  • Backup Scheduling Plugin

    Use Jenkins to back up Jenkins. No more cron jobs or error-prone custom scripts.

  • Plugin Usage Plugin

    Helps determine which plugins are in use by which jobs.

Optimized Utilization Plugins

CloudBees Jenkins Enterprise helps administrators make better use of existing resources with the following plugins:

  • VMware ESXi/vSphere Auto-Scaling Plugin

    Make better use of existing VMware resources by using machines in VMware pools as agents (formerly called slaves).

  • Label Throttle Build Execution Plugin

    Define the bare-metal limits for VMs being used as agents. This speeds up builds when multiple VMs on a single bare-metal machine would otherwise run too many builds concurrently.

  • Even Load Strategy Plugin

    Change the default agent allocation algorithm of Jenkins to allocate jobs to idle machines first.

  • Skip Next Build Plugin

    Temporarily skip new builds of a job while known errors are being fixed.

Security Plugins

CloudBees Jenkins Enterprise helps administrators secure their projects and their installations with the following plugins:

  • Role-Based Access Control (RBAC) Plugin

    Set up sophisticated authorization policies to manage Jenkins.

  • WikiText Descriptions Plugin

    Prevent potential XSS attacks from HTML descriptions in Jenkins.

  • Secure Copy Plugin

    Enable controlled and targeted sharing of artifacts between Jenkins jobs either on different Jenkins instances or within the same Jenkins instance.

  • Folders Plus Plugin

    Permits agent permissions to be associated with folders.

Terms and Definitions

Jenkins (also referenced as Jenkins OSS in CloudBees documentation)

Jenkins is an open-source automation server. Jenkins is an independent open-source community, to which CloudBees actively contributes. You can find more information about Jenkins OSS and CloudBees contributions on the CloudBees site.

CJE

CloudBees Jenkins Enterprise - the commercial version of Jenkins, based on Jenkins OSS Long-Term Support (LTS) releases with frequent patches from CloudBees. CJE also provides a number of plugins that address the main needs of enterprise installations: security, high availability, continuous delivery, and so on.

CJOC

CloudBees Jenkins Operations Center - an operations console for Jenkins that lets you manage multiple Jenkins masters from a single place. See details on the CloudBees site.

CJA

CloudBees Jenkins Analytics - provides insight into your usage of Jenkins and the health of your Jenkins cluster by reporting events and metrics to CloudBees Jenkins Operations Center.

Backup Plugin

Introduction

The Backup plugin simplifies the execution and administration of backups. Backing up CloudBees Jenkins Enterprise is important for several reasons: disaster recovery after events such as a disk failure; retrieving an old configuration when someone accidentally deletes something important; and auditing, to trace the origin of a particular configuration.

The Backup plugin was introduced in Nectar 10.10.

Taking Backup

This plugin lets you create backup jobs as ordinary jobs on CloudBees Jenkins Enterprise. In this way, you can use the familiar interface to schedule backup execution and to monitor any failures in backup activities.

To create a backup job, click New Job on the left and select Backup Jenkins. This takes you to the page where you configure the backup project. Configuring a backup job is very similar to configuring a freestyle project. In particular, you can use arbitrary build triggers to schedule backups. Once the trigger is configured, click Add build step and add the Take backup build step.

You can add multiple backup build steps to take different subsets of backups at the same time, or you can create multiple backup jobs to run different kinds of backups at different intervals. For example, you might have a daily job that backs up only the system configuration and job configuration, since they are small but important, and another job that takes a full backup of the system once a week.
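
For example, the build triggers of those two jobs might use schedules along these lines (standard Jenkins cron syntax; the times and comments are illustrative):

# daily configuration backup, some time between 02:00 and 02:59
H 2 * * *
# weekly full backup, Sundays between 03:00 and 03:59
H 3 * * 0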

Configuring Backup

There are three aspects to the configuration of the Take backup builder.

First you should decide "what to back up": the data included in or excluded from the backup. This plugin divides Jenkins storage into three classifications:

System configuration

This includes all the settings scoped to the entire CloudBees Jenkins Enterprise instance, such as everything in the system configuration page, plugin binaries, plugin settings, user information, and fingerprints: anything that lives outside the individual job directories.

Note
Security

This category includes a master key which CloudBees Jenkins Enterprise uses to encrypt all other private information, such as passwords in job configurations. We recommend selecting the configuration option to omit this master key from backups: since it is small and never changes once created, store it elsewhere and manually restore it when resurrecting a backup.
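
For instance, you could copy the key off the machine once, from its standard location under the Jenkins home directory (the destination host below is hypothetical):

$ scp $JENKINS_HOME/secrets/master.key backup-admin@vault.example.com:/secure/jenkins/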

Job configuration

This includes all the settings scoped to individual jobs. Mostly this maps to the job configuration, and is generally the most precious data.

Build records

This includes information scoped to individual builds, such as its console output, artifacts, test reports, code analysis reports, and so on. Build records can be quite large and may be dispensable.
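
For orientation, these three classifications map roughly onto the standard $JENKINS_HOME directory layout, sketched below (illustrative, not exhaustive):

$JENKINS_HOME/config.xml              # system configuration
$JENKINS_HOME/plugins/                # plugin binaries and settings
$JENKINS_HOME/users/                  # user information
$JENKINS_HOME/fingerprints/           # fingerprints
$JENKINS_HOME/jobs/<job>/config.xml   # job configuration
$JENKINS_HOME/jobs/<job>/builds/      # build records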

When selecting any of these, you may optionally specify a list of file patterns to exclude from the backup. The Backup plugin ships with a number of standard exclusions for well-known files created by Jenkins and popular plugins that would be pointless or expensive to back up. Nonetheless, you may need to add some custom entries appropriate for your own installation. You can also use the exclude list to suppress certain jobs (or whole folders) from backup, and so on.
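
As a sketch, custom exclusion entries are typically path patterns relative to the Jenkins home directory; the entries below are purely illustrative, so check the inline help for the exact pattern syntax your version supports:

jobs/sandbox/
**/*.tmp
*.log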

The next aspect is "where to backup", which controls the destination of the backup.

Azure Blob Storage

This tells CloudBees Jenkins Enterprise to back up data as a file to a configured Azure Blob Storage container.

Amazon S3

This tells CloudBees Jenkins Enterprise to back up data as a file to a configured Amazon S3 bucket.

Local directory

This tells CloudBees Jenkins Enterprise to back up data as a file in the file system of the master. For disaster recovery purposes, it is recommended that you back up files to a network-mounted disk.
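
A minimal sketch of preparing such a destination on Linux, assuming an NFS server named backup-server that exports /exports/jenkins (both names are hypothetical):

$ sudo mkdir -p /mnt/jenkins-backups
$ sudo mount -t nfs backup-server:/exports/jenkins /mnt/jenkins-backups

The Local directory destination would then point at /mnt/jenkins-backups.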

Remote SFTP server

This tells CloudBees Jenkins Enterprise to connect to a remote SFTP server and send the backup there. This is a more convenient way to back up to a remote system on Unix, as it requires no shared file system.

Remote WebDAV server

This tells CloudBees Jenkins Enterprise to connect to a WebDAV server and store a backup in a configured directory.

Note
Storage requirements

For S3 backups, a temporary file containing the whole backup must be stored locally during upload; this is a limitation of the S3 service.

The last aspect is the retention policy, which controls how many backups are kept and for how long. This setting lets you avoid runaway disk consumption caused by old backups. Backup retention is based on file names; backup file names include the job name and the build number of the backup job, so you can safely use a single destination directory for multiple backup jobs, and their retention policies are enforced without interfering with each other.
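
For example, a shared destination directory might look like this, where each file name encodes the backup job's name and build number (names are illustrative):

$ ls /backups/jenkins
backup-daily-12.tar.gz
backup-daily-13.tar.gz
backup-weekly-3.tar.gz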

Exponential decay

This tells Jenkins to keep many newer backups and fewer old ones. It’s called "exponential decay" because the number of backups kept falls off exponentially with age (proportional to e^-t). In this way, you’ll always cover the entire lifespan of your Jenkins installation with some backups, with denser coverage of recent data.

Keep all backups

In this mode, Jenkins will not delete any backups that it has created. This is useful if you are managing backup retention externally, such as via logrotate.

Keep N backups

In this mode, Jenkins will keep the latest N backups and delete any older ones.

"Backup to S3" with Amazon S3 compatible storage systems

Note
Since "CloudBees Back-up Plugin" version 3.33

The "Backup to S3" destination can be used to store backups in storage systems that are compatible with Amazon S3 (e.g. EMC Atmos storage, OpenStack Swift…​).

To do this, use the standard mechanism of the AWS SDK to modify the list of Amazon S3 endpoints known to the SDK by defining a file com/amazonaws/partitions/override/endpoints.json in the classpath of Jenkins. This modification can either add a new Amazon S3 endpoint or replace the existing Amazon S3 endpoints.

Notes on endpoints.json:

  • CloudBees recommends that customers ask their storage system vendor for recommendations on customizing the standard AWS SDK endpoints.json configuration file. Note that this endpoints.json file is used by all the AWS SDKs, including aws-sdk-net, aws-sdk-js, aws-sdk-java and aws-sdk-ruby.

  • The desired S3-compatible endpoint MUST be declared both in the section partitions.regions and in the section partitions.services.s3.endpoints.

  • The section partitions.services.s3.endpoints.<my-s3-compatible-endpoint>.signatureVersions (where <my-s3-compatible-endpoint> is the new section) must be filled in according to the specifications of the vendor of the S3-compatible storage system.

Customization of the Jenkins startup command line to add the endpoints.json file to the classpath

One way to add this endpoints.json file to the classpath of Jenkins is to use the Java command-line parameter -Xbootclasspath/a:/path/to/boot/classpath/folder/ and to place com/amazonaws/partitions/override/endpoints.json in the folder /path/to/boot/classpath/folder/.

Sample:

Jenkins startup command line
java -Xbootclasspath/a:/opt/jenkins/boot-classpath/ -jar /opt/jenkins/jenkins.war
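
After restarting Jenkins, you can sanity-check that the override file sits where the SDK expects it, relative to the appended classpath entry (paths follow the sample above):

$ ls /opt/jenkins/boot-classpath/com/amazonaws/partitions/override/endpoints.json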

Customization of endpoints.json

To add the Amazon S3 compatible storage endpoint while keeping all the existing AWS endpoints, we recommend editing the endpoints.json file and adding a partition to the out-of-the-box AWS partitions.

To remove all the existing AWS endpoints and keep just the Amazon S3 compatible storage endpoint, we recommend editing the endpoints.json file, deleting all the existing AWS partitions, and adding a partition with the Amazon S3 compatible storage endpoint.

Sample AWS SDK endpoints.json adding a "My Company" partition containing a "us-mycompany-east-1" region with the s3 endpoint "s3-us-mycompany-east-1.amazonaws.com".

com/amazonaws/partitions/override/endpoints.json
{
  "partitions": [
    {"_comment": "OUT OF THE BOX AWS PARTITIONS ..."},

    {
      "defaults": {
        "hostname": "{service}.{region}.{dnsSuffix}",
        "protocols": [
          "https"
        ],
        "signatureVersions": [
          "v4"
        ]
      },
      "dnsSuffix": "cloud.mycompany.com",
      "partition": "mycompany",
      "partitionName": "My Company",
      "regionRegex": "^(us|eu|ap|sa|ca)\\-\\w+\\-\\d+$",
      "regions": {
        "us-mycompany-east-1": {
          "description": "My Company US East 1"
        }
      },
      "services": {
        "s3": {
          "defaults": {
            "protocols": [
              "http",
              "https"
            ],
            "signatureVersions": [
                "s3",
                "s3v4"
             ]
          },
          "endpoints": {
            "us-mycompany-east-1": {
              "hostname": "s3-us-mycompany-east-1.amazonaws.com"
            }
          }
        }
      }
    }
   ],
   "version": 3
}

Restoring From Backup

Backup files are in tar+gzip format, with file names relative to the Jenkins home directory. Here is an example of restoring a backup using Linux (GNU tar) syntax:

$ cd $JENKINS_HOME
$ tar xvfz /backups/jenkins/backup-daily-13.tar.gz
config.xml
jobs/myjob/config.xml
...

$JENKINS_HOME is relocatable, meaning you don’t have to restore the backup into the same instance it was created from. This lets you rehearse the restore procedure by creating a separate Jenkins instance on your own computer and restoring there.
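
A minimal rehearsal along those lines, assuming a local copy of jenkins.war (all paths are illustrative):

$ export JENKINS_HOME=/tmp/jenkins-restore-test
$ mkdir -p $JENKINS_HOME && cd $JENKINS_HOME
$ tar xvfz /backups/jenkins/backup-daily-13.tar.gz
$ java -jar jenkins.war    # starts a throwaway instance against the restored home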

Note
If performance issues are found while taking big backups

It is recommended to configure an executor on the master, restrict it with a label, and assign the backup job to it; this way the backup is taken directly on the master, avoiding transferring the data to an agent first.

Even Scheduler Plugin

Introduction

When using many agent nodes, Jenkins needs to decide where to run any given build. This is called "scheduling", and there are several aspects to consider.

First, it can be necessary to restrict which subset of nodes can execute a given build. For example, if your job is to build a Windows installer, chances are it depends on some tools that are only available on Windows, and therefore you can only execute this build on Windows nodes. This portion of the scheduling decision is controlled by Jenkins core.

The second part is determining which node, out of all the qualifying nodes, carries out the build; that is what this plugin deals with.

Default Jenkins Behavior

To better understand what this plugin does, let us examine the default scheduling algorithm of Jenkins. How does it choose one node out of all the qualifying nodes?

By default, Jenkins employs the algorithm known as consistent hashing to make this decision. More specifically, it hashes the name of each node into a number of points proportional to the node's number of available executors, then hashes the job name to create a probe point for the consistent hash. More intuitively speaking, Jenkins creates a priority list for each job that lists all the agents in their "preferred" order, then picks the most preferred available node. This priority list differs from one job to another, and it is stable: adding or removing nodes generally causes only limited changes to these priority lists.
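
A toy illustration of the priority-list idea, using md5 in place of whatever hash Jenkins actually uses (this is a sketch of the concept, not the real implementation):

$ for node in agent-1 agent-2 agent-3; do
>   echo "$(echo -n "myjob:$node" | md5sum | cut -c1-8)  $node"
> done | sort

Sorting the per-node hashes yields a stable, job-specific preference order; adding agent-4 inserts one new entry but leaves the relative order of the existing nodes unchanged, which is the stability property described above.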

As a result, from the user’s point of view, it looks as if Jenkins tries to always use the same node for the same job, unless it’s not available, in which case it’ll build elsewhere. But as soon as the preferred node is available, the build comes back to it.

This behavior is based on the assumption that it is preferable to reuse the same workspace as much as possible, because SCM updates are more efficient than SCM checkouts. In a typical continuous integration situation, each build contains only a limited number of changes, so updates (which fetch only updated files) indeed run substantially faster than checkouts (which refetch all files from scratch).
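
The difference is the familiar one between a fresh clone and an incremental update; in git terms (any SCM behaves analogously):

$ git clone https://example.com/big-repo.git workspace   # first build on a node: fetch everything
$ cd workspace && git pull                               # subsequent builds: fetch only new changes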

This locality is also useful for a number of other reasons. For example, on a large Jenkins instance with many jobs, this tends to keep the number of the workspaces on each node small. Some build tools (such as Maven and RVM) use local caches, and they work faster if Jenkins keeps building a job on the same node.

However, the notable downside of this strategy is that when each agent is configured with multiple executors, it does not actively try to balance the load across nodes. Say you have two agents X and Y, each with 4 executors. If at some point X is building FOO #1 and Y is completely idle, the upcoming BAR #1 still gets assigned to X with 3/7 probability (because X has 3 idle executors and Y has 4 idle executors).

Even Loading Strategy

What this plugin offers is a different scheduling algorithm, which we refer to as "even loading strategy".

Under this strategy, the scheduler prefers idle nodes absolutely over nodes that are doing something. Thus in the above example, BAR #1 will always run on Y, because X is currently building FOO #1. Even though it still has 3 idle executors, it will not be considered so long as there are other qualifying agents that are completely idle.

However, idle executors are still idle, so if more builds are queued up, they will eventually occupy the other 3 executors of X, thereby using this agent to its fullest capacity. In other words, with this algorithm, the total capacity of the system does not change—only the order in which the available capacity is filled.

The strength of this algorithm is that you are more likely to get a fully idle node. Quite simply, executing a build on a fully idle system is faster than executing the same thing on a partially loaded system, all else being equal.

However, the catch is that all else may not be equal. If the node does not already have a workspace initialized for the job, you’ll pay the price of a full fresh checkout, which can cancel out other performance gains. In short, even loading is most useful for jobs that are slow to build, but whose very first build on a node is not notably slower than any other.

The Even Scheduler plugin was introduced in Nectar 11.10.

Using Even Scheduler Plugin

This plugin adds two ways to control which algorithm to use for a given job:

Global preference

Picks the scheduling algorithm globally for all jobs, unless specifically overridden by individual jobs.

Per-job preference

Picks the scheduling algorithm for a specific job, regardless of the global preference.

Selecting Global Preference

Go to "Manage Jenkins" and select "Configure System" to reach the system configuration page, then look for "Default Scheduler Preference". Selecting "Prefer execution on idle nodes over previous used nodes" makes the even loading strategy the default.

Selecting Per-Job Preference

Go to the configuration screen of a job, then select "Override the default scheduler preference". This activates the per-job override. If "Prefer execution on idle nodes over previous used nodes" is selected, Jenkins uses the even loading strategy for this job; otherwise Jenkins uses the default scheduling algorithm for this job.

Folders Plugin

Introduction

The Folders plugin allows you to organize jobs in hierarchical folders, much like how you organize files in directories in your file system.

While this plugin is now open source, it is a foundational element of CloudBees Jenkins Enterprise.

Overview

Jenkins has limited capacity for organizing a large number of jobs or organizing jobs around taxonomies such as projects or departments. Today, most teams group jobs around views on the dashboard, but views are not optimal for understanding custom taxonomies. The Folders plugin addresses this limitation because it helps capture taxonomies in a user environment and helps organize jobs around these taxonomies.

The Folders plugin enables you to create a folder and group related jobs together. These jobs can be organized around a project, department, sub-department, release, or any taxonomy that you choose to define. You can create an arbitrary level of nested folders. Folders are namespace-aware, so Job A in Folder A is logically distinct from Job A in Folder B.

The Folders plugin allows you to clone a folder with its children intact. This is useful to seed new projects or jobs with information from a particular project type. Folders can also define properties that are visible to jobs inside them, enabling you to simplify branch and pipeline management.

Setting up the Folders plugin

The Folders plugin is extremely easy to set up and use. Enable the CloudBees Folders Plugin in the plugin manager as shown in Installing the Folders plugin. Restart CloudBees Jenkins Enterprise to enable the plugin.

Figure 1. Installing the Folders plugin

Using the Folders plugin

Go to New Job, choose Folder as the job type as in Creating a new Folder.

Figure 2. Creating a new Folder
Tip

To replicate an existing folder, choose Copy Existing Job and choose the folder that you want to replicate. This will replicate all jobs and nested folders.

A sample taxonomy with the Folders plugin

Sample Taxonomy with Folders plugin shows a potential taxonomy with the Folders plugin. In this example, an organization wants to split its jobs by departments and sub-departments. Here, we have created a folder called "Department A" with a sub-folder for a sub-department. "A very important job" that cuts across sub-departments sits at the top level in the department.

Figure 3. Sample Taxonomy with Folders plugin

Using folders with other CJE plugins

The power of the Folders plugin is unleashed with other CloudBees Jenkins Enterprise plugins.

Role-Based Access Control Plugin

You can use Folders with the Role-Based Access Control Plugin to enable folder-level security roles. By default, the roles are inherited by sub-folders.

Tip

You can use the filter mechanism with the RBAC plugin to filter out permissions for a particular sub-folder so that it does not inherit the parent folder roles.

Templates Plugin

You can place a template inside a folder rather than at top level. Then the template is only offered to jobs (or other templates) in or below that folder.

Folders Plus Plugin

Introduction

The Folders Plus plugin provides additional capabilities not available in the free Folders plugin, including the ability to:

  • restrict the use of slaves to specific folders

  • move jobs (or subfolders) from one folder to another (or to or from the root of Jenkins)

  • get additional health reports for jobs within a folder

  • set custom icons on a folder

  • define environment variables that will be passed to the builds of all jobs within a folder

  • display selected jobs from subfolders in a higher-level view

  • restrict which kinds of items may be created in a folder

Controlled slaves

Core Jenkins functionality does not provide any mechanism to prevent somebody who can configure a build from assigning that build to a slave. This can be a concern when specific slaves contain secrets that are not for general consumption.

Consider the case of a Jenkins instance shared by both an Operations group and multiple Developer groups. The Operations group has specific credentials that are required to deploy the company website into production, so it creates a dedicated slave specifically for that deployment and stores the required credentials directly on that slave. The slave is configured to only accept jobs that are tied to it specifically. However, there is nothing stopping a developer from configuring a job they control to run on the Operations slave and copying off the credentials.

The Controlled Slaves functionality added by the Folders Plus plugin provides the required restrictions to prevent the above concern.

Note

The controlled slaves functionality is designed to allow the management of controlled slaves in situations where the person responsible for the slave does not have permission to configure the folder that will be approved to use the slave, and the person responsible for the folder does not have permission to configure the slave.

Future versions of this plugin may provide a simplified workflow for the case where the person setting up the controlled slave has configure permission on both the slave and the folder.

Configuring Controlled slaves

The first step is to configure the slave to only accept jobs from within approved folders.

Note

Once a slave is configured to only accept jobs from within approved folders it will refuse to build jobs in the root of Jenkins.

Open the configure screen of the slave (Enabling approved folders functionality on a slave) and select the Only accept builds from approved folders option.

Figure 4. Enabling approved folders functionality on a slave

After saving the configuration you should see a screen like that in A slave with approved folders functionality enabled and no approved folders assigned.

Figure 5. A slave with approved folders functionality enabled and no approved folders assigned

At this point the person with permissions to configure the folder needs to create a request for a controlled slave. They need to open the folder in Jenkins (Creating a controlled slave request - step 1) and select the Controlled Slaves action.

Figure 6. Creating a controlled slave request - step 1

This will display the list of controlled slaves assigned to the folder (Creating a controlled slave request - step 2). We want to add a new controlled slave so select the Create request action. Confirm the request creation (Creating a controlled slave request - step 3) and the Request Key should be generated and displayed (Creating a controlled slave request - step 4). The request key needs to be given to the person with permissions to configure the controlled slave.

Figure 7. Creating a controlled slave request - step 2
Figure 8. Creating a controlled slave request - step 3
Figure 9. Creating a controlled slave request - step 4

At this point the person with permissions to configure the slave needs to approve the request. They need to select the Approved Folders action from the slave (A slave’s approved folders screen).

Figure 10. A slave’s approved folders screen
Note

The Approved Folders screen allows multiple tokens to be issued; a token can be used to sign multiple requests, and it is left up to the administrator of the slave to decide whether to create a new token or use an existing one.

Creating a new token for each request has the advantage that each approved folder can be revoked individually. Conversely, re-using an existing token allows bulk revocation of folders.

If there are no existing tokens for approving controlled slave requests, you will need to create a new token by selecting the Create token action and confirming the creation (Creating a token for approving requests); otherwise the Authorize button on any existing token can be used to authorize a request using that token. Either route should display the Authorize Request screen (Approving a controlled slave request).

Figure 11. Creating a token for approving requests
Figure 12. Approving a controlled slave request

Enter the Request Key and press the Authorize button. The Request Secret should be generated and displayed (An approved request for a controlled slave). The request secret needs to be given to the person with permissions to configure the folder.

Figure 13. An approved request for a controlled slave

At this point the person with permissions to configure the folder needs to return to the request screen, enter the Request Secret provided by the person responsible for configuring the slave (Completing a controlled slave request), and click the Authorize button.

Figure 14. Completing a controlled slave request

Once the controlled slave request has been completed, the controlled slave should appear in the folder’s list of slaves (A folder with an approved slave) and the folder should appear on the slave (A slave with an approved folder).

Note

If the person viewing the slave does not have permissions to view the folder, they will not see the folder name when viewing the slave’s status screen or list of approved folders. In this case an additional indicator will be provided detailing the number of “hidden” folders in addition to those that the user is permitted to see.

Figure 15. A folder with an approved slave
Figure 16. A slave with an approved folder

Troubleshooting

Issues with third-party plugins

In order to enforce that only jobs belonging to approved folders can be built on a controlled slave, it is necessary to identify the job that owns any build submitted to the Jenkins build queue. Once the job has been identified, the folder hierarchy can be traversed to see whether any parent folder is on the list of approved folders.

One of the extension points that Jenkins provides is the ability to define custom build types that can be submitted to the build queue. When developing such a plugin, the plugin author should ensure that whenever a custom build type is associated with a build job, the owning build type hierarchy eventually resolves to the job to which the task belongs.

If a third-party plugin author has not ensured that their custom build type is correctly associated with its originating job, the controlled slaves functionality will be unable to identify the originating job, and consequently will refuse to let the job build on the controlled slave.

If it is critical to enable the third-party plugin to build on the slave, at the risk of allowing other custom build types to run on the controlled slave, there is an advanced option that can be enabled on a per-slave basis.

Issues with builds being blocked forever

There are multiple reasons why Jenkins may block a build from executing, and the controlled slave functionality adds one more. Note that when Jenkins evaluates a job for building, once it has identified one reason why the job cannot start, it does not evaluate any of the other reasons. Therefore, when Jenkins reports why a build is blocked, it reports only the first reason it encountered. In such cases, the user may not realize that the build is blocked because it is not within an approved folder.

Jenkins does not re-evaluate whether builds are still blocked until an event occurs that might allow jobs to execute. Such events include, but are not limited to:

  • A job being added to the build queue

  • A job being removed from the build queue

  • A job completing execution

  • A node coming on-line

  • A node going off-line

If a job is submitted to the build queue and is blocked because it is not in an approved folder, it will not be re-considered for execution until one of the re-evaluation triggers occurs. Therefore, even if the folder containing the job is added to the list of approved folders, existing blocked builds will not become unblocked until a queue re-evaluation trigger is fired.

Miscellaneous features

There are other smaller features available in Folders Plus.

Moving

You can use the Move action to move a job, or entire subfolder, from one location to another. The configuration, build records, and so on will be physically relocated on disk.

Beware that many kinds of configuration in Jenkins refer to jobs by their relative or full path. If you move a job to a different folder, you may also need to update configuration that was referring to that job.

Health reports

The base Folders plugin has a simple way of displaying a health report next to a folder in the dashboard: it simply indicates the worst status (such as failing) of any job in the folder. Folders Plus offers some alternate metrics:

  • the average health of jobs within the folder

  • whether some jobs are disabled

  • counts of jobs with different statuses, such as stable or unstable

Icons

Normally folders all have a fixed “folder” icon. Folders Plus allows you to customize this with other kinds of icons:

  • a blue, yellow, or red ball corresponding to the most broken job in the folder

  • a variety of icons packaged with Jenkins, such as that for a “package”

  • any icon you like, if you provide a URL

Environment variables

You may configure an Environment variables property on a folder, giving a list of variable names and values. These will be accessible as environment variables during builds. This is a good way to define common configuration used by many jobs, such as SCM repository locations.
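
For instance, a shell build step in any job inside the folder could refer to such a variable directly; here REPO_BASE is a hypothetical variable name defined by the folder property:

$ git clone "$REPO_BASE/my-component.git"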

Newer versions of Jenkins also allow these variables to be used during SCM polling or other contexts outside of builds.

List view column

The Pull information from nested job list view column may be selected when configuring a view on a folder or at top level in Jenkins. This column allows you to display some aspect of a job (available as another kind of list view column) for every subfolder being displayed.

For example, you might have a dozen folders corresponding to different products, each containing several jobs with predictable names: build, release, and so on. You can create a view at the Jenkins root which contains all these folders as entries. Then add this kind of list view column, specifying the job name build and the standard Status list view column. Now your custom view will display all of the product folders, and for each will show a blue, yellow, or red ball corresponding to the current status of the build job in that folder.

Restricting children

The Folders Plus plugin allows you to specify the types of jobs that a particular folder allows. To choose the specific job types that the folder can allow, enable Restrict the kind of children in this folder as shown in Specifying parameter types for Folders.

Figure 17. Specifying parameter types for Folders

Label Throttle Build Plugin

Introduction

The Label Throttle Build plugin brings hypervisor-aware scheduling to Jenkins. The plugin allows users to limit the number of builds on over-subscribed VM guests on a particular host.

Overview

When agents are set up as virtual machines and share the same underlying physical resources, Jenkins may think that there is more capacity available for builds than there really is.

For example, in such an environment, Jenkins might think that there are 10 agents with 2 executors each, but in reality the physical machine cannot execute 20 concurrent builds without thrashing. The number is usually much lower; say, 4.[1] This is particularly the case when you have a single-system hypervisor, such as VMWare ESXi, VirtualBox, etc.

Every time a new build is to start, Jenkins schedules it to one of the available virtual agents. However, in this particular case the underlying physical infrastructure cannot support all the virtual agents running their respective builds concurrently.

CloudBees Jenkins Enterprise allows you to define an actual limit to the number of concurrent builds that can be run on the system. One can group agents together, then assign a limit that specifies how many concurrent builds can happen on all the agents that belong to that group. In this way, CJE avoids overloading your hypervisor host machine.[2]

The benefit of using this plugin is that builds run much faster as the underlying physical machines are not overloaded anymore.

The Label Throttle plugin was introduced in Nectar 11.04.

Setting up the Label Throttle Build plugin

Enable the CloudBees Label Throttle Build plugin in the plugin manager as shown in Install from the plugin manager. Restart CloudBees Jenkins Enterprise to enable the plugin.

Figure 18. Install from the plugin manager

Configuring a label throttle

First, decide on a label and assign it to all the agents that you’d like to group together. For example, you can use the hypervisor host name as a label, and put that label on all the agents that are the virtual machines on that host. This can be done from the agent configuration page as shown in Set appropriate label on the agent configuration page.

Figure 19. Set appropriate label on the agent configuration page

Then click the newly entered label to jump to the label page as shown in Go to the labels page.

Figure 20. Go to the labels page

Then configure this label and enter the limit as shown in Set limit on the hypervisor.

Figure 21. Set limit on the hypervisor

With this setting, the total number of concurrent builds on hypervisor1 is limited to 2, and CloudBees Jenkins Enterprise will enforce this, as you can see from the executor state in Label Throttle Build plugin in action. Two builds are already running, so the third job sits in the queue.

Figure 22. Label Throttle Build plugin in action

Fast Archiver Plugin

Introduction

The Fast Archiver plugin uses an rsync-inspired algorithm to transfer archives from agents to a master. The result is that builds complete faster and network bandwidth usage is reduced.

The feature was introduced in CloudBees Jenkins Enterprise 12.05.

Overview

After a job is built on an agent, selected build artifacts in the workspace may be copied from the agent to the master so those artifacts are archived. The Fast Archiver plugin takes advantage of the fact that there are usually only incremental changes between the build artifacts of consecutive builds. Such incremental changes are detected, and only the changes that need to be sent from the agent to the master are transferred, saving network bandwidth. The algorithm is inspired by rsync’s weak rolling checksum algorithm, which can efficiently identify changes.
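
The effect is analogous to synchronizing a directory with rsync itself, which likewise transfers only the deltas between mostly-identical trees (the command below is only an analogy, not something the plugin runs):

$ rsync -az --stats build/artifacts/ archive-host:/archive/myjob/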

The changes are gzipped to further reduce network load.

Setting up Fast Archiver

The Fast Archiver plugin is automatically installed and enabled if you have downloaded CloudBees Jenkins Enterprise. This plugin takes over the built-in artifact archiving functionality in Jenkins; if you are already using that feature, no configuration change is needed to take advantage of this one. The plugin also does not touch the persisted form of job configurations, so you can simply uninstall it and all your jobs will fall back to plain vanilla artifact archiving.

Simply use the regular Archive artifacts option in the Post-build Action section for a build. You can see the result in Fast archiver running.

Figure 23. Fast archiver running
Note

Note that fast archiving only occurs on builds run on an agent, not those run on the Jenkins master. Also, there must be previous builds that produced artifacts for the new artifacts to be compared against; otherwise, Jenkins performs a regular complete artifact transfer. It does the same if the fast archiver detects that data is corrupted due to an internal bug.

Normally you need do nothing to configure fast archiving. As can be seen in Fast archiving configuration, this “artifact manager” is enabled by default when the plugin is first loaded. However, you can remove this strategy and later re-add it in the main Jenkins configuration screen.

Figure 24. Fast archiving configuration

Role-Based Access Control Plugin

Introduction

The Role-Based Access Control plugin gives a Jenkins administrator the ability to define various security roles that apply to the system they administer. Once roles have been defined, the Jenkins administrator can assign those roles to groups of users. The assignment of roles can take place either at the global level or limited to specific objects within the system. Additionally, the Jenkins administrator can delegate the management of groups for specific objects to specific users.

The Role-Based Access Control plugin combines with the Folders plugin to give a powerful solution for managing a Jenkins instance that is shared by multiple teams of users. The Jenkins administrator can create folders for each of the teams and then create groups in those folders for each of the roles that team members can have. By delegating the management of group membership (but not the management of the roles assigned to groups) to the team leaders, the Jenkins administrator can empower the team leads to manage the permissions of their team while reducing their own administrative overhead.

The RBAC plugin was introduced in Nectar 11.04.

Concepts and Definitions

The Role-Based Access Control plugin introduces a number of new concepts to Jenkins when enabled. As these concepts and terms are used in a lot of different systems other than Jenkins, this section defines their meaning within the Role-Based Access Control plugin.

Permission

A Permission is a basic capability, such as starting a job or configuring a build agent. There is a core set of permissions defined by Jenkins, but some plugins define further permissions. If an action within Jenkins requires a specific permission, then the user wishing to perform that action must have the required permission.

Role

A Role is just a set of permissions grouped into some meaningful category. For example, a “developer” role might include viewing jobs, browsing workspaces, starting (or canceling) builds, and creating version control tags; but not configuring (or deleting) those jobs, managing the build agents they run on, or other permissions. The Role Based Access Control plugin allows the Jenkins administrator to define as many roles as are needed; these definitions will then be available throughout the system.

Security Realm

The Security Realm is responsible for establishing the identity of users after login. Jenkins ships with a number of security realm implementations and plugins can define additional ones, typically connecting to external services such as LDAP. Only one security realm can be active at a time.

External Group

An External Group is a group of users which is defined outside of the control of Jenkins and reported by the security realm. For example when the LDAP security realm is active, the LDAP groups that a user is a member of are considered external groups by Jenkins. The same is true of the Active Directory security realm. (Some security realms, such as the built-in “Jenkins’s own user database”, do not support external groups.)

Authorization Strategy

The Authorization Strategy picks up user definitions from the security realm and is responsible for determining what permissions users have on various objects within Jenkins. Jenkins ships with a number of authorization strategy implementations and plugins can define additional ones—including Role-Based Access Control. Only one authorization strategy can be active at a time; since Role-Based Access Control subsumes the functionality of most others this is unlikely to be a limitation.

Group

A Group (termed a Local Group when there is a chance of ambiguity) is a set of external groups, individual users, and even other local groups, which has been defined within Jenkins by the Role-Based Access Control plugin. Besides membership, a group may be granted some of the roles defined separately in Jenkins. Groups can be defined either at the root of Jenkins, or within specific objects such as folders or even single jobs. When a group is defined within a specific object, that group is local to the object and any children (such as nested jobs and subfolders). A user’s effective permissions are the aggregate of all the roles assigned to the groups they are members of which are within the scope of the object they wish to perform an action on.

Role Filter

A Role Filter may be defined on an object within Jenkins, and allows roles to be restricted from propagating through from the parents of the object. For example, if a specific job is to be hidden from all but a small team of users, a group would be created in that job with the required roles assigned to that group. A role filter would then be applied to the job to suppress other roles from propagating through from the parent. Administrators can however specify that certain roles, such as Jenkins system administrator, must never be filtered out.

Setting up the Role-Based Access Control plugin

Before configuring the Role-Based Access Control plugin, it is helpful to review the checklist in Pre-configuration checklist.

Table 1. Pre-configuration checklist
Check Done

Have you got a list of the roles you want to define?

It is usually best to start small; you can always add more later. If unsure, a reasonable minimal set could be: Read-only, Developer, Manager, and Administrator. It is also helpful to think about what you want those roles to be able to do.

Yes / No

Have you got a list of groups you want to define?

You will typically require at least one local group for each role you define. If some roles are only being assigned to specific users or groups of users on specific objects, then each of those assignments will require a group local to that object.

Yes / No

Have you configured and tested your security realm?

The first time you enable the Role-Based Access Control plugin, it defaults to a configuration whereby logged-in users can do anything and users cannot access the system anonymously. If you have not tested that you can successfully log in to the system with the security realm you are using, enabling the Role-Based Access Control plugin may result in everyone being completely locked out of the system. See Recovering from a lock-out for details on how to restore access to the system in the event of a lock-out.

Yes / No

Does your security realm provide details of external groups?

If the security realm does not provide details of external groups then you (or somebody you delegate the permissions to) may have to manually maintain synchronization between the external group membership and any corresponding shadow local groups that you may define within Jenkins.

Yes / No

Is this the first time the Role-Based Access Control plugin will be enabled?

The Role-Based Access Control plugin remembers its configuration even when a different authorization strategy is configured. This allows easier experimentation with the plugin, but a side effect is that you may have to remove the existing configuration if you want to start from a clean slate. See Completely resetting the configuration for details on how to remove any existing configuration.

Yes / No

Are there specific objects which require different permissions?

If most of your jobs will have a permissive set of permissions, but a few jobs will have permissions restricted to specific users, then the typical design is to grant the permissive permissions at the root level, filter those permissions out on the few jobs that have restricted permissions, and define groups for the users who require permissions on the restricted jobs.

Yes / No

Tip

If you are still prototyping how the Role-Based Access Control plugin can help you, it can be useful to defer setting up a proper security realm while you are testing Jenkins configuration. The Mock Security Realm plugin lets you simulate having multiple users, as well as various external groups that can be bound to local groups. It provides no actual security (everyone has a dummy password), so replace it with another security realm before going into production.

Enabling the authorization strategy

The Role-Based Access Control authorization strategy is enabled from the global security configuration screen (Manage Jenkins » Configure Global Security) by selecting the Role-based matrix authorization strategy from the Authorization selections (displayed when the Enable Security checkbox is enabled) and then saving the configuration. See Enabling the Role-Based Access Control authorization strategy.

Note

The first time you enable the Role-Based Access Control plugin, it defaults to a configuration whereby logged in users can do anything and users cannot access the system anonymously. If you have not tested that you can successfully log in to the system with the security realm you are using, enabling the Role-Based Access Control plugin may result in everyone being completely locked out of the system. See Recovering from a lock-out for details on how to restore access to the system in the event of a lock-out.

Alternately, you may select the option Typical initial setup when enabling this strategy. This option is only available when you are logged in, because it creates a group of “administrators” (with full permissions) of whom you are initially the sole member. For convenience, it also creates initially empty groups for “developers” and “browsers” with useful permission sets; later you might want to add some existing users to developers, or add the special labels anonymous or authenticated to browsers, etc.

Figure 25. Enabling the Role-Based Access Control authorization strategy

When the Role-Based Access Control authorization strategy is enabled, there will be a number of changes to the Jenkins interface.

Figure 26. The main Jenkins screen with the additional icons for Groups and Roles
Figure 27. The Manage Roles action from the main Manage Jenkins screen

Configuring and managing roles

Roles are configured and managed from the Manage Roles screen accessed via the Manage Jenkins screen (see The Manage Roles action from the main Manage Jenkins screen). The first time the Role-Based Access Control authorization strategy is enabled, the Manage Roles screen should look something like The Manage Roles screen after the Role-Based Access Control authorization strategy has been initially enabled. The screen consists of a table of check boxes, with each row of the table corresponding to a role and each column corresponding to a permission. By checking the boxes you can assign permissions to roles.

Figure 28. The Manage Roles screen after the Role-Based Access Control authorization strategy has been initially enabled

There are two types of roles:

  • system roles.

  • user-defined roles.

System roles cannot be removed. There are two system roles:

  • anonymous

    This role is automatically assigned to any user that is not logged in. If you do not want anonymous users to be able to access the system then do not assign any permissions to the anonymous role.

  • authenticated

    This role is automatically assigned to any user that is logged in. If you want very fine grained control over user permissions you may end up removing all permissions from this role.

To define a new role you type the name of the role you want to define into the Role to add text box (Adding a role called Administrator).

Figure 29. Adding a role called Administrator

Once you have typed the name in, click the Add button. This will add a new row to the bottom of the table (After clicking the Add button).

Figure 30. After clicking the Add button

Once you have created the row for the new role, you can assign permissions for that role by clicking the appropriate check boxes.

Note

If you want to assign all permissions there is a Check all button at the far right of the screen. There is also a Clear all button to speed up clearing all the check boxes in a specific row. See The three icons for: checking all the check boxes in a row; clearing all the check boxes in a row; and removing the role. and After clicking on the Check all icon for the Administrator role

Tip

If you install the Extended Read Permission plugin you will see another permission: to be able to view, but not change, configuration of objects such as jobs. This can be useful for certain roles; for example, a developer role might be granted this permission so that developers can more easily debug build failures while still requiring a project administrator to make changes to the job setup.

Note

Sometimes one permission will automatically imply other permissions. For example, Overall/Administer usually implies every other permission (running Groovy scripts may be excluded). When you have checked the overarching permission but not other permissions it implies, these other permission checkboxes will be shown with a colored background to remind you that they are also implicitly part of the role.

Figure 31. The three icons for: checking all the check boxes in a row; clearing all the check boxes in a row; and removing the role.
Figure 32. After clicking on the Check all icon for the Administrator role

If you need to delete a role, there is a remove button at the far right of the row for all of the user-defined roles.

Normally roles are filterable: anyone with the Role/Filter permission on an object such as a folder is allowed to block that role from propagating into the object from its parent (such as Jenkins overall), possibly while allowing another group defined on the object to explicitly grant it. (See Configuring and managing role filters.) For some roles it is undesirable to permit this: a Jenkins administrative role ought to be valid anywhere in the system, so that a project manager cannot accidentally block even site administrators from seeing an otherwise private project. (That situation could be considered a kind of lockout.) For such roles you should uncheck the Filterable checkbox to make sure the role applies globally.

Once you have defined all the roles you require, you can save that configuration by clicking on the Save button.

Warning

Care should be taken when removing the Overall Administer permission from roles, as it is possible to lock yourself out of administrative access. There are some safeguards which will protect you from obvious ways of locking out administrative access; however, as you may legitimately want to remove your own administrative access, the safeguards can only protect against the obvious issues. See Recovering from a lock-out for details on how to restore access to the system in the event of a lock-out.

Configuring and managing groups

Groups can be defined at any point in Jenkins where the groups action is available. The following classes of objects support group definitions out of the box:

  • Jenkins itself

  • Jobs

  • Maven modules in a Maven project

  • Agents

  • Views

  • Folders

Note

Other objects added by plugins can support group definitions if the required support is built into the plugin itself, or is added by an additional plugin. However, in general, the objects need to have been designed to allow other plugins to contribute actions in order for the group definition support to be possible.

Where an object does not support group definitions, the groups are inherited from the object’s parent.

Tip

While you can define groups on a view, and in general manage which roles are available to whom on the view, such permissions do not apply to jobs which happen to be displayed in the view, since they might also be encountered in other views (such as All). Permissions granted on a view only affect operations on the view itself, such as changing its list of columns. To control access to a set of jobs in one place, you must move them into a folder.

Clicking on the groups action will bring you to a screen similar to that shown in The initial groups screen for the root of a Jenkins instance.

Figure 33. The initial groups screen for the root of a Jenkins instance

You create a group using the New Group action, which will prompt for the group name as in Creating a new group. Groups can be renamed after they are created. Group names are case sensitive and whitespace is allowed (although it can be confusing). When a group is created with the same name as a group from the parent context, it will hide the group from the parent context; however, the permissions granted to members of the parent group will not be removed.

manage groups create
Figure 34. Creating a new group

After the group has been created, depending on the effective permissions of the user creating the group, one of two screens will be displayed. Users with the Group Configure permission will be shown the Configuration screen (Configuring a group), otherwise the user will be taken straight to the group details screen (Managing a group). This differentiation is key to effectively allowing administrators to delegate some of the administrative responsibility.

manage groups configure
Figure 35. Configuring a group

The Configuration screen is the screen where roles are assigned to a group. By restricting access to this screen, an administrator can ensure that groups cannot be created with permissions greater than those intended by the administrator. The Group Manage permission can then be assigned to specific users in order to allow them to manage the membership of groups without risking them escalating their own permissions by the addition of extra roles to groups they belong to.

Where a role is granted to a group, the role can be granted in one of two modes. By default the role is granted in the Propagates mode. This means that the role will be available in child contexts. By clearing the Propagates checkbox, you can create a pinned role that is only available on the specific object that the group is created on. Some examples where a pinned role can be useful would be:

  • A pinned group creation role at the root allows creating groups at the root while preventing creating groups on jobs.

  • A pinned role in a Maven Project job which allows building the job in order to prevent users from triggering builds of individual modules within the Maven Project job.

  • When using the Folders plugin, a pinned group creation role on a folder allows creating groups in the folder while preventing creating groups on sub-folders.

After the configuration has been saved, the user will be taken to the group details screen (Managing a group).

manage groups created
Figure 36. Managing a group

The Add user/group action is used to add a member to the group. After clicking the action you will be prompted to enter the ID of the user/group to add (Adding a member to a group).

manage groups add ext group
Figure 37. Adding a member to a group

The ID that you enter can be any of:

  • A user ID

  • A group name from the current context

  • A group name from any of the parent contexts

  • An external group ID from the security realm

You can add as many members as you like. A group with multiple members: another group; an external group; and a user shows a group containing three members, one of each type. Circular membership is permitted, i.e. a group can have as a member a group that it is a member of. When adding external groups, the http://JENKINS_URL/whoAmI page can be useful to figure out what the external group IDs actually are.

manage groups add users groups
Figure 38. A group with multiple members: another group; an external group; and a user

Members of a group can be removed by clicking on the member’s remove icon. There is a two-step confirmation for removing a member, and you must confirm the removal by clicking on the confirmation icon. (Removing a member from a group requires two step confirmation)

manage groups remove member
Figure 39. Removing a member from a group requires two step confirmation

Configuring and managing role filters

Role filters can be defined at any point in Jenkins where group definitions are supported and roles are inherited from a parent of that object. By default the following objects will support role filtering:

  • Jobs

  • Maven modules in a Maven project

  • Agents

  • Views

  • Folders

view roles initial
Figure 40. The roles screen for a job within a Jenkins instance, showing the roles, their permissions, and the groups that are assigned to each of the roles for this object and its children

If the object supports role filters then the Filter action will be available from that screen. Clicking on the Filter icon will bring you to a screen similar to that shown in The role filter screen for a job within a Jenkins instance where two roles are being filtered out.

filter roles
Figure 41. The role filter screen for a job within a Jenkins instance where two roles are being filtered out

To filter a role, you just check the corresponding checkbox. When a role is filtered, the role is not available on that object to users unless there is a group defined within that object which the user is a member of and which has been assigned that role.

As mentioned in Configuring and managing roles, administrators can mark certain important roles as being non-filterable. In such a case they will not appear in the Filter Roles page at all.

Example Configurations

This section details some solutions to common configuration requirements. There are other ways to solve these configuration requirements, and there is no one “correct” solution for any of these, so it is better to think of these as examples of what can be done.

Instance owned by one cross-functional team

Background.

This example considers the case where there is one team of users owning the single instance. The team consists of multiple people with different roles. The most common roles in this case are:

  • Developer - will want to be able to kick off builds and modify job configuration

  • Tester - will want to be able to kick off builds, access build artifacts and create SCM tags

  • Manager - will want to be able to manage user roles in the team and create projects

  • Admin - will want to be able to administer the system

  • Read only - will require read access to the system but not be able to download build artifacts

Sample solution.

Create one role for each role of user: developer_role, tester_role, manager_role, admin_role, and reader_role. Additionally, create one group at the root level for each role of user, assigning the corresponding role to each group: developers, testers, managers, admins and readers. Finally add the members to each group as necessary.

Table 2. Sample roles for an instance owned by one cross-functional team
Role Permissions

developer_role

  • Job | Configure

  • Job | Build

tester_role

  • Job | Build

  • Job | Workspace

  • SCM | Tag

manager_role

  • Job | Create

  • Job | Delete

  • View | Create

  • View | Delete

  • View | Configure

  • Group | Manage

admin_role

  • Overall | Administer

reader_role

  • Overall | Read

  • Job | Read

  • Group | View

  • Role | View

Table 3. Sample groups for an instance owned by one cross-functional team
Context Group Role(s) Member(s)

ROOT

developers

  • developer_role

All the developers

ROOT

testers

  • tester_role

All the testers

ROOT

managers

  • manager_role

All the manager(s)

ROOT

admins

  • admin_role

All the administrator(s) of the instance

ROOT

readers

  • reader_role

  • Group: developers

  • Group: testers

  • Group: managers

  • Group: admins

  • All the people requiring read-only access not covered by the above groups

Instance shared by multiple cross-functional teams

Background.

This example considers the case where there are multiple teams of users sharing the single instance. Each team consists of multiple people with different roles. The most common roles in this case are:

  • Build Manager - will want to be able to create, modify and delete job configurations

  • Developer - will want to be able to kick off their builds

  • Tester - will want to be able to kick off their builds, access build artifacts and create SCM tags

  • Manager - will want to be able to manage user roles in their teams

  • Admin - will want to be able to administer the system

  • Read only - any authenticated user will require read access to the system but not be able to download build artifacts

Sample solution.

Create one role for each role of user: builder_role, developer_role, tester_role, manager_role, admin_role, and reader_role. Additionally, create two groups at the root level for the administrators and readers, assigning the corresponding role to each group: admins and readers respectively. Use the folders plugin to create folders for the two teams, and in each folder create groups for the builders, developers, testers and managers of the respective teams. Finally add the members to each group as necessary.

Table 4. Sample roles for an instance shared by multiple cross-functional teams
Role Permissions

builder_role

  • Job | Create

  • Job | Delete

  • Job | Configure

  • Job | Build

  • Job | Workspace

developer_role

  • Job | Build

tester_role

  • Job | Build

  • Job | Workspace

  • SCM | Tag

manager_role

  • Job | Create

  • Job | Delete

  • View | Create

  • View | Delete

  • View | Configure

  • Group | Manage

admin_role

  • Overall | Administer

reader_role

  • Overall | Read

  • Job | Read

  • Group | View

  • Role | View

Table 5. Sample groups for an instance shared by multiple cross-functional teams
Context Group Role(s) Member(s)

ROOT

admins

  • admin_role

All the administrator(s) of the instance

ROOT

readers

  • reader_role

  • System identity: authenticated

Team 1 Folder

builders

  • builder_role

All the build managers in team 1

Team 1 Folder

developers

  • developer_role

All the developers in team 1

Team 1 Folder

testers

  • tester_role

All the testers in team 1

Team 1 Folder

managers

  • manager_role

All the manager(s) of team 1

Team 2 Folder

builders

  • builder_role

All the build managers in team 2

Team 2 Folder

developers

  • developer_role

All the developers in team 2

Team 2 Folder

testers

  • tester_role

All the testers in team 2

Team 2 Folder

managers

  • manager_role

All the manager(s) of team 2

Instance shared by multiple single-function teams

Background.

This example considers the case where there are multiple teams of users sharing the single instance. Each team consists of multiple people with the same role. The most common teams in this case are:

  • Development - responsible for developing a single product

  • SQA - responsible for testing many products

  • DevOps - responsible for deploying many products into production

Sample solution.

Create one role for each role of user: developer_role, sqa_role, devops_role. Then create groups for the sqa and devops teams at the root level assigning the corresponding role to each group: sqa, devops. Use the folders plugin to create folders for each product and within each folder create a developers group assigned the developer_role. Complete the configuration by adding the members of each team as members of the corresponding groups.

Sample roles for an instance shared by multiple single-function teams
Role Permissions

developer_role

  • Job | Create

  • Job | Delete

  • Job | Configure

  • Job | Build

  • Job | Workspace

sqa_role

  • Job | Build

  • Job | Workspace

  • SCM | Tag

devops_role

  • Job | Configure

  • Job | Build

  • Job | Workspace

  • SCM | Tag

Sample groups for an instance shared by multiple single-function teams
Context Group Role(s) Member(s)

ROOT

sqa

  • sqa_role

All the people in the SQA team

ROOT

devops

  • devops_role

All the people in the DevOps team

Product 1 Folder

developers

  • developer_role

All the developers of product 1

Product 2 Folder

developers

  • developer_role

All the developers of product 2

Secret projects on a shared instance

Background.

This example considers the case where you want to have a secret project on a shared instance. The project will be worked on by a small team who should be able to see the project, but everyone outside of that team will be unable to see the project.

Note

If somebody has access to the JENKINS_HOME directory, they will be able to see the secret project. Similarly, by creating an appropriate build script, a non-secret project could gain access to the secret project when building on a file system that the secret project has also built on. If it is absolutely critical that the project remain secret, then it may be best to use a dedicated instance for the project.

For most use cases, however, the project is being kept secret from people who neither have direct access to the JENKINS_HOME directory nor the ability to create build jobs or modify the source code.

Sample solution.

Create a group in the project (or folder) for the team of people who will be working on the secret project. Add all the roles that the team will require. Add all the members of the team. Modify the role filters for the project (or folder) filtering out every role (or all but the administrator role if administrators are to be allowed to view the project).

Note

If you are configuring a secret project to which you will not have access, once you apply the role filters you will be locked out from the project, and it will appear as if there is no such project. See Finding hidden projects for details on how to recover from accidental lock-outs of this type.

It may also be advisable to create a group for admins of the secret project before configuring the role filters if only a subset of those with the overall administrator role are to be able to see the secret project.

Scripting RBAC: CLI commands

Sometimes it is useful to perform large-scale operations on a Jenkins server using scripts. The RBAC plugin provides CLI commands for managing the security setup. It also provides a REST API, which is described in REST API in RBAC plugin.

group-membership

Having defined some roles and groups, a common requirement is to change the list of users authorized in a given group. To support this operation from scripts, the group-membership CLI command is available in plugin versions 4.1 and later. The first argument identifies the group container:

  • root for the root of Jenkins

  • jobname or foldername or foldername/jobname for a job, a folder, or any other item in the Jenkins folder hierarchy that can hold groups

  • viewname or foldername/viewname for a view

  • slavename for an agent node

The second argument is the name of an existing group. With no additional arguments, all the members of the group are printed, one per line. (These “members” might be user IDs, or Jenkins or external group names.) If additional arguments are given, the group membership is set to the listed members (replacing any current members).
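As a sketch, a couple of invocations might look like the following, assuming the standard jenkins-cli.jar client and a group named developers defined at the root (both names are illustrative):

    # list the current members of the "developers" group at the Jenkins root
    $ java -jar jenkins-cli.jar -s $JENKINS_URL group-membership root developers

    # replace the membership with exactly two user IDs
    $ java -jar jenkins-cli.jar -s $JENKINS_URL group-membership root developers alice bob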

create-group

Creating a group can be performed via the create-group CLI command, which is available in plugin versions 4.9 and later. The first argument identifies the group container:

  • root for the root of Jenkins

  • jobname or foldername or foldername/jobname for a job, a folder, or any other item in the Jenkins folder hierarchy that can hold groups

  • viewname or foldername/viewname for a view

  • slavename for an agent node

The second argument is the name of the group to create.

delete-group

Deleting a group can be performed via the delete-group CLI command, which is available in plugin versions 4.9 and later. The first argument identifies the group container:

  • root for the root of Jenkins

  • jobname or foldername or foldername/jobname for a job, a folder, or any other item in the Jenkins folder hierarchy that can hold groups

  • viewname or foldername/viewname for a view

  • slavename for an agent node

The second argument is the name of the group to delete.
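For example, creating and later deleting a group named testers on a folder called team1 (illustrative names) might look like:

    # create the group on the folder
    $ java -jar jenkins-cli.jar -s $JENKINS_URL create-group team1 testers

    # delete it again
    $ java -jar jenkins-cli.jar -s $JENKINS_URL delete-group team1 testers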

group-role-assignments

Assigning or listing group roles can be performed via the group-role-assignments CLI command, which is available in plugin versions 4.9 and later. The first argument identifies the group container:

  • root for the root of Jenkins

  • jobname or foldername or foldername/jobname for a job, a folder, or any other item in the Jenkins folder hierarchy that can hold groups

  • viewname or foldername/viewname for a view

  • slavename for an agent node

The second argument is the name of the group.

With no additional arguments, the current group roles will be printed. By default the roles are printed one per line in the format used to assign them. To print them in a more readable format, use the -e (--expanded-display) command line option.

If additional arguments are given, the group roles are set to the listed roles (replacing any current roles).

Each role follows the format ROLE[,GRANTED_AT[,PROPAGATES]], where GRANTED_AT=0|1|2 (0: current level, 1: child level, 2: grandchild level) and PROPAGATES=true|false. For example: developer,0,true will assign the "developer" role at the current level, and the role will be available in child contexts.
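Putting this together, a hypothetical session for the developers group at the root might look like:

    # print the current role assignments in a readable format
    $ java -jar jenkins-cli.jar -s $JENKINS_URL group-role-assignments root developers --expanded-display

    # replace the assignments: "developer" at the current level, propagating;
    # "tester" pinned to the current level only
    $ java -jar jenkins-cli.jar -s $JENKINS_URL group-role-assignments root developers developer,0,true tester,0,false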

list-groups

Listing existing groups can be performed via the list-groups CLI command, which is available in plugin versions 4.11 and later. The argument identifies the group container:

  • root for the root of Jenkins

  • jobname or foldername or foldername/jobname for a job, a folder, or any other item in the Jenkins folder hierarchy that can hold groups

  • viewname or foldername/viewname for a view

  • slavename for an agent node

REST API in RBAC plugin

The Role-Based Access Control plugin provides endpoints that allow managing groups and roles using POST requests. It is also possible to retrieve the data in various formats using Jenkins’ standard REST API functionality.

Warning
All REST API commands require the POST request type. The commands operate on security settings, so it is highly recommended to configure protection against Cross-Site Request Forgery (CSRF) to prevent hijacking of your Jenkins instance by malicious attackers. Jenkins provides its own CSRF protection functionality, which is described here.

Managing RBAC group containers

This set of commands allows managing groups in various group containers: Jenkins root, folders, jobs and nodes. Endpoint URL examples:

  • ${JENKINS_URL}/groups

  • ${JENKINS_URL}/job/folder1/job/folder2/groups

  • ${JENKINS_URL}/job/folder1/job/myJob/groups

  • ${JENKINS_URL}/computer/node1/groups

Table 6. REST API commands

Command name

Required permissions

Description and parameters

createGroup

Group.CREATE

Creates a new group.
- name, String - name of the group to be created

deleteGroup

Group.DELETE

Deletes a group.
- name, String - name of the group to be deleted

addRoleFilter

Roles.FILTER

Adds a role filter, which filters out roles in the container if those roles are filterable.
- name, String - name of the role which should be filtered

deleteRoleFilter

Roles.FILTER

Deletes a role filter.
- name, String - name of the role which should no longer be filtered
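As a sketch, assuming an administrator authenticating with an API token (which in recent Jenkins versions is exempt from the CSRF crumb requirement), the container-level commands above can be invoked with curl; the group and role names are illustrative:

    # create a group named "devs" at the Jenkins root
    $ curl -X POST -u admin:APITOKEN "${JENKINS_URL}/groups/createGroup?name=devs"

    # filter the "developer" role on a folder (only works if the role is filterable)
    $ curl -X POST -u admin:APITOKEN "${JENKINS_URL}/job/folder1/groups/addRoleFilter?name=developer"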

Table 7. Exported data

Field

Type

Description

groups

Group[]

List of groups in the container. See the structure in Managing RBAC groups individually below.

roleFilters

String[]

Assigned role filters

Managing RBAC groups individually

After creating groups using the commands above you may need to configure the groups. For each group the API endpoint can be accessed by its group URL. Endpoint examples:

  • ${JENKINS_URL}/groups/myGroupName

  • ${JENKINS_URL}/job/folder1/job/myJob/groups/myGroupName

  • ${JENKINS_URL}/computer/node1/groups/myGroupName

Table 8. REST API commands

Command name

Required permissions

Description and parameters

setDescription

Group.MANAGE

Sets the group description.
- description, String - Raw string with the group description

addMember

Group.MANAGE

Adds a new member to the group.
- name, String - User id of the member to be added

removeMember

Group.MANAGE

Removes a member from the group.
- name, String - User id of the member to be removed

grantRole

Group.MANAGE

Grants a role to this group. Overrides the current settings if they exist.
- role, String - ID of the role
- offset, int - Propagation level. 0 - current (e.g. folder), 1 - child, 2 - grand-child, other - error
- inherited, boolean - true if the role should be granted to child items

revokeRole

Group.MANAGE

Removes the role assignment from the group.
- role, String - ID of the role
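Continuing the sketch above, configuring a group named devs on folder1 (illustrative names) might look like:

    # add a member to the group
    $ curl -X POST -u admin:APITOKEN "${JENKINS_URL}/job/folder1/groups/devs/addMember?name=alice"

    # grant the "developer" role at the folder level, propagating to child items
    $ curl -X POST -u admin:APITOKEN "${JENKINS_URL}/job/folder1/groups/devs/grantRole?role=developer&offset=0&inherited=true"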

Table 9. Exported data

Field

Type

Description

name

String

Group name

description

String

Description of the group (raw string)

roles

String[]

List of roles assigned to this group

roleAssignments

RoleAssignment[]

List of all role assignments, including inherited ones. Each role assignment has a role field with the role ID

members

String[]

List of group members (user IDs, inner groups, etc.)

url

String

URL of the group within the container

Global management of RBAC roles

On the top level of Jenkins you can create and manage roles. Endpoint URL for overall role management: ${JENKINS_URL}/roles.

Table 10. REST API commands

Command name

Required permissions

Description and parameters

createRole

Jenkins.ADMINISTER

Creates a new role.
- name, String - name of the role

deleteRole

Jenkins.ADMINISTER

Removes a role from Jenkins.
- name, String - name of the role

createFilterableRole

Jenkins.ADMINISTER

Adds a role to the list of filterable ones.
- name, String - name of the role

deleteFilterableRole

Jenkins.ADMINISTER

Removes a role from the list of filterable ones.
- name, String - name of the role

Table 11. Exported data

Field

Type

Description

roles

Role[]

List of available roles. See Managing RBAC roles individually for the data format.

filterableRoles

String[]

List of roles, which can be filtered in group containers
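A corresponding sketch for the role-level commands above, with an illustrative role name and the same token-based authentication assumption as before:

    # create a role, then make it filterable
    $ curl -X POST -u admin:APITOKEN "${JENKINS_URL}/roles/createRole?name=developer"
    $ curl -X POST -u admin:APITOKEN "${JENKINS_URL}/roles/createFilterableRole?name=developer"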

Managing RBAC roles individually

After creating roles it is possible to manage them individually. Endpoint URL for individual role management: ${JENKINS_URL}/roles/${ROLE_NAME}.

Table 12. REST API commands

Command name

Required permissions

Description and parameters

grantPermissions

Jenkins.ADMINISTER

Grants permissions to the role.
- permissions, String - Comma-separated list of permissions to be granted.
Call example: ${JENKINS_URL}/roles/myRole/grantPermissions?permissions=hudson.model.Item.Discover,hudson.model.Hudson.Administer

revokePermissions

Jenkins.ADMINISTER

Revokes permissions from the role.
- permissions, String - Comma-separated list of permissions to be revoked. See the example above.

Table 13. Exported data

Field

Type

Description

id

String

Role identifier

description

String

Role description

url

String

URL of the role’s page.

grantedPermissions

PermissionProxy[]

Assigned permissions (including local and provisioned ones). All items contain the id field with the permission unique identifier and enabled field, which indicates that the permission can be assigned.

filterable

boolean

Indicates that the role is filterable.
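The exported data can be read through Jenkins’ standard REST API. For example, fetching ${JENKINS_URL}/roles/developer/api/json might return something shaped like the following (field names as in the table above; the values are purely illustrative):

    {
      "id": "developer",
      "description": "Developers can build and configure jobs",
      "url": "roles/developer/",
      "grantedPermissions": [
        {"id": "hudson.model.Item.Build", "enabled": true}
      ],
      "filterable": true
    }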

Troubleshooting

Recovering from a lock-out

There are two ways to recover from a lock-out. Both require that the Jenkins instance be stopped and then restarted after completion of the steps.

The first way removes all the roles that are defined in the system but does not require any manual editing of XML files.

Example 1. Recovering from a lock-out by removing all roles (on Unix-based systems)
  1. Ensure that the Jenkins instance is stopped.

  2. Open a shell

  3. Change into the JENKINS_HOME directory

    $ cd $JENKINS_HOME
  4. Remove/rename the nectar-rbac.xml configuration file

    $ mv nectar-rbac.xml nectar-rbac.xml.old
  5. Start the Jenkins instance.

Example 2. Recovering from a lock-out by removing all roles (on Windows-based systems)
  1. Ensure that the Jenkins instance is stopped.

  2. Open a command prompt

  3. Change into the JENKINS_HOME directory

    C:\> cd %JENKINS_HOME%
  4. Remove/rename the nectar-rbac.xml configuration file

    C:\...\Jenkins> ren nectar-rbac.xml nectar-rbac.xml.old
  5. Start the Jenkins instance.

The second way will not remove any of the roles that are defined in the system but requires manual editing of an XML file.

Example 3. Recovering from a lock-out by manually editing JENKINS_HOME/nectar-rbac.xml
  1. Ensure that the Jenkins instance is stopped.

  2. Open the JENKINS_HOME/nectar-rbac.xml file in a text editor that can edit XML files

    It is a good idea to make a backup copy of the file before changing it.

    The file should look something like General structure of the nectar-rbac.xml file

  3. Add the following line to the role that you want to have the Overall Administer permission

    <permission id="hudson.model.Hudson.Administer"/>
  4. Save the file

  5. Start the Jenkins instance.

General structure of the nectar-rbac.xml file
<?xml version='1.0' encoding='UTF-8'?>
<nectar.plugins.rbac.strategy.RoleMatrixAuthorizationPlugin>
  <configuration class="...">
    ...
    <role name="...">
      <permission id="..."/>
      ...
    </role>
    ...
    <role name="anonymous">
      ...
    </role>
    <role name="authenticated">
      ...
    </role>
    ...
  </configuration>
</nectar.plugins.rbac.strategy.RoleMatrixAuthorizationPlugin>

Completely resetting the configuration

If you want to completely reset all of the Role-Based Access Control plugin’s configuration, an irreversible action, you can use the script console to remove all of the user-defined roles, local groups and role filters on all the objects within your Jenkins instance.

Warning

There is no way to recover the user-defined roles, local groups and role filters after you have reset the configuration, other than by restoring a backup of the complete system, and this may have the side effect of removing any changes that occurred within the system after the configuration was reset.

Only follow this procedure if you are absolutely sure that you want to wipe all of the Role-Based Access Control plugin’s configuration.

If you have been locked out of the system, you will need to follow one of the procedures in Recovering from a lock-out to recover administrative access.

Example 4. Completely resetting the RBAC configuration
  1. Log in to Jenkins using a web browser and open the Script Console from the Manage Jenkins screen.

  2. Type the following into the script text box:

    nectar.plugins.rbac.strategy.RoleMatrixAuthorizationPlugin
    .getInstance().reset()
  3. You should have a screen that looks like Using the script console to completely reset the Role-Based Access Control plugin’s configuration

  4. Click on the Run button. The screen should now look like After successfully resetting the Role-Based Access Control plugin’s configuration via the script console

reset via script console
Figure 42. Using the script console to completely reset the Role-Based Access Control plugin’s configuration
reset via script console complete
Figure 43. After successfully resetting the Role-Based Access Control plugin’s configuration via the script console

Finding hidden projects

You can remove permissions for a project by filtering out roles on that project using a role filter. In general it is best to add groups and roles to a project before applying the role filter, because if you filter out all roles, you will be locked out from accessing the project completely. The strategy of filtering out all roles can be used to create secret projects, but there are times when it is necessary to recover or discover the projects that are hidden.

While there are various workarounds for this situation, the best approach is to make sure that there is at least one administrative role, with all permissions, which is not filterable (see Configuring and managing roles). The Jenkins administrator should grant this role to a group containing themselves, and the secret project will be visible again.

Skip Next Build Plugin

Introduction

The Skip Next Build plugin allows you to skip building a job for a short period of time. While you could achieve something similar by disabling the job from the job configure page, you would need to remember to re-enable the job afterwards.

There are two main use cases for this plugin:

  • If you are going to be taking some external resources that the build requires off-line for maintenance and you don’t want to be annoyed by all the build failure notices.

  • If you are merging a major feature branch and you want to prevent builds until after the merge is completed.

The Skip Next Build plugin was introduced in Nectar 11.10.

Using the Skip Next Build plugin

The plugin adds a Skip builds action to all jobs. When a skip has been applied to the job, the icon will be yellow and the main job page will look something like The main job screen when a skip has been applied. When no skip has been applied, the icon will be green.

skip applied
Figure 44. The main job screen when a skip has been applied

To apply a skip to a folder / job, click on the Skip builds action. This should display a screen similar to Applying a skip to a folder / job.

apply skip
Figure 45. Applying a skip to a folder / job
Note

When a skip is applied to a folder, all jobs within the folder will be skipped.

Select the duration of skip to apply and click the Apply skip button. The main job screen should now have a notice that builds are skipped until the specified time (see The main job screen when a skip has been applied for an example).

To remove a skip from a folder / job, click on the Skip builds action. This should display a screen similar to Removing a skip from a job.

remove skip
Figure 46. Removing a skip from a job

Click on the Remove skip button to remove the skip.

If the skip was not applied directly to the folder / job but is instead either inherited from a parent folder or originating from a skip group, then the screen will look something like this:

skip inherited
Figure 47. Trying to remove a skip from a job where the skip is inherited from a parent folder.

The link(s) in the table of active skips can be used to navigate to the corresponding skip.

Skip groups

Depending on how the jobs in your Jenkins instance have been organized and the reasons for skipping builds, it may be necessary to select a disjoint set of jobs from across the instance for skipping. This functionality is supported using skip groups.

Before you can use the skip groups functionality, it is necessary for the Jenkins administrator to configure the skip groups in the global configuration: Jenkins ▸ Manage Jenkins ▸ Configure System ▸ Skip groups

skip groups global config navigate
Figure 48. Navigating to the Jenkins ▸ Manage Jenkins ▸ Configure System ▸ Skip groups section using the breadcrumb bar’s context menu.

Each skip group must have a unique name. It can be helpful to provide a description so that users understand why the skip has been applied to their jobs.

skip groups global config adding
Figure 49. Adding a skip group to the Jenkins global configuration

You can have multiple skip groups.

skip groups global config multiple
Figure 50. Multiple skip groups can be defined

Once skip groups have been defined, you can configure the jobs and/or folder membership from the job / folder configuration screens.

skip groups job membership
Figure 51. Configuring a job’s skip group membership
skip groups folder membership
Figure 52. Configuring a folder’s skip group membership

When there is at least one skip group defined in a Jenkins instance, the Jenkins ▸ Skip groups page will be enabled.

skip groups root action
Figure 53. A Jenkins instance with some skip groups defined, showing the Jenkins ▸ Skip groups action enabled
skip groups index
Figure 54. The Jenkins ▸ Skip groups page

To manage the skip state of a skip group, you need to navigate to that skip group’s details page.

skip groups details
Figure 55. The details page for a specific skip group

The details page will display the current status of the skip group as well as listing all the items that are direct members of this skip group.

Note
Where a folder is a member of a skip group, the skip group membership will be inherited by all items in the folder.

Advanced usage

The Skip Next Build plugin adds two new permissions:

  • Skip: Apply - this permission is required in order to apply a skip to a job. It is implied by the Overall: Administer permission.

  • Skip: Remove - this permission is required in order to remove a skip from a job. It is implied by the Overall: Administer permission.

The Skip Next Build plugin adds two new CLI operations:

  • applySkip - this operation applies a skip to a job. It takes a single parameter which is the number of hours to skip the job for. If the parameter is outside the range 0 to 24 it will be brought to the nearest value within that range.

  • removeSkip - this operation removes any skip that may have been applied to a job.

Jenkins CLI

The Skip Next Build plugin adds a number of Jenkins CLI commands for controlling skips:

apply-skip

Enables the skip setting on a job. This command takes two parameters:

  1. The full name of the job

  2. The number of hours the skip should be active for.

apply-folder-skip

Enables the skip setting on a folder. This command takes two parameters:

  1. The full name of the folder

  2. The number of hours the skip should be active for.

skip-group-on

Enables the skip setting on a skip group. This command takes two parameters:

  1. The name of the skip group

  2. The number of hours the skip should be active for.

remove-skip

Removes the currently active skip setting from a job. This command takes only one parameter: the full name of the job.

remove-folder-skip

Removes the currently active skip setting from a folder. This command takes only one parameter: the full name of the folder.

skip-group-off

Removes the currently active skip setting from a skip group. This command takes only one parameter: the name of the skip group.
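As a sketch, assuming the standard jenkins-cli.jar client and illustrative item names, a maintenance window might be handled like this:

    # skip builds of a job for 4 hours, then remove the skip early
    $ java -jar jenkins-cli.jar -s $JENKINS_URL apply-skip "folder1/my-job" 4
    $ java -jar jenkins-cli.jar -s $JENKINS_URL remove-skip "folder1/my-job"

    # turn a skip group on for 2 hours
    $ java -jar jenkins-cli.jar -s $JENKINS_URL skip-group-on "maintenance-window" 2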

Template Plugin

Introduction

The template plugin gives the Jenkins administrator the opportunity to provide users a simplified and directed experience for configuring jobs, in “domain specific” terms that make sense to your organization. With plain-vanilla Jenkins, users can get confused with a large number of advanced options and switches that they don’t need, resulting in less than optimal user experience and misconfigured jobs. The template plugin is useful in situations like this, because it allows the administrator to define a few parameters that users need to fill in, while the administrator controls precisely how these map to the full general configuration, in one place. In this way, most users are shielded from seeing more of Jenkins than they need to, and the administrator can enforce consistency, best practices, and enterprise rules without overhead.

Another typical use case of the template plugin is when you have a large number of similar-looking jobs that only differ in a few aspects, such as jobs that build different branches of the same product, or teams building a large number of components in a uniform way. When you change a template, all the uses of that template get updated instantaneously, so the administrator does not have to manually update a large set of jobs to fix a shell script, nor rely on the Groovy console for bulk updates.

Yet another way to think of the template plugin is that it’s like writing a plugin without writing a program. If you think about the Ant plugin, for example, it lets you configure Ant invocation with a number of text fields that talk in the domain-specific terms of Ant (referencing targets, properties, VM options, etc.), as opposed to the single text area that is the shell script. Similarly with the Maven plugin, you get a whole project type specifically for invoking Maven, instead of using the generic freestyle project and invoking Maven from it. With the template plugin, you can do these things without actually writing code (or writing only some small Groovy snippets).

Tutorial: Hello world builder

Before we dive into gory details, let’s create and use a simple template. In this tutorial, we’ll create a template build step that says “hello”.

Define a template

With this plugin enabled, when you click New Job among other options you see several kinds of templates you can create. We want to create a template that we can use as a builder, akin to “Execute shell script”, “Execute Windows batch file”, or “Invoke Ant”, that you can use in configuring freestyle projects. So we choose “builder template”.

The name you enter here will be used as an identifier referred to from jobs using the template, as well as a default display name. There are no particular characters that are banned from this name, but it is good practice to use simple punctuation and no spaces. Just as with jobs, you can set a more pleasant display name later.

In this tutorial, we name the template “Say Hello World”, and click “OK.”

The next page that appears is the configuration page for this template. There are two main things to configure here: attributes and a transformer.

When you define a template, you first ask yourself “What do I want my users to enter when they use my template?” The answer to that question becomes attributes. In this hello world builder, we want the user to configure who we are saying hello to, so we define one attribute named “target” for this purpose. The user should see the single text field for this, so we choose the type accordingly. The display name and inline help are self-explanatory. They control what the user will see when editing freestyle projects to use our new builder.

The second question you should ask when you define a template is “How does it execute?” (Or, more generally, “How does it map to the terms Jenkins understands?”) The answer to that question becomes the transformer. In this tutorial, our builder will turn into a shell script that says hello. (Your real template would probably turn into a shell script that gets some real work done—or your template can translate into any other builders, but see the rest of the user guide for that.) So we’ll choose “generate a shell script to execute via Groovy”.

In the text area, we’ll enter the shell script that this build step is going to execute, but with expressions here and there of the form ${…​} (because this is a Groovy template). ${target} refers to the actual value of the target attribute we defined above, and ${build.fullDisplayName} is a Groovy expression to access the getFullDisplayName() method (which returns the full name of the build) of the build object, which refers to the current build in progress. ${build.fullDisplayName} needs to be quoted because this is going to look like test #1, and # gets interpreted by shell as a comment sign unless you quote it.
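For reference, the script body entered in the text area for this tutorial would look something like the following (this matches the console output shown later in the tutorial):

    echo Hello to ${target} from "${build.fullDisplayName}"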

Configuring a hello world builder captures this template configuration. When you are done, click “Save” to save this template. Your template is now ready.

hello world template
Figure 56. Configuring a hello world builder

Use a template

Now that we have our first builder template, let’s create a freestyle project that actually uses it. Go back to the Jenkins top page, and create a new freestyle project. You’ll be taken to the configuration page. This part of Jenkins should already be familiar to you.

When you click “add build step”, you should see the newly created “Say Hello World” builder. You click it, and you see the say hello world builder added to your project. You also see the “target” attribute you defined in the configuration page, as depicted in Using a hello world builder.

use build template
Figure 57. Using a hello world builder

I configured this project to say hello to "Kohsuke". Click save and schedule a new build. In its console output, you should see something like this:

Example console log
[workspace] $ /bin/sh -xe /tmp/hudson2322666548865862105.sh
+ echo Hello to Kohsuke from test #1
Hello to Kohsuke from test #1
Finished: SUCCESS

So the template is running as expected.

Changing the template definition

Let’s change the definition of the template, and see how it affects the instances.

We’ll go back to the template definition by going back to the top page, clicking the “Templates” link on the left, and clicking the configuration icon on the right for the “Say Hello World” template.

Instead of saying hello, we will now make it say good evening. Click “Save” to save this new definition:

update hello world builder
Figure 58. Updating hello world builder template

Now, when you update the template definition, all the uses of this template automatically reflect the change you made. So without revisiting the configuration page of the freestyle job, let’s simply schedule another build and see its console output:

Example console log after template updated
[workspace] $ /bin/sh -xe /tmp/hudson2322666548865862105.sh
+ echo Good evening to Kohsuke from test #2
Good evening to Kohsuke from test #2
Finished: SUCCESS

Our change was indeed reflected in the output.

Tutorial: job template

In the second tutorial, we look at a more complex template that "templatizes" a whole job.

To set the context for this tutorial, look at Jenkins-on-Jenkins (Jenkins on Jenkins), the Jenkins server that the Jenkins community runs for itself. On this Jenkins, there are a large number of jobs that build the community-maintained Jenkins plugins:

j on j
Figure 59. Jenkins on Jenkins

The actual configurations of all these jobs are almost identical. They each check out a Git repository, run mvn -e clean install, send an email at the end of the build, and update JIRA issues that are discovered in the commit messages.

Each time we add a new plugin job, we copy from one of the existing jobs, change the name, and then call it done. But if tomorrow we want to install a new plugin (say we start sending the results to IRC via the IRC plugin) and make a mass change to all the jobs, that’s hopelessly tedious.

This is a prime use case for the template plugin. So let’s see how we could turn jobs like these into a template.

Creating a template

First, we go to the top of the page, click the New Job link, and select Job Template and provide a name. If our template were for building Jenkins plugins, we might call it “Jenkins Plugin”.

Note: Be aware that, by default, there is an attribute already created that cannot be deleted. It is Name and refers to the name of the job item which will implement this job template. Do not override it; leave it as it is.

As with the previous tutorial, we just ask ourselves “What does the user [such as a plugin developer] need to configure?”, and in our case it involves just one question: whether the sources are in Git or still in Subversion. (Aside from the name attribute, which is there by default.)

So in the attribute, we define the “scm” attribute to be a choice between Git and Subversion as shown in "SCM" attribute in the template:

scm attr
Figure 60. "SCM" attribute in the template

Further down, we configure the template to refer to jobs as “plugins.” While this change is purely cosmetic and only affects what’s shown in the HTML for humans, it nonetheless makes it easier for users to understand, because in this context, “plugin” is a well-defined technical term with a specific meaning, while “job” is not.

job template pronoun
Figure 61. Changing the pronoun of the template

Defining transformation

Unlike the previous tutorial where we defined a builder template in terms of the transformation into a shell script that we execute, defining a transformation for a job template is more complicated.

The idea here is that we need to tell the template plugin how our model definition (that consists of the name attribute and the scm attribute) maps to the actual job definition that Jenkins understands (in our case, the Maven2 job type). We do this by defining a transformation from the model into the XML persisted form of the Maven2 job type. The template plugin uses this transformation to obtain the XML, then loads it into Jenkins so that Jenkins understands how to run it.

So in this tutorial, we use “Jelly based transformation”. Jelly is a template language that generates XML. It is similar to JSTL (the tag library for JavaServer Pages), and as long as your transformation doesn’t involve complex computation, it’s a reasonable choice.

The best way to define this transformation is by doing “programming by example” —that is, we manually configure one representative project, then obtain its XML representation. Let us assume you are starting with a Maven project which checks out code from a Git server. We’ll obtain its XML representation by simply adding config.xml to the job’s URL: http://your-jenkins/job/this-job/config.xml. (Some web browsers fail to display the actual XML when browsing such URLs. Try using View Source or Save functions in such cases, or use a separate download tool like curl or wget.) Later we will see that there is a shortcut, the Load Prototype button (currently only supported for Groovy transformations).

Next, we take this XML and insert variables where appropriate. For example, you see an occurrence of some-repo in the Git configuration, so we replace that with ${name} as below:

<scm class="hudson.plugins.git.GitSCM">
  <userRemoteConfigs>
    <hudson.plugins.git.UserRemoteConfig>
      ...
      <url>git://your-server/${name}.git</url>
    </hudson.plugins.git.UserRemoteConfig>
  </userRemoteConfigs>
  ...
</scm>

There’s one more reference to some-repo down in the root module, but this is an inferred value. (We can tell because we are never asked to enter this information in the configuration page.) So we can simply remove the entire <rootModule> element. Unless you know the internals of Jenkins, this is a bit of trial and error. Similarly, if you want to keep the XML concise, you can remove a large number of elements from the configuration XML and Jenkins will still read it and provide default values. (Again, this involves a bit of trial and error in the general case, although there are some rules of thumb. For example, boolean fields are always safe to remove, as are most string fields.)

We also need another XML from a plugin that’s building in Subversion to complete this transformation. You can spot the section that configures Subversion, so we do the same variable insertion, and the result is as follows:

<scm class="hudson.scm.SubversionSCM">
  <locations>
    <hudson.scm.SubversionSCM_-ModuleLocation>
      <remote>https://your-server/trunk/${name}</remote>
    </hudson.scm.SubversionSCM_-ModuleLocation>
  </locations>
  ...
</scm>

Now, we want to switch between two fragments depending on the value of the scm attribute. To do this, we use a tag library from Jelly that does the switch/case equivalent, called choose/when (XSLT has the same elements that do the same thing.) So where we had the scm tag, we add the following fragment:

<j:choose xmlns:j="jelly:core">
  <j:when test="${scm=='git'}">
    <scm class="hudson.plugins.git.GitSCM">
      <configVersion>2</configVersion>
      <userRemoteConfigs>
        <hudson.plugins.git.UserRemoteConfig>
          <name>origin</name>
          <refspec>+refs/heads/*:refs/remotes/origin/*</refspec>
          <url>git://your-server/${name}.git</url>
        </hudson.plugins.git.UserRemoteConfig>
      </userRemoteConfigs>
      <branches>
        <hudson.plugins.git.BranchSpec>
          <name>master</name>
        </hudson.plugins.git.BranchSpec>
      </branches>
      <buildChooser class="hudson.plugins.git.util.DefaultBuildChooser"/>
      <gitTool>Default</gitTool>
      <submoduleCfg class="list"/>
    </scm>
  </j:when>
  <j:when test="${scm=='svn'}">
    <scm class="hudson.scm.SubversionSCM">
      <locations>
        <hudson.scm.SubversionSCM_-ModuleLocation>
          <remote>https://your-server/trunk/${name}</remote>
        </hudson.scm.SubversionSCM_-ModuleLocation>
      </locations>
      <workspaceUpdater class="hudson.scm.subversion.UpdateUpdater"/>
    </scm>
  </j:when>
</j:choose>

See Jelly tag library reference for more about these tags. The entire transformation Jelly script will look something like this:

<?xml version='1.0' encoding='UTF-8'?>
<maven2-moduleset>
  <actions/>
  <description></description>
  <properties>
    <hudson.plugins.disk__usage.DiskUsageProperty/>
  </properties>

<j:choose xmlns:j="jelly:core">
  <j:when test="${scm=='git'}">
    <scm class="hudson.plugins.git.GitSCM">
      <configVersion>2</configVersion>
      <userRemoteConfigs>
        <hudson.plugins.git.UserRemoteConfig>
          <name>origin</name>
          <refspec>+refs/heads/*:refs/remotes/origin/*</refspec>
          <url>git://your-server/${name}.git</url>
        </hudson.plugins.git.UserRemoteConfig>
      </userRemoteConfigs>
      <branches>
        <hudson.plugins.git.BranchSpec>
          <name>master</name>
        </hudson.plugins.git.BranchSpec>
      </branches>
      <buildChooser class="hudson.plugins.git.util.DefaultBuildChooser"/>
      <gitTool>Default</gitTool>
      <submoduleCfg class="list"/>
    </scm>
  </j:when>
  <j:when test="${scm=='svn'}">
    <scm class="hudson.scm.SubversionSCM">
      <locations>
        <hudson.scm.SubversionSCM_-ModuleLocation>
          <remote>https://your-server/trunk/${name}</remote>
        </hudson.scm.SubversionSCM_-ModuleLocation>
      </locations>
      <workspaceUpdater class="hudson.scm.subversion.UpdateUpdater"/>
    </scm>
  </j:when>
</j:choose>

  <canRoam>true</canRoam>
  <jdk>(Default)</jdk>
  <triggers class="vector">
    <hudson.triggers.SCMTrigger>
      <spec>*/15 * * * *</spec>
    </hudson.triggers.SCMTrigger>
  </triggers>
  <goals>-e clean install</goals>
  <defaultGoals>package</defaultGoals>
  <mavenName>maven-3.0.3</mavenName>
  <mavenValidationLevel>0</mavenValidationLevel>
  <aggregatorStyleBuild>true</aggregatorStyleBuild>
  <reporters/>
  <publishers />
  <buildWrappers/>
</maven2-moduleset>

Now we save the configuration, and our job template is ready.

Go back to the top page, and click “New Job”. You’ll see the newly created template as one of the options:

tutorial create plugin job
Figure 62. Creating a new plugin job

So we type in the name of the plugin that we want to build, say “git”, then click OK. The configuration page of this job cannot be any simpler:

tutorial configure plugin job
Figure 63. Configuring a new plugin job

The git plugin lives in the Git repository, so choose that, and save. Let’s schedule a new build, and see it build the plugin.

Concepts

A template is a reusable item of configuration that only exposes a minimal set of parameters to users, and is internally translated to a “standard” Jenkins configuration. Thus, a template can be used to expose a simplified, abstract, restricted representation of Jenkins job configuration to users. Changes to a template immediately apply to everywhere the template is used.

A template is conceptually broken down into a few pieces:

Model

A model defines a set of attributes that constitutes a template. Roughly speaking, you can think of this as a class of an object-oriented programming language. For example, if you are creating a template for your organization’s standard process of running a code coverage, you might create a model that has attributes like “packages to obtain coverage”, “tests to run.”

Attribute

An attribute defines a variable, what kind of data it represents, and how it gets presented to the users. This is somewhat akin to a field definition in a class definition.

Instance

An instance is a use of a model. It supplies concrete values to the attributes defined in the template. Roughly speaking, the model-to-instance relationship in the template plugin is like the class-to-object relationship in a programming language. You can create many instances from a single template.

Transformer

A transformer is a process of taking an instance and mapping it into the “standard” Jenkins configuration, so that the rest of Jenkins understands this and can execute it. This can be logically thought of as a function.

When we say “template”, it is actually a combination of a model and a transformer. And each facet of these concepts is individually extensible --- for example, additional transformers can be developed to let you express the transformation in the language of your choice.

Models

Models are hierarchical and can extend other models. This can be used to create common base templates, which are not designed for direct use and will not be instantiated by users, or to create concrete templates by specializing them for a dedicated task. When one model inherits another, it inherits all the attributes that are defined in the base model.

For example, a common utility template can handle connection and authentication to enterprise infrastructure, and child templates can use this basis to expose high-level tasks, like deploying a new release.

Template hierarchy is a useful way to manage templates of growing complexity and to share attribute definitions between templates.

Templates can be configured to be instantiable by users, or to serve only as a way to encapsulate configuration for use by other templates through inheritance.

Templates have a dedicated entry in the top page, where the administrator will set up and maintain the templates.

All templates are based on generating Jenkins configuration from a reduced set of attributes that the user will provide to apply the template for a dedicated job. This major reduction in complexity makes configuring new jobs simple for new users and reduces the risk of misconfiguration.

Attributes

Each attribute represents a fragment of data that constitutes a template, along with how it gets presented in the UI to the users. For example, you might define an attribute called "product code", which stores a string represented in the single-line text field.

Each attribute consists of the following data model:

ID

IDs uniquely identify an attribute among the other attributes of the same model. The ID is used to refer to an attribute in transformers and other places, so the characters you can use in an ID are restricted.

Display Name

Display name is the human-readable name of the attribute. It is used, for example, when Jenkins presents the configuration page to edit an instance.

Type

The type of the attribute primarily determines the type of the data that this attribute holds (such as String or boolean), but it also controls how the data is represented in the UI (for example, is it a single-line text field or a multi-line text area?). For the discussion of available attribute types, see Attribute Type Reference.

Attribute Type Reference

The template plugin ships with the following distinct attribute types. Attribute types are also extensible, allowing additional plugins to bring in more types.

Text field

This attribute type holds an arbitrary string value that can be edited through a single line text field. It is the most general purpose attribute type.

Transformers access these values as java.lang.String values.

Text area

This attribute type holds an arbitrary string value that can be edited through a multi-line text area. This is suitable for entering large chunks of text.

Transformers access these values as java.lang.String values.

Checkbox

Checkbox holds a boolean value that can be edited through an HTML checkbox.

Transformers access these values as boolean values.

Nested Auxiliary Models

A nested auxiliary model allows a model to compose other models. See Auxiliary Template for more discussion.

It supports four different modes.

Single value

The attribute will hold one and only one instance of the specific auxiliary model. In the UI, the configuration of the nested auxiliary model is presented as-is, inline. Let’s say you created an auxiliary model that represents the access credential to your internal package repository, called “PackServer Credential.” You then create an “Acme Corp Pack Build” template that builds and deploys a binary to this package server. This template would want to have an attribute that holds a “Credential” auxiliary model as a nested value in the single mode, since you need one and only one credential for each job.

Single value (choice of subtypes)

The attribute will hold one instance of the auxiliary model (if instantiable), or one of its (instantiable) subtypes.

List of values

The attribute will hold a list of instances. All the instances must come from a single auxiliary model, but users can add as many or as few of them as they wish. A good example of this is the JDK configurations in the system configuration page (where JDK can be thought of as an auxiliary model, which in turn has attributes like name, home, installer, etc.). The UI of this mode will be just like that of the JDK configuration, with buttons to add/remove/reorder instances of the auxiliary model.

List of values, including subtypes

This is similar to the “list of values” mode above, except that the user can choose from all the instantiable subtypes of the specific nested model type. A good example of this is the build step configuration in a free-style project, where you can add any number of build steps (which are all subtypes of the base model called “Builder”.) This mode presents a drop-down menu button that allows the user to choose the specific auxiliary model type to instantiate.

Transformers access these values as an instance (or as a java.util.List of instances) of auxiliary models. In the pack server / credential example above, you could access it like i.credential.username (where i is the instance of the outer model, credential is the ID of the attribute, and username is the ID of an attribute of the “Credential” auxiliary model.)

In the modes allowing a choice of subtypes, you can use attrName.model.id (or elementOfAttr.model.id in the case of a list) to identify the actual auxiliary model chosen. (As usual, before the template instance has been saved, attrName will be null, so you must check for this first.) The ID will be a fully-qualified path name to the model, such as topFolder/subFolder/my-aux-model. However, rather than listing all possible subtypes using if-then or switch statements in your transformer, you can use Computed Value to obtain some information from the chosen model in a consistent format. (The analogy in object-oriented programming would be using a virtual method rather than querying the implementation type of an object.)

Select a string among many

This attribute type is like an enumeration type. The designer of a model can specify a list of name/value pairs, and the user of the model gets to choose one of the pairs as the value. The UI will show the name part of the pair, and the transformers will see the value part of the pair, which mimics how the <select> tag works in HTML.

The designer of a model can choose among different UI presentations for the attribute.

Computed Value

This is a special type of attribute, whose value is a result of evaluating a JEXL expression, which is specified during the model definition. The value isn’t editable in the instance configuration screen. This is somewhat akin to defining a getter method (whereas the regular attribute definitions are like defining fields); it lets you specify a program that computes the value, typically based on the value of other attributes.

There are a number of use cases for such a property:

  • To simplify the transformer definition. If your transformer keeps performing a similar manipulation of attribute values over and over, it is often better to define that manipulation as a computed attribute (for example, say you define an attribute for a Subversion repository URL, and you want its last path component accessible as a separate attribute; see the sketch after this list.)

  • For polymorphic behavior in auxiliary models. Imagine you are defining a model for assembling a book from its chapters, where each chapter can be either a pre-existing PDF in your content repository or a DocBook file that needs a transformation. You define this as a base abstract auxiliary model called “Chapter” and then two concrete sub-models. Those sub-models likely define different sets of parameters, but it is often convenient for both to have an attribute of the same name (say shellScript, which evaluates to the shell script that produces the PDF) that yields very different values computed from the other attributes.
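
As a concrete illustration of the first use case, a computed attribute extracting the last path component of a Subversion URL might be defined with a JEXL expression along these lines (a minimal sketch, assuming a text-field attribute with the hypothetical ID svnUrl):

svnUrl == null ? null : svnUrl.substring(svnUrl.lastIndexOf('/') + 1)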

At runtime, the actual type of the attribute is opaque; use attrName.toString() to get a string representation of the value within Groovy code. Normally this is done implicitly by a template transformer, for example in ${attrName}.

Heterogeneous components from descriptors

This advanced attribute type allows a model to hold a collection of arbitrary Describable instances. The discussion of the Describable class and its role in Jenkins goes beyond the scope of this document, but this allows models to reuse the plethora of information fragments defined in plugins large and small (such as Subversion checkout locations, builders, …)

Of the various attribute types that deal with Describables, this one holds a list of them, and it allows all subtype Describables of the specified base type. A good example of this is the build step configuration in a freestyle project, where users can insert any build steps of various types, so long as they extend the base "build step" type.

Heterogeneous components from descriptors (one instance per descriptor)

This attribute type is a flavor of Heterogeneous components from descriptors. This version also holds a list of Describables, but it only allows up to one instance from one type. A good example of this is the post-build action configuration in a freestyle project, where users can activate an arbitrary number of publishers of various types, but only up to one per type.

Single Describable object

Attributes of this type can hold a single Describable rather than a list.

Select Item

This attribute type holds a reference to an item in Jenkins. “Item” is the generalization of the top level elements of Jenkins, like Maven and freestyle jobs, matrix jobs, folders, etc.

This attribute type can be further constrained by the kind of item it can refer to. This can be done in terms of the job type in Jenkins, or the template it is created from. For example, if you are modeling a test job to be run after a build job, your test job model should refer not just to arbitrary items in Jenkins, but items created from the build job model.

At runtime, this attribute is accessed as an instance of hudson.model.Item.

Select Tool Installation

This attribute type holds a reference to a tool installation, which is a specifically configured instance of JDK, Maven, Ant, etc. Plugins that you are using might have added other types.

When you define an attribute, you’ll choose the specific type of the tool (say, Ant), and instances will choose one of the configured installations from this specific kind (say Ant1.6 and Ant1.7, assuming that your system configuration has these two Ant installations configured.)

Most often attributes of this type are used in job and builder templates. For example, if you define the attribute called jdk in a job template, your Groovy transformer can use the following code to inject the chosen JDK into the job configuration. Most other tool installations (including Ant and Maven) are referenced from within the specific Builders. You can figure this out through programming-by-example as described in the tutorial.

<project>
  <% if (jdk != null) { %>
  <jdk><%= jdk.name %></jdk>
  <% } %>
  ...
</project>

At runtime, this attribute is accessed as an instance of hudson.tools.ToolInstallation. As you see above, a ToolInstallation is normally referenced through its name property.

Select Credentials

This attribute type holds a reference to credentials defined by the Credentials plugin. The control shows a pull-down listing the available credentials.

This attribute type can be further constrained by the kind of credentials it can refer to. For example, you can restrict the credentials to username/password combinations.

If the template instance is inside a folder, the user will be offered credentials defined in that folder, parent folders, or globally in Jenkins.

At runtime, this attribute is accessed as an instance of com.cloudbees.plugins.credentials.common.IdCredentials, or a more specific type if you have requested one. It would normally be used in XML configuration by calling getId() to retrieve an ID, typically stored in a field called credentialsId.
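
For instance, a Groovy transformer might inject the chosen credentials into a job configuration like the following sketch, assuming an attribute with the hypothetical ID creds (remember that attributes are null before the instance is first saved):

<% if (creds != null) { %>
<credentialsId><%= creds.id %></credentialsId>
<% } %>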

Builder Template

Often, developers create build steps as shell scripts to address enterprise requirements, for custom deployments, packaging, notification, QA integration, etc. The same script is used across a large set of jobs, differing only by a few attributes. Maintaining such scripts and avoiding copy/paste mistakes can be a nightmare on large Jenkins installations. The builder template creates a new type of builder, eliminating the need to copy a script and highlighting the things users need to fill in.

A similar goal can be achieved by custom plugin development. A builder template, compared to a custom plugin, has the following trade-offs:

  • Writing a plugin requires a significant one-time learning cost, whereas a simple template can be developed rapidly. So for simple things, templates tend to get the job done more quickly.

  • A template can be created, modified, or removed without restarting Jenkins.

  • You get a lot more tool assistance for plugin development, thanks to the great IDEs available for Java. So for complex things, plugins tend to get the job done more quickly and elegantly.

Compared to job templates, builder templates tend to be more flexible, as the person configuring jobs still has substantial freedom to control how it works. A builder template is also the easiest kind of template to create, especially in combination with the specialized transformers that generate shell scripts. This makes builder templates the best entry point to the template plugin.

Creating a new Builder Template

The new item wizard offers a few options, including that of creating a new Builder Template. The template configuration allows the administrator to define the execution details of any custom tool, which will be exposed to users as a new build step.

Defining a Transformer

The transformer controls how your builder template maps to the actual builder that carries out the build.

One way to define a transformer is to use one of the standard transformers that takes an instance and produces XML. (In this case, the XML should match that saved for a build step in a job configuration.) As we discussed in the tutorial, a good way to do this is programming by example, where you manually create a freestyle project and configure it, then observe what XML it is producing.

The other way to do this, which is often preferable, is to use one of the specialized transformers that only work with builder templates, which we’ll discuss below.

Generate shell script via Jelly/Groovy

For shell scripts, a simpler way to define a builder template is to generate the script using Jelly or Groovy to process the attributes and produce the appropriate script. These specialized transformers eliminate the need for “programming by example” by focusing on the shell scripts.

See the transformer reference section for more details about the general syntax of Jelly/Groovy template, as well as available variable bindings.

The following screenshot demonstrates a Groovy-generated shell script that runs the du command on an arbitrary user home directory. The ${…} in the text should not be mistaken for shell variable references. These Groovy expressions are expanded during the transformation, before the script is passed to the shell.

Such templates are easily created by a Unix administrator to automate maintenance processes, and can be used to give users the ability to run tools without the risk of allowing arbitrary commands to be executed.

template script
Figure 64. Generate Shell Script via Jelly/Groovy
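
The transformer body behind such a screenshot might look like the following sketch, assuming a text-field attribute with the hypothetical ID userHome:

# The userHome reference below is a Groovy expression; it is expanded
# by the transformer before this script ever reaches the shell.
du -sh ${userHome}
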
Predefine variables for shell/batch script

An alternate transformer option, Sets some variables using Groovy then runs a shell/batch script, is also available. Unlike the previous transformer, the shell (or Windows batch) script section is not processed using any template language. Instead, your template attributes are directly available as shell variables, and you can optionally run a Groovy program to define other shell variables (typically based on the Jenkins build object, as in Template evaluation timing).

For example, suppose you have defined an auxiliary template for compiler options (Auxiliary Template has more).

compiler option
Figure 65. Defining an auxiliary template for compiler options

Now if you want to collect a number of options in a builder template and produce an options variable suitable for interpolation in a shell script, you can use a brief Groovy snippet to process the options into a flat string, as sketched after the figures below.

compiler template attributes
Figure 66. Defining attributes for a compiler builder template
compiler template transformer
Figure 67. Defining a transformer for a compiler builder template
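
The Groovy script section of such a transformer might look like this sketch (attribute IDs are hypothetical: opts is a list-of-values attribute whose auxiliary instances expose a flag attribute):

// flatten the structured compiler options into one shell variable
options = (opts ?: []).collect { it.flag }.join(' ')

The shell/batch script section can then simply reference $options.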

When this builder template is used in a job definition, compiler options may be defined structurally.

compiler template usage
Figure 68. Using a compiler builder template

The resulting build will run: cc -O3 -g --sysroot=/usr/local *.c

Template evaluation timing

Another key difference with these specialized transformers is the timing of template evaluation. The general-purpose transformers perform transformation asynchronously from the execution of builds (roughly speaking, think of it as transforming right after the job is configured), whereas these shell script transformers run during the build.

This allows you to perform contextual transformations that depend on the parameters and conditions of the in-progress build (such as the changes in the source code, or what started it).

To do so, use the build variable from within your template to access the AbstractBuild object that represents the current build in progress.
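
For example, the Groovy section of such a transformer might derive shell variables from the build in progress (a sketch; the variable names are arbitrary):

// "build" is the hudson.model.AbstractBuild for the in-progress build
jobName = build.project.name   // name of the templatized job
buildNumber = build.number     // number of the current build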

Tip

One special use for a builder template is to use Groovy to compute some variables, and then make them available to subsequent build steps—without actually running anything itself. The Sets some variables using Groovy then runs a shell/batch script transformer can be used for that purpose. Just leave Shell/Batch Script empty, and use something like the following in Groovy Script:

// build up the set of environment variables to expose
e = new hudson.EnvVars()
e.put('VAR1', 'val1')
e.put('VAR2', 'val2')
// contribute them to the remainder of the build
build.environments.add(hudson.model.Environment.create(e))

These variables can now be used like any others in subsequent build steps, e.g. $VAR1 from Execute shell.

Publisher Template

Another template type closely related to builder templates is the publisher template. The only difference is that it creates Publisher instances (“post-build actions” in the UI), rather than Builder instances (“build steps” in the UI).

Unlike builder templates, publisher templates do not offer any specialized transformers. You can only use the generic Groovy template transformation (or its Jelly equivalent). This is because there is no catch-all publisher analogous to the shell/batch script builders.

Job Templates

In many enterprise situations, you end up with multiple jobs that differ by only a few aspects, such as where the code is checked out from, or the QA servers that are used. Job templates can be used to represent this model in enterprise development by letting projects only define the minimal variable configuration to apply to the model. Thus, a job template offers a higher level of abstraction, as it templatizes the definition of an entire job.

One typical use case of this is that you have lots of components that are written and built the same way. The open-source Jenkins project, for example, has a large number of plugins that are built precisely in the same way. It makes sense to define a template for this, as we did in the tutorial, to centralize the configuration and make it easier to create a new job for a new plugin.

Another typical use case is that you have multiple branches of the same project, actively developed in parallel. Such jobs tend to only differ in version control configuration, and making a template out of them allows you to rapidly set up CI for new branches.

Another use case of the job template is where there is a gap in Jenkins expertise between the Jenkins experts in your organization and the developers at large. Templates can be used to “dumb down” the user interface and steer developers toward the best practices the experts define, improving overall productivity.

A custom job type can be also developed as a plugin, but this is one of the most complex extension points in Jenkins. For this reason, we recommend you start with a job template. Only after you start feeling the constraints of the template plugin should you consider rewriting your template as a custom plugin.

Configuration

Job templates need to produce a full job XML definition as the result of the transformation. As discussed in the tutorial, often the simplest way to get this done is “programming by example”: you manually create a project (or a few), configure it with all the desired options, retrieve the config.xml file by appending config.xml to the job URL, and then insert expressions and statements where actual values from instances need to be reflected.

Tip

When using the Groovy template transformation, you can save time by using the Load Prototype feature to load a named job’s config.xml directly into your transformer input. (Any special characters like $ will automatically be escaped for you.)

Warning

When a job is initially created from template, all of its attributes will be null until the user configures it and clicks Save. This means that your transformer must be prepared to accept null attribute values. Jelly scripts usually have an easy time with this since the JEXL expression language treats attr.prop as null if attr is null. Groovy scripts can use the safe navigation operator (attr?.prop). Or either kind of script can explicitly check for null and conditionally insert an XML block:

Null attribute check in Jelly scripts
<j:if test="${attr != null}">
  <somexml>${attr.prop}</somexml>
</j:if>
Null attribute check in Groovy scripts
<% if (attr != null) { %>
  <somexml>${attr.prop}</somexml>
<% } %>
template job
Figure 69. Job Templates

A job template also maintains a list of the projects using it, so that an administrator can quickly check which projects follow the standardized project structure.

Tip

There is no special trick to create job templates whose instances have parameters (i.e., per-build variables chosen by the user initiating the build). You just need to mentally separate parameters from template attributes, since they are expanded at different times. The Load Prototype feature helps ensure that parameter expansions are escaped and not misinterpreted as template attribute expansions. This blog post goes into detail.

Folders Template

Folders are another CloudBees Jenkins Enterprise feature, described in another document. They allow you to group related jobs and organize them hierarchically, as you would with files and directories on a file system.

As a container of jobs, a folder template is a useful way to define a set of jobs that follow enterprise rules and best practices. For example, a single folder template could instantly provision a set of jobs that implement your organization’s best practices: (1) a CI job that builds the tip of the repository every time a commit is made, (2) a nightly continuous-inspection job that populates a Sonar server with code-quality metrics, and (3) a continuous-deployment job that pushes successful builds to a demo server for dogfooding.

A folder template can also be useful to clarify the “flavor” of a folder and how it is supposed to be used. For example, you might have a template for a product, which contains branch folders that are templates themselves (say, because you need multiple jobs for one branch.) Such modeling lets you capture the relationships between your products as model attributes, which can then be translated into lower-level configuration, such as trigger relationships.

What a folder template does and does not control

A folder template controls the configuration of a folder itself (that is, everything that appears in the configuration page of the folder), such as the name, the icon, views, and access control matrix.

A folder template also allows you to specify a set of activities that are to be performed when a new instance of the template is created, such as creating jobs inside the new folder, as can be seen in the screenshot below:

template folders
Figure 70. Folder Templates

But beyond the initial creation, jobs inside a folder aren’t controlled by templates. So with sufficient permissions, users can add/remove jobs and the template change will not void those changes. If this is undesirable and you prefer to lock things down further, you can use job templates for jobs in a folder template (to restrict the configuration freedom) and use access control to prevent creation/deletion (see RBAC documentation for more details.)

Folder features that work well with templates

There are several features in the folder plugin that work really nicely when combined with a folder template.

Restrict child type in a folder

A folder can be configured to restrict the kind of items that it can contain. When you define a folder template, this is a useful way of strengthening the flavor of a folder. For example, you can say that a product folder template can only contain branch folder templates.

Environment variables

A folder can be configured to define a set of environment variables that are automatically made visible to all the builds inside the folder. This is one of the primary ways to use the template attributes and have them affect what goes on inside a folder. In the earlier example of a folder template that represents a branch, you can define the environment variable called BRANCH, and all your jobs can refer to this variable to choose where to check out the code from.
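
For instance, a job inside such a branch folder might contain a shell build step like the following (a sketch; REPO_URL stands for a second, hypothetical folder variable):

# BRANCH and REPO_URL are environment variables defined on the folder
git clone "$REPO_URL" source
cd source
git checkout "$BRANCH"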

Model attribute list view column

The template plugin adds a new list view column extension, which lets you display values of the specified attribute from items in the container.

This list view column is useful when you have a folder (or a view) that contains items created from job templates / folder templates. For example, if you have a folder template that represents a product, which in turn contains folder templates that represent branches, you can show the name of the branches in a list view inside the product.

This can be combined with the “computed value” attribute type to further control what is displayed.

Auxiliary Template

Auxiliary templates are second-class citizens that are used to create inner structures in other models. As such, they do not get transformed to any Jenkins concepts by themselves.

It is easier to understand an auxiliary template if you think about how you’d model a music CD. A “music CD” model might have properties like title and the year of publication, but it also has a list of tracks, where a track by itself is a model that has several attributes, such as the title, the duration, and the composer. In this example, a track is an auxiliary model.

An auxiliary model is also useful for a set of information that appears repeatedly across different models. For example, your software development might involve accessing a database, which requires several attributes to model (say host, schema, user name, and password.) So as you define various templates that use the database, it is convenient to define the database access as an auxiliary model and refer to it from other templates. (The other way to do this is via inheritance, where you define this set of attributes in the base template and derive from it elsewhere. See the extensive discussions of inheritance versus composition in object-oriented programming languages for more analogies and trade-offs.)

Accessing attributes of auxiliary instances

When you define a model that contains instances of auxiliary models, your transformation will want to access these attributes. See Nested Auxiliary Models for more details.

Templates Defined in Folders

Starting with version 4.0, templates may be placed inside folders rather than at top level. When you do this, the template will only be visible in the configuration of jobs (or other templates) in the same folder or a (recursive) subfolder. For example, job templates will be offered from the New Job link only inside the folder defining the template. As another example, a template using a nested auxiliary model attribute will only be able to select an auxiliary model in the current folder (where the main template is defined) or above; and if the base auxiliary model type is abstract (i.e. you can pick subtypes), a job based on the main model will only be able to select concrete subtypes from among those defined in the folder of the job or above.

This feature allows each team to maintain its own set of private templates, while still reusing some global templates if desired. Assuming each team only has Job/Configure permission applied to its own folder (using role-based access control), that team will only be able to make changes to its own templates, not those of other teams (which it might not even be able to see).

No specific permission is required to use a template, beyond being able to see that it exists (in the current folder or above), i.e. Job/Read; nor is any specific permission required to create a template, other than being able to create any item in a folder, i.e. Job/Create. This is because templates, when expanded, are simply shortcuts to configuration that you could have produced yourself anyway, so they should not generally be privileged or secret. However you may certainly deny configuration access to a template to some users while still making it available for instantiation, for example by defining an RBAC group on the template which filters out some roles. Alternately, you could define the template in the root of Jenkins or in a high-level folder, and then only offer job configuration permissions to most users in certain subfolders.

Transformer Reference

The template plugin comes with a number of built-in transformer implementations. As discussed in the “Concepts” section, logically speaking a transformer is a function that takes an instance and generates the standard Jenkins configuration of something (such as a builder or a job), thereby describing the model in terms that the rest of Jenkins understands.

This chapter describes the available transformer implementations.

Jelly-based transformation

Jelly is an XML-based template engine. Firstly, it allows you to insert arbitrary Java-like expressions in the ${…} syntax (see JEXL for the reference of this syntax), and secondly, it allows you to use tags to do basic control-flow operations, such as conditional blocks, for-loops, switch/case, etc. See the Jelly core tag library reference for more details.

Another benefit of Jelly is that it is guaranteed to produce well-formed XML, and anything short of that is detected while you are defining a transformer. This reduces the chance of an invalid transformer configuration.

Through these tags, Jelly can be abused as a programming language, but that is not really what Jelly is designed for (although at times this is handy.) If you start finding yourself using more Jelly tags than literal XML elements, consider using other transformers, such as Groovy template transformation, which offers more concise syntax.
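
A minimal Jelly transformer body might look like the following sketch (assuming a job template with a text-field attribute with the hypothetical ID notes):

<project xmlns:j="jelly:core">
  <j:if test="${notes != null}">
    <description>${notes}</description>
  </j:if>
  ...
</project>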

Variable Bindings

During the transformation, all the attributes of the instance are available by their IDs. In addition, the following special variables are available:

instance

This variable refers to the instance object being transformed. It is of type com.cloudbees.hudson.plugins.modeling.Instance.

model

This variable refers to the model that the instance belongs to. It is of type com.cloudbees.hudson.plugins.modeling.Model.

parent

This variable is available when transforming models that produce Items, such as job templates and folder templates (but not builder templates.) It refers to the ItemGroup that represents the container of the item, such as a folder or the jenkins.model.Jenkins instance.

parentInstance

See Accessing a folder template’s attributes from a child job template for details.

These variables and their methods/properties can be accessed to perform intelligent transformation.

Groovy template transformation

This transformer is based on the Groovy template engine, which is essentially JSP in Groovy syntax plus ${…} style expressions, such as the following:

<foo>
  <%
    for (int i=0; i<100; i++) {
      if ((i%2)==0) {
  %>
  <data>${i}</data>
  <%
      }
    }
  %>
</foo>

The benefit of this transformation is that the Groovy syntax, with its close alignment with the Java syntax, makes it easier for Java programmers to perform complex transformation.

Take care to escape dollar signs with backslashes in XML snippets you copy, so they are not misinterpreted as Groovy expressions; for example, one line copied from a folder XML configuration should read:

<properties class="hudson.model.View\$PropertyList"/>

Normally, for reasons of safety and robustness, XML metacharacters like < in expression values are escaped before being emitted to the output. If you intend some attribute or computed value to be treated as raw XML inlined into the job configuration, wrap it in the xml function:

<outerElement>${xml(thisIsTreatedAsXmlText)}</outerElement>

In many cases one of your template attributes is actually a Jenkins component (or array of them). To embed it in a job config.xml you must first serialize it. The Templates plugin (as of 4.0) offers a few more helper functions to make this easy. Most simply, if you have a single component like a hudson.tasks.Publisher that you want to emit, use the serialize function:

<publishers>${serialize(publisher)}</publishers>

If you have a collection of components (including an array), for example because you let the user configure zero or more hudson.tasks.Builders, use the serializeAll function:

<builders>${serializeAll(builders)}</builders>

Finally, if you have a single component that you want to bind to a field of a more generic type, the XStream library used by Jenkins for serialization has a special syntax. To emit a component bound to that field, you need to also specify the field name. For example, to let the user configure a hudson.scm.SCM instance and bind that to the scm field as <scm class="hudson.scm.SubversionSCM">…</scm>, use the serializeOption function:

${serializeOption(scm, 'scm')}
Variable Bindings

This transformer exposes the exact same set of variables as the Jelly transformer.

Template Security

If everyone who defines templates is a Jenkins administrator—specifically if they have the Overall/RunScripts permission, used for example by the Script Console link—then they can write whatever scripts they like in templates. These scripts may directly refer to internal Jenkins objects using the same API offered to plugins. (Script languages include Groovy, used for several transformers; Jelly, also used for a couple of transformers; and JEXL, used in computed value attributes.) Such users must be completely trusted, as they can do anything to Jenkins (even changing its security settings or running shell commands on the server).

However, if some template creators/maintainers are “regular users” with only more limited permissions, such as Job/Configure, it is inappropriate to let them run arbitrary scripts. To support such a division of roles, the Templates plugin as of version 4.0 supports two related systems: script approval, and Groovy sandboxing. As of 4.6 it includes a binding to the Script Security plugin, which should be consulted for general information on these systems. The following guide merely notes how the Script Security plugin is used by Templates.

Script Approval

The first, and simpler, security system is to allow any kind of script to be run, but only with an administrator’s approval. There is a globally maintained list of approved scripts which are judged to not perform any malicious actions.

When an administrator saves a template configuration, any scripts it contains are automatically added to the approved list. They are ready to run with no further intervention. (“Saving” usually means from the web UI, but could also mean uploading a new configuration via REST or CLI. See Scripting Templates.)

When a non-administrator saves a template configuration, a check is done whether any contained scripts have been edited from an approved text. (More precisely, whether the requested content has ever been approved before.) If it has not been approved, a request for approval of this script is added to a queue. (A warning is also displayed in the configuration screen UI when the current text of a script is not currently approved.)

An administrator may now go to Manage Jenkins » In-process Script Approval where a list of scripts pending approval will be shown. Assuming nothing dangerous-looking is being requested, just click Approve to let the script be run henceforth.

Builder and publisher templates using unapproved scripts (mainly Groovy or Jelly transformers, but also JEXL computed values) present no special issue: unless and until the script is approved, trying to run a build using that build step will fail. You may retry once the script has been approved.

Job and folder templates using unapproved transformer scripts are a little trickier. Existing template instances will continue to use the former script definition; as soon as the script is approved, they will be updated. New jobs or folders cannot be created from the template until its script is approved. (Currently unapproved computed values will just be left as null, but there is no such fallback option in general for job template transformers.)

Groovy Sandboxing

Waiting for an administrator to approve every change to a script, no matter how seemingly trivial, could be unacceptable in a team spread across timezones or during tight deadlines. As an alternative option, the Templates plugin lets Groovy scripts be run without approval so long as they limit themselves to operations considered inherently safe. This limited execution environment is called a sandbox. (Currently no sandbox implementations are available for the Jelly or JEXL languages, so all such scripts must be approved if configured by non-administrators.)

To switch to this mode, simply check the box Use Groovy Sandbox below the Groovy script’s entry field. Sandboxed scripts can be run immediately by anyone. (Even administrators, though the script is subject to the same restrictions regardless of who wrote it.) When the script is run, every method call, object construction, and field access is checked against a whitelist of approved operations. If an unapproved operation is attempted, the script is killed and the template cannot be used.

The Templates plugin ships with a small default whitelist: enough to print the values of template attributes (including those from auxiliary models), and not much else. The Script Security plugin, or other plugins, may offer other whitelisted operations by default.

But you are not limited to the default whitelist: every time a script fails before running an operation that is not yet whitelisted, that operation is automatically added to another approval queue. An administrator can go to the same page described above for approval of entire scripts, and see a list of pending operation approvals. If Approve is clicked next to the signature of an operation, it is immediately added to the whitelist and available for sandboxed scripts.

Most signatures will be of the form method class.Name methodName arg1Type arg2Type…, indicating a Java method call with a specific “receiver” class (this), method name, and list of argument (or parameter) types. (The most general signature of an attempted method call will be offered for approval, even when the actual object it was to be called on was of a more specific type overriding that method.) You may also see staticMethod for static (class) methods, new for constructors, and field or staticField for field accesses (get or set).

As with unapproved scripts, sandboxed scripts using illegal operations pose no special issue for builder and publisher templates; just try rerunning the build. New job or folder instances again cannot be created from the template until it uses only whitelisted operations. Existing job or folder templates instances also will not be updated before all operations run by the script are approved; unlike whole-script approval, in this case the instances are not automatically refreshed just because the whitelist has been expanded. To trigger a refresh, resave the template (or of course you may edit its script to not invoke unapproved operations).

Administrators in security-sensitive environments should carefully consider which operations to whitelist. Operations which change state of persisted objects (such as Jenkins jobs) should generally be denied (and these have no legitimate use in templates anyway). Most getSomething methods are harmless.

Be aware, however, that even some “getter” methods are designed to check specific permissions, whereas scripts are often run by a system pseudo-user to whom all permissions are granted. So, for example, method hudson.model.AbstractItem getParent (which obtains the folder containing a job) is in and of itself harmless, but the possible follow-up call method hudson.model.ItemGroup getItems (which lists jobs by name within a folder) checks Job/Read. This second call would be dangerous to whitelist unconditionally, because it would mean that a user who is granted Job/Create in a folder would be able to read at least some information from any jobs in that folder, even those which are supposed to be hidden according to, for example, RBAC role filtering; it would suffice to create a job template using a Groovy transformer like this:

<project><description>I sniffed ${parent.getItems()}!</description></project>

When instantiated as a throwaway job, its description would display at least the names of supposedly secret projects. You may instead click Approve assuming permission check for getItems; this will permit the call when run as an actual user (for example when someone saves the templatized job), while forbidding it when run as the system user (for example when someone saves the template and its instances are updated in the background). In this case, getItems will actually return only those jobs which the current user has access to, so if run in the former case (by saving the templatized job), the description will show just those jobs they could see anyway. This more advanced button is shown only for method calls (and constructors), and should be used only where you know that Jenkins is doing a permission check.

Advanced Template Techniques

In this section, we discuss various techniques to create and maintain sophisticated templates.

Navigating around objects in context

Sophisticated templates require access to information available in the rest of the Jenkins object model. For example, let’s say your “test” job template has an attribute that refers to the corresponding “build” job, and you want the test job to change its behavior based on the configuration of the build job.

This kind of template is best written with the Groovy-based transformer, since it provides a concise syntax for navigating the object model and accessing information. To work with the Jenkins object model you need some basic familiarity with that API; the Jenkins Javadoc is the starting point.

Instance to Item, Item to Instance

The Instance type refers to a Map-like object that retains the values of attributes for a model, and during the transformation, the current instance is always accessible as the instance variable.

When those instances come from templates that map to jobs in Jenkins (such as job templates and folder templates), they are of the com.cloudbees.hudson.plugins.modeling.impl.entity.EntityInstance class, which defines the item property that lets you access the corresponding hudson.model.Item instance (such as a FreeStyleProject object or a Folder object.) So, for example, instance.item.isDisabled() would check whether the job is disabled.

Conversely, those items that are created from their templates have the instance property that allows you to navigate back to the instance. In other words, instance.item.instance==instance. For items that are not created from any templates, this property returns null. Also note that this property is defined in Groovy, and not in Java, and therefore it is only accessible when using Groovy transformers.
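
For example, a Groovy transformer for the “test” job template mentioned earlier might navigate from a Select Item attribute to the referenced job’s template instance like this (a sketch; buildJob is a hypothetical Select Item attribute, and flavor a hypothetical attribute of the build job’s template):

<%
  // jobs not created from any template have instance == null
  def buildInstance = buildJob?.instance
  // fall back to a default when the build job is not templatized
  def flavor = buildInstance?.flavor ?: 'default'
%>
<description>Tests the ${flavor} build of ${buildJob?.fullName}</description>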

Accessing containing folder

One useful technique for eliminating redundancy in job templates is to access information from the folder that contains the jobs. For example, you might define a folder property that specifies the name of the branch, then templatize the jobs inside this folder so that they use this information to determine where to check out.

To facilitate such a use case, the parent property is accessible during the transformation of templates that map to jobs in Jenkins (such as job templates and folder templates.) This property refers to an instance of ItemGroup, which is typically a com.cloudbees.hudson.plugins.folder.Folder instance (representing the folder that contains the job) or Jenkins itself (if the job is at top level, not contained in any folder.)

Accessing a folder template’s attributes from a child job template

While the parent property can be used to access the parent Folder itself, if you have a Job template as a child of a Folder template it can be very useful to access the attributes of the Folder template directly from the Job template. For this use case, the parentInstance variable is analogous to the instance variable, except that it allows accessing the attributes of the parent Folder template.

Unlike the other techniques mentioned above, using parentInstance requires no knowledge of the Jenkins object model, since you have direct access to the template attributes, which are exposed as simple objects.

There are a number of issues to be aware of when using the parentInstance variable:

  • parentInstance will be null when the Job template is instantiated outside of a Folder template

  • While parentInstance will be non-null when the Job template is instantiated as one of the jobs in the "Initial Activities", only the parentInstance.name attribute will have a non-null value until the user saves the Folder template configuration

Therefore, when using the parentInstance variable it is necessary to use it defensively, as any NullPointerException thrown when attempting to instantiate the Job template can also cause the Folder template to fail to instantiate.

When using the parentInstance variable, it is recommended to use the Groovy transformer for the Job template, as it provides both the "Elvis" operator (?:) and the safe navigation operator (?.), which combine to allow safer template instantiation. For example, ${parentInstance?.someAttribute?:"someDefault"} is a safe Groovy expression that returns the value of the someAttribute attribute of the parent template instance, falling back to someDefault if the job is instantiated outside of a Folder template or before the user has defined the value of someAttribute.

If you will be repeating multiple references to the same attribute within the same Job template, it makes sense to create a Computed attribute in the Job template and use a JEXL expression to evaluate the value, as that makes it easier to refactor the expression, e.g. if you need to change the default value. JEXL also supports the "Elvis" operator and performs safe navigation by default, e.g. ${parentInstance.someAttribute?:"someDefault"} is a safe JEXL expression that returns the value of the someAttribute attribute of the parent template instance, falling back to someDefault if the job is instantiated outside of a Folder template or before the user has defined the value of someAttribute.

Note

The recommendation is to test that your Job template can be instantiated with sensible defaults outside of a Folder template before attempting to add the Job template creation as an initial action for a folder template, as this provides a fast way to verify that your template is coded defensively.

There are a couple of additional points you may want to consider:

  • If you have triggers in the jobs, you may want to structure the template such that the triggers are disabled while the attributes from the parent instance are invalid so as to prevent the job from being triggered prior to the parent folder template being configured correctly

  • The "Elvis" operator only supplies the default value for null values. It does not supply the default for empty strings. If you need the default to apply for both null and an empty string, you will need to write your expression accordingly

  • It is easy for users to accidentally add leading/trailing whitespace to values that they input in the template attributes. If such whitespace could affect your template you will need to structure the template accordingly.

Scripting Templates

When working with large installations using many templates or many templatized jobs, it is often useful to be able to perform bulk operations using scripts. (Many of these operations are available only in the 4.0 and later versions of the plugin.)

REST API operations

The most important operation involving templates is creating and updating instances. For builder and publisher templates (and any auxiliary templates they use), there is no special support in Jenkins: the usual REST methods to create and update job configuration (/config.xml) work. Refer to an existing job using such templates for an example of the syntax.

Job and folder templates, however, are special and have a dedicated REST API. A single endpoint (URL) is used for creating and updating job and folder templates: /instantiate, which is added to the URL of the template (never the job). You may POST to this URL, giving it a query parameter job with the full path to the intended or actual templatized job. The message body must be an XML document with root element values; the child element names and text contents define attributes to set on the template instance. This single API can be used to create a job (or folder) from a template; update the attributes of an existing templatized job (using this template); or even to convert a job using a different template, or no template at all. Refer to the REST API link on the template’s index page for details and examples.
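
For example, you might POST to <template URL>/instantiate?job=teams/new-job (the job path is hypothetical) with a body like the following (the attribute IDs are also hypothetical):

<values>
  <repositoryUrl>https://git.example.com/core.git</repositoryUrl>
  <branch>master</branch>
</values>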

(You may also use most regular REST APIs to work with templatized jobs. /createItem can be used to create a templatized job; you must specify the <com.cloudbees.hudson.plugins.modeling.impl.jobTemplate.JobPropertyImpl> section giving the template and attributes, and the main body of the job such as <project>, but the transformer will be run to fill in the details. GET /config.xml works fine, though POST to this same URL does not currently rerun the transformer automatically. /api/xml and the like can be used to obtain information about the associated template and attributes as well.)

Since templates are stored as regular items in the Jenkins folder tree as of 4.0, regular REST APIs for CRUD (create/read/update/delete) of templates themselves work as with anything else. Also, POST to /config.xml on a job/folder template will automatically rerun the transformer on any instances, just as if you had saved it from the UI. You can also use /api/xml and similar to obtain information about a template, such as its attributes or supertype.

Access control is similar to interactive use: you must have Job/Read permission on the template for most operations, Job/Create on a folder to create a templatized job in it, and Job/Configure to change the configuration of a templatized job.

CLI commands

The Jenkins CLI is often more convenient to use from scripts than the REST API. There is a command instantiate which has the same capabilities as the REST /instantiate. First give the command the path and name of the (desired or existing) templatized job. You can then add -t specifying the path and name of the template (only necessary when creating a new job, or converting to this template). Then give -a with a name=value pair for each attribute you wish to set.
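
A hypothetical invocation creating a new job from a template might look like this (the server URL, paths, and attribute names are examples only):

$ java -jar jenkins-cli.jar -s https://jenkins.example.com/ \
    instantiate teams/new-job \
    -t templates/my-job-template \
    -a branch=master -a repositoryUrl=https://git.example.com/core.git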

As with the REST API, there is no special syntax for CRUD on templates themselves. The same commands used for regular jobs work on templates (at least in Jenkins 1.538 and later). Again as a convenience, update-job applied to a job/folder template will reconfigure any instances.

Groovy scripting

Sometimes using the Groovy script console (/script), the Scriptler plugin, etc. is the easiest way to work with templates. Most operations with templates or templatized jobs will not require any special APIs. There are two things you might want to call directly in the Templates plugin when working with job or folder templates:

  • If you have a reference to job or folder template (com.cloudbees.hudson.plugins.modeling.impl.entity.EntityModel), you can call reconfigureInstances() to regenerate any instances after updating its definition (or otherwise changing what the transformers would produce).

  • If you have a reference to a templatized job, you can call EntityInstance.from(Item) to obtain an instance object. This is mainly used to call setValue(String, Object) to change attributes, and save() to run the transformer and update the job definition.

  • You can create a new templatized job or folder as well. If g is a ModifiableTopLevelItemGroup (such as Jenkins root, or a Folder), you can call

    g.createProject(model.asDescriptor(), "new-job-name", true)

    to create a new item. If you need any additional attributes (beyond name), you can use the steps given above starting with EntityInstance.from to define and save them.
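
Putting these pieces together, a script console session updating an existing templatized job might look like this sketch (the job path and attribute ID are hypothetical):

import com.cloudbees.hudson.plugins.modeling.impl.entity.EntityInstance
import jenkins.model.Jenkins

def job = Jenkins.instance.getItemByFullName('teams/my-job')
def inst = EntityInstance.from(job)     // obtain the instance object
inst.setValue('branch', 'release-1.2')  // hypothetical attribute ID
inst.save()                             // rerun the transformer and update the job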

Miscellaneous features

Some other features in CloudBees Jenkins Enterprise are designed to enhance your use of templates.

View Creation Filter plugin

This plugin allows you to specify that a view may only contain certain item types. To use it, when creating a new view, pick Restrict what can be created in this view from the Add Job Filter pulldown. Then you can select a list of item types; only items of those types will be permitted to be added to the view via the New Item menu.

While these types might be built-in types such as Freestyle project, you will also see your own job templates listed as choices.

You may also want to use the Template Type Filter to add all instances of a given template by default. Currently you cannot prevent other kinds of items from being added via the Jobs checkbox list (tracked as CJP-1501).

Groovy View plugin

This plugin provides a Groovy Script View you can select when creating a new view in a Jenkins folder or at top level. It allows you to generate any content you like in the view (subject to the limitations of the system’s configured markup language).

To create interesting content, you can refer to the folder as ${it} and calculate some contents. But when used with a templatized folder, you can also use ${instance} to refer to the template instance, from which you can get the values of template attributes. This is useful since your folder template can predefine a Groovy script view which generates a dynamic view according to both the current contents of the folder and its templatized configuration.

Validated Merge Plugin

Introduction

In a large software development project, where many developers work on the same code, keeping the repository stable is a challenge. Because of the sheer number of commits that land on the repository, even if the chance of an accidental regression per commit is small, the risks compound and make the repository unstable, blocking other developers.

A traditional Jenkins setup, where Jenkins tests the tip of the repository, doesn’t help with this situation, as regressions can only be discovered after they land on the tip of the repository.

Therefore, the usual "solution" to the problem is to have every developer rigorously run tests before changes are pushed to the repository, wasting precious human time on test executions that ideally should be running on the server.

The validated merge feature for Git in CloudBees Jenkins Enterprise is a plugin that solves this problem by allowing developers to push commits (called "proposed changes") to Jenkins, specifying which branch they are intended for (called the "upstream ref", for example "master".) Jenkins will then build the proposed changes, verify that they are of good quality, and push them to the upstream repository.

In this way, developers can engage in fire-and-forget style development. Once a developer is reasonably happy with the changes, he can push them to Jenkins and move on to the next task, without waiting for a lengthy test cycle to complete. If the proposed changes are good, they will automatically land in the main repository. If not, he will get a notification.

In this way, the validated merge feature allows you to offload more work from developers and their laptops to Jenkins and its agents.

Tutorial

Server setup

This feature is contained in the "git-validated-merge" plugin. It is enabled by default for CloudBees Jenkins Enterprise; if you don’t find it, install this plugin. Go to your CI build/test job configuration page and check "Enable Git validated merge support". Once this option is selected, you’ll see the "Git Repository for Validated Merge" item appear in the menu list of the job page:

repo page
Figure 71. The "Git Repository for Validated Merge" screen

Depending on how you configure your security setting, you might see a warning in this page asking you to fix the port number of the Jenkins SSHD service. If you see it, go to the system configuration page and set the SSHD port under the "SSH Server" section, as shown below.

sshd port
Figure 72. Configuring the SSHD port

Then, make sure that Jenkins can push to your upstream Git repository. See Pushing to the upstream repository for more discussion.

Your server is now ready for the validated merge.

Client setup

Now, on a developer laptop where you’d like to use the validated merge feature, register this Jenkins job as a remote repository. The exact URL of the Git repository in the Jenkins job can be obtained from the "Git Repository for Validated Merge" page linked from the job top page. You might see an HTTP URL instead of an SSH URL if you haven’t enabled the security setting.

$ git remote add jenkins ssh://myserver:2222/foo

Instead of registering this as a separate repository, you can also set Jenkins as the push URL of your upstream repository. In this way, your pushes will always go through Jenkins automatically, without your typing anything different on the command line. In comparison, the previous approach requires you to consciously select Jenkins as the target of each push.

$ git remote set-url --push origin ssh://myserver:2222/foo

This needs to be repeated for every workspace, but now your client is also ready for the validated merge.

Mental picture of the validated merge

Before we discuss how you actually perform a validated merge, let’s briefly build a mental picture of how the whole thing works.

With the validated merge, your Jenkins job acts as an intermediate repository between your workspace and your central/upstream repository. Instead of pushing changes directly to the upstream repository, you’ll push changes to the Jenkins job. We’ll refer to this intermediate repository as the gate repository.

The gate repository implemented by Jenkins is a little bit magical. When you push a ref to the gate repository (for example via git push jenkins master), it’ll lie to you that the push was successful and the ref was moved, to make your client happy, but it actually hasn’t done so. Instead, it’ll just remember the ref that you wanted to push to (master in this case), and the actual commit you pushed.

tres repos
Figure 73. Repository model

Jenkins then schedules a build for your newly pushed commit. In this build, it’ll check out the ref from the upstream you wanted to push the changes into (origin/master in this case), then merge your changes into it. The build will then run as usual, and if the build was successful, this result of the merge that was just built and tested will be pushed to the upstream repository, becoming the new head of the branch. This process happens completely automatically.

Unlike Gerrit+Jenkins integration, in which each commit is separately built and tested, the validated merge in Jenkins only checks the tip of the ref. So you can split your change into several logically separated pieces, without paying the penalty of more build time.

The gate repository hosts additional tags that can be useful. For example, every submitted commit is accessible via changes/N where N is a build number, and every commit that was actually built is accessible via build/N (build/N is the result of merging the upstream with changes/N.) This allows one developer to look into a failure found in a commit pushed by another developer and help him fix the problem.
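
For example, to inspect the commit submitted for build 47, another developer might run something like the following (a sketch, using the remote name registered earlier):

$ git fetch jenkins changes/47
$ git checkout FETCH_HEAD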

Sending changes and having them validated

Let’s get back to the developer laptop and make a change to the code base. When you are done, make a commit. To see the effect of validation, we’ll have this change break the build.

# just to be clear that you are working on the master branch
$ git checkout master

# touch some file and commit
$ touch some.c
$ echo '#!/bin/sh\nfalse' > run.sh
$ git add some.c run.sh
$ git commit -am "adding some file"
[master 363f629] adding some file
 ...

Now the developer is happy with the changes. He thinks he’s done; he hasn’t run a full build/test cycle, but he thinks he made all the necessary changes for fixing a bug.

So let’s send this change to Jenkins to make sure it actually passes all the tests.

$ git push jenkins master
 ...
remote: Created refs/heads/validated-merge-for/master/20120415-095424
remote:  at 363f629 for refs/heads/master
To ssh://myserver:2222/foo/repo.git
 * [new branch]      master -> master

Switch to the Jenkins web UI, and you’ll see a build scheduled for this new push. If you look at its console output, you’ll see that it’s checking out the upstream repo and then merging the change that was just submitted, and the result (363f629) is getting built.

Using strategy: Build commits submitted for validated merge
Last Built Revision: Revision 4ced883 (master)
Fetching changes from 1 remote Git repository
Fetching upstream changes from /my/upstream.git
  ...
Merging refs/tags/changes/47
Commencing build of Revision 363f629 (master)
[workspace] $ /bin/sh -xe /tmp/hudson9087427762386246304.sh
+ ./run.sh
Build step 'Execute shell' marked build as failure
Finished: FAILURE

The build has failed, and so the push to the upstream repository didn’t happen. Let’s verify this from the perspective of Alice, another developer working on this code base. Pretend that you are Alice, and check out the workspace into another location. You’ll see that Alice’s master branch doesn’t contain the problematic 363f629.
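
For example, Alice might create her own clone of the upstream repository (the path follows the transcripts above; the directory name is arbitrary):

$ git clone /my/upstream.git alice-workspace
$ cd alice-workspace
$ git log --oneline   # 363f629 is absent from the history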

To make this tutorial more interesting, let’s make some changes as Alice on the master branch. In a big repository, it is normal for changes to overlap like this.

$ echo hello >> alice
$ git add alice
$ git commit -am "edit by alice"
[master 834782e] edit by alice
$ git push origin master
...
To /my/upstream.git
   458ea81..834782e  master -> master

At this point, the master branch in the upstream is 834782e as pushed by Alice, while our initial change 363f629 was rejected by Jenkins. We’ll see how this will resolve itself in the end.

Now, switch back to the original workspace, and let’s pretend that we found out that our initial commit was rejected because it didn’t pass the build. Now, make a correction to the commit, and push that to Jenkins again.

$ echo '#!/bin/sh\ntrue' > run.sh
$ git commit -am "fixed a broken build"
[master f664265] fixed a broken build
$ git push jenkins master
remote: Created refs/heads/validated-merge-for/master/20120415-105638
remote:   at f664265 for refs/heads/master
To ssh://myserver:2222/foo/repo.git
 * [new branch]      master -> master

When you make these corrections, you have the choice of amending the commit or doing a separate commit, and Jenkins can handle both correctly. In a silly regression like this, amending a commit is probably better, but for more intricate problems, leaving the record of a failure and its resolution could be useful.

Also note that we did not pull the changes Alice just made.

Now switch to the Jenkins UI once again, and see the console output of a newly scheduled build.

Using strategy: Build commits submitted for validated merge
Fetching changes from 1 remote Git repository
Fetching upstream changes from /my/upstream.git
  ...
Merging refs/tags/changes/48
Commencing build of Revision f884547 (master)
Checking out Revision f884547 (master)
[workspace] $ /bin/sh -xe /tmp/hudson1267891670488600118.sh
+ ./run.sh
Pushing the result to origin
Finished: SUCCESS

As you can see, at the very end Jenkins pushed the changes up to the upstream repository because the build was successful.

Go back to the workspace and pull from the origin to verify that the changes did make it into the upstream:

$ git pull origin
From /my/upstream
   458ea81..f884547  master     -> origin/master
Updating f664265..f884547
Fast-forward
 ...

This updates your local workspace by pulling in all the changes made in the trunk.

To see more clearly what has happened, run git log to view the commit graph. We first committed 363f629, which didn’t work. Then Alice committed 834782e, which was pushed directly to the master. We then committed f664265 as a follow-up fix, which was merged with 834782e (then the tip of the master branch) to produce f884547 (the console output for build #48 above shows that this is the commit it actually tested), which then became the tip of the master branch.

$ git log --decorate --graph --date-order
*   commit f884547 (HEAD, origin/master, master)
|\  Merge: 834782e f664265
| | Author: Jenkins
| |
| |     Merge commit 'refs/tags/changes/48'
| |
| * commit f664265 (jenkins/master)
| | Author: Kohsuke Kawaguchi
| |
| |     fixed a broken build
| |
* | commit 834782e
| | Author: Alice
| |
| |     edit by alice
| |
| * commit 363f629
|/  Author: Kohsuke Kawaguchi
|
|       adding some file
|
*   commit 458ea81

We have now completed a successful validated merge.

Access Control

Permissions

This feature defines two permissions that can be used to control access to the repository.

Push

This permission controls whether the user can submit a change for a validated merge. This permission is implied by the build permission of the job, meaning those who can trigger a build can submit changes by default.

Pull

This permission controls whether the user can retrieve commits from the gate repository. This permission allows users to retrieve other people’s submissions. This permission is implied by the push permission, meaning those who can push changes into the gate repository will also automatically be able to retrieve changes.

Accessing the gate repository

This feature provides two transports to access the gate repository. One is the smart HTTP protocol, and the other is the SSH protocol.

If your job has the URL http://myjenkins/job/foo/job/bar/, then the Git repository via HTTP is available at http://myjenkins/job/foo/job/bar/repo.git. If your Jenkins is configured without security, this URL can be used as is. Otherwise log in to Jenkins and visit this page in a web browser to get a personal access URL. That URL allows Git to access the repository under your credentials without sending a password; keep this URL secret.

The SSH access to this same repository is available at ssh://myjenkins:NNNN/foo/bar.git where NNNN is the port number of the Jenkins SSHD service. If your Jenkins is configured with security, you’ll need to register your public key with Jenkins to authenticate the SSH access.
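
For example, with the SSHD service on port 2222 (an illustrative value) and your public key registered with Jenkins, you could clone the gate repository like this:

$ git clone ssh://myjenkins:2222/foo/bar.git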

The HTTP protocol access is available all the time, and the SSH access is available if and only if the Jenkins SSHD service is available. The "Git repository for validated merge" UI will by default offer the SSH URL when SSHD is running, falling back to the HTTP URL; but an administrator can override this preference (as well as the SSHD service) in the Jenkins global configuration page.

Pushing to the upstream repository

In many cases the real upstream repository will also have access control and not permit anonymous pushes. To permit Jenkins to push your validated commits (along with associated merge commits) to the upstream repository, click the Advanced button in the job’s configuration page and select Credentials for pushing upstream. This could be an HTTP(S) username and password combination, or an SSH private key. You can also pick <automatic by pushing user>, in which case the set of credentials associated with the user pushing to the gate repository will be searched for those matching the upstream URL.

Reusing a job between CI and validated merge

When you enable validated merge support for a job, the same job can still be used for a regular CI build, where it builds the tips of the upstream repository. Builds triggered by anything other than a push to the gate repository (such as someone clicking "Build Now", scheduled execution, a poll-triggered build, or a trigger from another build) will result in the regular build behaviour.

Dealing with post-build push failure

When a validated merge build succeeds, Jenkins will push the merge commit to the upstream repository, but if someone else has pushed new commits to the same branch while Jenkins was validating the submitted changes, this push will fail. The Git jargon for this is that the push is not a fast-forward. Causing a build to fail because of this is not necessarily desirable, because there actually wasn’t any problem in the submitted changes. In the "advanced" section of the "Enable Git validated merge support" feature in the job configuration screen, you can tell Jenkins how to deal with this situation.

Merge and push

Given that the submitted changes were OK, Jenkins will perform another merge between the commit that was just validated and the commit that’s currently in the upstream repository, then push the new merge commit to the upstream. If the merge fails, the build will be marked as a failure. This basically assumes that there are no undesirable interactions between the changes that were submitted to Jenkins and the changes that were pushed to the upstream repository. This requires fewer computing cycles, but it carries a potential risk of broken builds at the branch tip. This is generally a desirable option if your validated merge takes a significant amount of time.

Redo a validated merge

Start a new validated merge all over again, with the newly discovered commit in the upstream repository. The current build gets marked as aborted, then a new one will be started with the same submitted changes. This requires additional computing cycles, but it will guarantee that no untested commits ever land on the upstream repository. This is generally a desirable option if your validated merge completes quickly.

Fail the build if push fails

Cause a build to fail. This is essentially asking the submitter of the changes to deal with the situation. He can then choose to resubmit the changes, or push straight into the upstream. This is a desirable option if none of the other options suit your needs.

This behaviour is extensible. Other plugins can implement custom behaviours.

Refs in the gate repository

Tags

The gate repository holds tags that follow certain naming conventions:

changes/NNN

This tag points to the commit that was submitted by the user for the validated merge performed in build #NNN.

builds/NNN

This tag indicates the commit that was actually built and tested in the build #NNN. This is the result of a merge between the then-tip of the upstream ref, and changes/NNN. (Therefore if this merge was a fast-forward, this tag points to the same commit as changes/NNN.)

validated-merge-for/UPSTREAM/YYYYMMDD-HHMMSS[.X]

The changes/NNN tags are more easily memorable but they are only available once the build has started. Therefore, when a proposed change is pushed to the gate repository, Jenkins assigns a tag that follows this longer format. The UPSTREAM portion refers to the upstream ref that this change is meant for (such as 'master', but this can include '/' in it, such as 'feature/foo'), and YYYYMMDD-HHMMSS is the timestamp of the submission. If multiple changes are pushed within the same second, the additional .X (where X is a number) is appended to create unique tags.

These tags are useful for developers to collaborate on changes that are being validated or rejected. For example, if a developer submitted a change and it was rejected, another developer can fetch the change, make an additional commit, then submit it to integrate the result into the upstream.
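
For instance, a colleague could pick up a rejected submission, fix it, and resubmit it like this (the remote name jenkins and build number 47 are illustrative):

$ git fetch -n jenkins tag changes/47
$ git checkout -b fix-submission changes/47
# ...edit files to fix the problem...
$ git commit -am "follow-up fix"
$ git push jenkins fix-submission:master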

Aside from these tags, the gate repository also contains any tags that are set in the upstream repository, as well as tags that are created in the Jenkins workspace. For example, the Git plugin adds its own tag for every build.

Since the gate repository tends to host a large number of tags, you normally don’t want to fetch every single tag in it. To fetch a specific tag and that alone, run Git like git fetch -n jenkins tag changes/123, where jenkins is the remote name for the gate repository, '-n' prevents Git from fetching tags automatically, and "tag changes/123" tells Git to retrieve that specific tag.

Branches

The gate repository also contains branches that map to the permalinks in the job in the form permalink/ID where ID refers to the identifier of the permalink, such as lastSuccessfulBuild or lastFailedBuild. Other plugins often define additional permalinks, for example the promoted builds plugin.

These branches can be useful beyond the validated merge use case. For example, when a build is promoted, you can start another job which checks out the branch that corresponds to the promotion and then deploys to the staging server.
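
A sketch of fetching and checking out such a branch (the remote name jenkins is an assumption):

$ git fetch -n jenkins permalink/lastSuccessfulBuild
$ git checkout FETCH_HEAD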

VMWare Pool Auto-Scaling Plugin

Introduction

The VMware plugin connects to one or more VMware vSphere installations and uses virtual machines on those installations for better resource utilization when building jobs. Virtual machines on a vSphere installation are grouped into named pools. A virtual machine may be acquired from a pool when a job needs to be built on that machine, or when that machine is assigned to a build of a job for additional build resources. When such a build has completed, the acquired machine(s) are released back to the pool for use by the same or other jobs. When a machine is acquired it may be powered on, and when a machine is released it may be powered off.

This plugin should work with VMware vSphere 4.0 and later (normally tested on 5.1).

The plugin was introduced in Nectar 10.10.

Configuration

Before jobs can utilize virtual machines it is necessary to configure one or more machine centers, which connect to vSphere installations, and one or more machine pools within those machine centers, from which builds may acquire machines.

From the main page click on "Pooled Virtual Machines", then click on "Configure" to go to the plugin configuration page.

After configuration, the "Pooled Virtual Machines" page will display the current list of configured machine centers. From this page, click on a machine center to see the list of machine pools, and from that page click on a machine pool to see the list of machines.

Machine Centers

To configure a machine center, enter a name for the center (this name will be referred to later when configuring clouds or jobs), the host name where the vCenter service that manages the vSphere installation is located, and the user name/password of a user who can authenticate to the vCenter and is authorized to perform the appropriate actions on machines in pools.

One or more machine centers may be configured pointing to the same or different vCenters. For example, different pools may require different users that have authorized capabilities on the same vCenter, or there may be multiple vCenters that can be used. Details can be verified by clicking the "Test Connection" button.

Machine Pools

One or more machine pools may be added to a machine center. There are currently two types of machine pool that can be added: a static pool and a folder pool. In either case, power-cycling and power-on wait conditions may be configured for all machines in such pools.

It is guaranteed that a virtual machine will be acquired at most once, even if that machine is a member of two or more pools of two or more centers.

Static Pools

To configure a static machine pool, enter a name for the pool, which will be referred to later when configuring clouds or jobs. Then, add one or more static machines to the pool. The name of a static machine is the name of the virtual machine as presented in the vCenter.

Note that there can be two or more virtual machines present in the vCenter with the same name, for example if those machines are located in separate folders or vApps. In such cases it is undetermined which machine will be associated with the configuration.

If the machine is assigned to a build then a set of optional properties, "Injected Env Vars", can be declared that will be injected into the build as environment variables.

Folder Pools

To configure a folder machine pool, enter a name for the pool, which will be referred to later when configuring clouds or jobs. Then, declare the path to a folder or vApp; the machines contained in that folder or vApp comprise the pool of machines. If the "recurse" option is selected, then machines in sub-folders or sub-vApps will also be included in the pool.

If the machine is assigned to a build, then the IP address of the virtual machine will be declared in the environment variable "VMIP" that is injected into the build.

Virtual machines may be added and removed from the folder or vApp without requiring changes to the configuration. Such pools are more dynamic than static pools.

Power Cycling

Power-on actions can be specified after a machine has been acquired from a pool, but before the machine has been assigned to a build. Power-off actions can be specified after a build has finished with the machine, but before that machine has been released back to the pool.

The set of power-on actions are as follows:

Power up

Powers up the machine. If the machine is already powered up this action does nothing.

Revert to last snapshot and power up

Revert the machine to the last known snapshot, and then power up. This can be useful to power up the machine in a known state.

Do nothing

No actions are performed on the machine and it is assumed the machine is already powered on.

The set of power-off actions are as follows:

Power off

Powers off the machine. This can be useful to save resources. Note that it can take some time for a machine to be powered on and fully booted, hence builds may take longer if the power-cycling represents a significant portion of the overall build time.

Suspend

Suspend the machine. This can be useful to save resources while being able to power on the machine more quickly.

Take snapshot after power off

Power off the machine, and then take a snapshot.

Take snapshot after suspend

Suspend the machine, and then take a snapshot.

Do nothing

No actions are performed on the machine, and it will be left powered-on.

Power-on Wait Conditions

After a machine is powered on, if power-on actions are configured, and before the machine is assigned to a build, certain wait conditions may be configured that ensure the machine is in an appropriate state.

The set of power-on wait conditions are:

Wait for a TCP port to start listening

A timeout, with a default of 5 minutes, can be specified. If waiting for the TCP port to become active takes longer than the timeout, the machine will be released back to the pool and an error will result.

Wait for VMware Tools to come up

Optionally, also wait for the machine to obtain an IP address. A timeout, with a default of 5 minutes, can be specified. If waiting for VMware Tools takes longer than the timeout, the machine will be released back to the pool and an error will result.

Building Jobs on Virtual Machines

To declare a machine pool as an agent pool, where machines in the pool are agents used to build jobs, it is necessary to configure a VMware cloud. From the main page of Jenkins click on "Manage Jenkins", click on "Configure", go to the "Cloud" section, click on "Add a new cloud", and select "Pooled VMware Virtual Machines". Select the machine center and a pool from that machine center that shall be used as the pool of agents. Then, configure the other parameters as appropriate. For example, if all machines are Unix machines configured with SSH with the same SSH credentials, then select "Launch slave agents on Unix via SSH".

When a build whose label matches the configured VMware cloud is placed on the build queue, a machine will be acquired from the pool, powered on (if configured), and added as a node assigned to build the job once the power-on wait conditions have been met.

If you have selected the checkbox Only one build per VM in the cloud configuration, then after the build completes the machine will be powered-off (if configured) and released back to the pool. (In this case it only makes sense to configure one executor.) Otherwise, the agent may accept multiple concurrent builds, according to the executor count, and will remain online for a while after the initial build in case other builds waiting in the queue can use it; Jenkins will release the agent to the pool only after it has been idle for a while and does not seem to be needed.

Note that if there are no machines free in the machine pool then the build will wait in the queue until a machine becomes available.

Reserving a Virtual Machine for a build

To reserve a machine for a build go to the configuration page of an appropriate job, go to the "Build Environment" section, select "Reserve a VMWare machine for this build", and select the machine center and a pool from that machine center that shall be used as the pool of reservable machines.

When a build of such a configured job is placed on the build queue, the build will start, a machine will be acquired from the pool and powered on (if configured), and the build will wait until the power-on wait conditions have been met. After the build completes, the machine will be powered off (if configured) and released back to the pool.

The build itself is not run on this machine; it is merely made available. To actually use the VM from your build, you would need to somehow connect to it, typically using the VMIP environment variable.
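
For instance, a shell build step could drive the reserved machine over SSH (the user name and script name are hypothetical):

# VMIP is injected by the plugin when the machine is reserved
ssh builder@$VMIP './run-tests.sh'
scp builder@$VMIP:results.xml .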

Note that if there are no machines free in the machine pool then the build will wait until a machine becomes available.

You may also select multiple machines. In this case the build will proceed once all of them are available. (The build may tentatively acquire some, release them, and later reacquire them, to avoid deadlocks.)

Taking virtual machines offline

From time to time you may need to perform maintenance work on particular VMs. During this time you would prefer for these VMs not to be used by Jenkins.

Click on Pooled Virtual Machines, click on a (static or folder) pool, and click on a machine in that pool. Just press the Take Offline button to prevent Jenkins from reserving it. (You can enter a description of what you are doing for the benefit of other users.) Later when the machine is ready, go back to the same page and press Take Online.

Offline status is remembered across Jenkins restarts.

Plugin Usage Plugin

Introduction

The Plugin Usage plugin helps you keep track of which plugins you are actively using, and where. This can be valuable in large Jenkins installations with dozens of plugins installed, some of which might have been installed for projects which are long gone.

The Plugin Usage plugin was introduced in CloudBees Jenkins Enterprise 12.11.

Using the Plugin Usage plugin

Go to Manage Jenkins > Plugin Usage. You will see a table of installed plugins (with strikethrough indicating disabled plugins), a numeric usage count, and a list of usages. A progress bar runs while Jenkins scans the configuration of all jobs and some global settings. Once the scan is complete, you can click on the Usage Count column to sort by usage count.

usage count
Figure 74. Usage Count

The third column shows where each plugin appears to be used, hyperlinked to the relevant configuration screen. Jenkins refers to global Jenkins configuration. Most other labels are names of configurable items such as jobs (using » to show folder structure where relevant); configuration associated with Jenkins users (such as credentials) will also be shown as such. Code packaged as a plugin but really used only as a library shared between several “real” plugins (Async Http Client Plugin, for example) will be shown as used by those plugins. »… is appended to items which have nested elements also using the plugin; for example, a native Maven project may have several modules all of which refer to the plugin somehow.

Note

When security is enabled, only a Jenkins administrator can perform a complete analysis. In general, usages will only be shown for items whose configuration you can view. Usually this means items you can configure, though the Jenkins Extended Read Permission plugin might allow you to view but not modify some configuration.

Limitations

Beware that some plugins will not be listed in this table, because they are not directly mentioned in any stored configuration. They may affect how Jenkins runs in various ways without configuration; the Jenkins Translation Assistance Plugin is an example.

Conversely, a plugin might have some stored configuration which is of no interest or never used at all. For example, Jenkins Email Extension Plugin displays various controls in the global Configure System screen; when you click Save, Email Extension configuration is saved in hudson.plugins.emailext.ExtendedEmailPublisher.xml in your $JENKINS_HOME, even if you have made no customizations. This plugin will thus appear to be “in use” by Jenkins. Only if you have enabled the Editable Email Notification post-build action for a job will it have a usage count greater than 1, however.

In short, you can use the plugin usage table as a starting point for investigating plugins that might be unused, but it is not relevant for all plugins. The table works best for checking the usage of plugins which offer concrete additions to configuration, such as build steps or publishers in jobs.

Plugins used in configuration generated by a builder (or publisher) template will not be listed (in either the template or jobs using it), since this configuration is created on the fly during a build rather than being stored permanently in the job. Configuration generated by a job (or folder) template is considered.

Wikitext Security Plugin

Introduction

The WikiText Security plugin addresses security concerns that arise from permitting arbitrary HTML in description fields in Jenkins. The plugin allows you to enter descriptions in one of multiple wiki markup languages.

The WikiText Security plugin was introduced in Nectar 11.04.

Overview

Jenkins lets users enter description fields in HTML as shown in Setting up project-specific descriptions. For some teams that are concerned about XSS attacks via the description fields, or have policy preferences towards writing wiki over HTML, this CloudBees Jenkins Enterprise plugin allows you to use one of several well-known wiki markup formats in place of HTML.

wikitext description
Figure 75. Setting up project-specific descriptions

Supported Wiki Markup Languages

The WikiText Security plugin supports several well-known wiki markup languages. The current list of supported languages appears under "Markup Formatter" in the global security configuration, described in the next section.

Configuration

Select the desired wiki language by going to "Manage Jenkins" and then "Configure Global Security". The supported wiki languages are listed under "Markup Formatter".

wikitext configuration

Using the Wikitext Security plugin

Once the plugin is installed and configured, click the “add description” link on a particular project and enter the description in the preferred wiki markup language. In Wikitext Usage, we have used the wiki markup to render the description in bold. Wikitext Output shows the corresponding output.

wikitext usage
Figure 76. Wikitext Usage
wikitext output
Figure 77. Wikitext Output
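
As an illustration, assuming the MediaWiki formatter were selected (your configured language may differ), a bold description could be written as:

This project builds the '''production''' artifacts.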

Secure Copy Plugin

Introduction

The Secure Copy plugin provides the ability to share artifacts meeting defined criteria between two jobs on different Jenkins instances (or the same Jenkins instance). The job that will consume the artifacts creates an import build step and is assigned a random key for that build step. The job that will produce the artifacts is then (manually) given the importer’s key and generates a secret which is (manually) given to the import build step. This sets up the trusted channel between the two jobs. Every time the consuming job runs, it will copy the latest build artifacts meeting the criteria defined in the producing job.

The Secure Copy plugin was introduced in CloudBees Jenkins Enterprise 12.04.

Creating an exported permalink

The Secure Copy plugin creates a one way link between two jobs. We will call the job producing the artifacts the Producer, and the job that will consume the artifacts the Consumer.

The Producer job must be configured to archive the artifacts to be shared.

  • For jobs using the Maven job type, if the Maven lifecycle is advanced to a phase on or after the “package” phase, all the artifacts that are attached to the Maven reactor will be automatically archived by the Jenkins job.

    Note
    Do not confuse a Maven project built using a Freestyle job type and a Maven Build Step with a Maven project built using a Maven job type. The first does not have automatic archiving of artifacts, while the second does.
  • For all other job types (e.g. Freestyle, Matrix, etc), it is necessary to configure the “Archive Artifacts” publisher.

    Note
    A Maven project built with a Freestyle job type will need the “Archive Artifacts” publisher configured.

The Producer job will also need the appropriate permalink which is used to select the build from which the artifacts will be copied. Jenkins comes with a number of built-in permalinks:

  • Last build - the most recent build. This is rarely useful, as it can include a currently running build.

  • Last stable build - the most recent stable build. Usually a good choice.

  • Last successful build - the most recent successful build. A good choice if you can accept artifacts from builds with test failures.

  • Last failed build - the most recent failed build. Probably not useful unless implementing some sort of automated failure analysis.

  • Last unstable build - the most recent unstable build. Again rarely useful.

  • Last unsuccessful build - the most recent unsuccessful build. Again rarely useful.

In addition, Jenkins plugins can define their own permalinks. For example, the “Promoted builds” plugin defines a permalink for each promotion process.

Creating an exported permalink starts with adding an “Import artifacts from an exported permalink” build step to the Consumer job. See Adding the “Import artifacts from an exported permalink” and After adding the “Import artifacts from an exported permalink”.

sec copy sel buildstep
Figure 78. Adding the “Import artifacts from an exported permalink”
sec copy key generated
Figure 79. After adding the “Import artifacts from an exported permalink”

When the build step has been added, a random key will be generated. This key will be provided to the Producer job, which will generate a secret to be given to the Consumer job.

Note

If the Producer and Consumer jobs are on the same Jenkins instance, or if you are navigating away from the Consumer job’s configure screen, be sure to save the configuration after adding the build step. The key is generated when the build step is added, and if the job is not saved, the next time the build step is added a new key will be generated.

On the Producer job navigate to the “Exports” screen (The exported permalinks screen) and select the “Create” option. Enter the key from the Consumer job (Creating an exported permalink) and select the permalink (Selecting the permalink to export). Click on the “Create” button to generate the secret (An exported permalink with generated secret).

sec copy producer exports
Figure 80. The exported permalinks screen
sec copy export create
Figure 81. Creating an exported permalink
sec copy select permalink
Figure 82. Selecting the permalink to export
sec copy secret generated
Figure 83. An exported permalink with generated secret

On the Consumer job’s configure screen enter the secret generated by the Producer job (A fully configured “Import artifacts from an exported permalink” build step) along with any other configuration options, such as:

  • An Apache Ant-style glob pattern to select a subset of artifacts to copy.

  • The target directory into which the artifacts should be copied.

  • Whether to flatten any directory structure information from the Producer job.

  • Whether this build step is optional; if not, the Consumer build will be marked as failed if there are no artifacts to copy.

sec copy secret entered
Figure 84. A fully configured “Import artifacts from an exported permalink” build step

High Availability

This section has moved to Installation Guide / High Availability

Pipeline Plugin Suite

Introduction

Pipeline is a set of plugins for Jenkins that make it possible to run complex, multistage builds based on a single programmatic script. CloudBees Jenkins Enterprise includes all the freely available core Pipeline features, augmented with some additional plugins useful for larger systems.

The Pipeline tutorial is the best place to get started understanding how to write Pipelines generally. Here we will discuss the CloudBees Jenkins Enterprise additions.

Checkpoints

All Pipelines are resumable: if Jenkins needs to be restarted (or crashes, or the server reboots) while a flow is running, it should resume at the same point in its program after Jenkins starts back up. Similarly, if a flow is running a lengthy sh or bat step when an agent unexpectedly disconnects, no progress should be lost when connectivity is restored.

However, in some cases a Pipeline will have done a great deal of work and proceeded to a point where a transient error occurred: one which does not reflect the inputs to this build, such as source code changes. For example, after completing a lengthy build and test of a software component, final deployment to a server might fail for a silly reason, such as a DNS error or low disk space. After correcting the problem you might prefer to restart just the last portion of the Pipeline, without needing to redo everything that came before.

The CJE-only checkpoint step makes this possible. Simply place a checkpoint at a safe point in your script, after performing some work and before doing something that might fail randomly:

node {
    sh './build-and-test'
}
checkpoint 'Completed tests'
node {
    sh './deploy'
}

Whenever build-and-test completes normally, this checkpoint will be recorded as part of the Pipeline, along with any program state at that point, such as local variables. If deploy in this build fails (or just behaved differently than you wanted), you can later go back and restart from this checkpoint in this build. (You can use the Checkpoints link in the sidebar of the original build, or the Retry icon in the stage view, mentioned below.) A new flow build (with a fresh number) will be started which skips over all the steps preceding checkpoint and just runs the remainder of the flow.

Restoring files

Restarted Pipelines retain the program state, such as the values of variables, from each run, but they do not maintain the workspaces on your agents. Subsequent runs of the same Pipeline will overwrite any files used in a particular run.

The workspace used for a specific run of a Pipeline is not guaranteed to be used again after restarting from a checkpoint. So if your post-checkpoint steps rely on local files in a workspace, not just the command you run, you will need to consider how to get those files back to their original condition before the checkpoint.

The correct approach is to always keep the checkpoint step outside of any node block, not associated with either an agent or a workspace. Prior to the checkpoint, use the stash step to save any important files, such as build products. If and when this build is restarted from the checkpoint, all of its stashes will be copied into the new build first. You can then use the unstash step to restore some or all of the saved files into your new workspace.
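
A minimal Scripted Pipeline sketch of this pattern, extending the earlier example (the stash name and paths are illustrative):

node {
    sh './build-and-test'
    // save the build products before leaving the node block
    stash includes: 'target/**', name: 'binaries'
}
// outside any node block, so no agent or workspace is assumed
checkpoint 'Completed tests'
node {
    // restore the saved files into the new (possibly different) workspace
    unstash 'binaries'
    sh './deploy'
}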

When using Declarative Pipeline syntax you are able to define an agent at the top level, giving all stages of the Pipeline access to the same workspace and files. This removes the need to worry about multiple agents or workspaces, but it prevents you from running steps of your Pipeline without an agent context and its associated workspace. In order to properly use a checkpoint you cannot use a top-level agent; instead, set the agent directive for each stage. The checkpoint may be isolated in its own stage with no agent, or it is possible to use the node step directly in a stage’s steps block, as shown in the two examples below.

Example using checkpoint in its own stage

pipeline {
  agent none
  stages {
    stage("Build") {
      agent any
      steps {
        echo "Building"
        stash includes: 'path/to/things/*', name: 'my-files'
      }
    }
    stage("Checkpoint") {
      agent none
      steps {
        checkpoint 'Completed Build'
      }
    }
    stage("Deploy") {
      agent any
      steps {
        unstash 'my-files'
        sh 'deploy.sh'
      }
    }
  }
}

Alternative example using node within a stage

pipeline {
  agent none

  stages{
    stage("Build"){
      steps{
        node('') { // this is equivalent to 'agent any'
          echo "Building"
          stash includes: 'path/to/things/*', name: 'my-files'
        }
        checkpoint "Build Done"
      }
    }
    stage("Deploy") {
      agent any
      steps {
        unstash 'my-files'
        sh 'deploy.sh'
      }
    }
  }
}

This Jenkinsfile gives a more complex example of this technique in Scripted Pipeline syntax. (If you are still using CloudBees Pipeline older than 1.5, or OSS Pipeline older than 1.10, the archive and unarchive steps can be used for the same purpose.)

Alternately, you could use any other technique to recover the original files. For example, if prior to the checkpoint you uploaded an artifact to a repository manager, and received an identifier or permalink of some kind which you saved in a local variable, after the checkpoint you can retrieve it using the same identifier.

Jenkins will not prevent you from restoring a checkpoint inside a node block (currently only a warning is issued), but this is unlikely to be useful because you cannot rely on the workspace being identical after the restart. You will still need to use one of the above methods to restore the original files. Also note that Jenkins will attempt to grab the same agent and workspace as the original build used, which could fail in the case of transient “cloud” agents. By contrast, when the checkpoint is outside node, the post-restart node can specify a label which can match any available agent (older documentation uses the term 'slave').

Stage View

When you have complex build Pipelines, it is useful to be able to see the progress of each stage. To that end, CloudBees Jenkins Enterprise includes an extended visualization of Pipeline build history on the index page of a flow project, under Stage View. (You can also click on Full Stage View to get a full-screen view.)

To take advantage of this view, you need to define stages in your flow. You can have as many stages as you like, in a linear sequence. (They may be inside other steps such as node if that is convenient.) Give each a short name that will appear in the GUI.

stage 'Checkout'
node {
  svn 'https://svn.mycorp/trunk/'
  stage 'Build'
  sh 'make all'
  stage 'Test'
  sh 'make test'
}

When you run some builds, the stage view will appear with Checkout, Build, and Test columns, and one row per build. When hovering over a stage cell, you can click the Logs button to see log messages printed in that stage: after the preceding stage step and before the next one.

Other important events are also indicated with special buttons, either in a stage cell or for the build as a whole. If a build is waiting in an input step, you will be able to tell it to proceed (or not). Download can be used to obtain any artifacts archived by a build. Retry can restart from a checkpoint if one was recorded.

Each row also records the build number, date it was started, and any changelog entries from a version control system. Progress bars also indicate how long each stage is taking and how long it might still be expected to take, based on historical averages.

Pipeline Job Templates

CloudBees Jenkins Enterprise supports Pipeline Job Templates, allowing you to capture common job types in a Pipeline Job Template and then to use that template to create instances of that Job type.

As an example, let us create a very simple "Hello print Pipeline template" that simply prints "Hello" to the console. The number of times "Hello" is printed is captured as a template attribute. We will then use that template to create an instance of that Job to print "Hello" 4 times.

Step 1: Create the Pipeline Template

Do this by selecting "New Item" on the Jenkins home page. On the "New Item" form, enter the template name, select "Job Template" as the item type, and press the "OK" button.

templates 1

Step 2: Configure the Pipeline Template

After creating the initial template you’ll need to configure it by:

  1. Specifying the template attributes. These will be injected as variables into the Pipeline script.

  2. Select "Groovy template for Pipeline" as the Transformer Type.

  3. Specifying the Pipeline script, using the template attributes (defined above) as variables.

templates 2
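
For example, a minimal script for the "Hello print Pipeline template" might look like this, assuming a template attribute named count that holds the number of repetitions:

node {
    // 'count' is assumed to be a template attribute injected as a String variable
    for (int i = 0; i < count.toInteger(); i++) {
        echo 'Hello'
    }
}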

Step 3: Create and run a Pipeline Job from the Pipeline Template

Now we can create a Build Job from the template:

  1. From the Jenkins home page, select "New Item".

  2. Enter the Job name, e.g. "Hello print Pipeline job".

  3. Select the template (created and configured in Step #1 and #2 above) as the Job type.

  4. Press the "OK" button.

templates 3

Once the Job instance is created you’ll be able to configure the template attributes (as specified in Step #2 above). In the case of our simple "Hello print Pipeline job", we want to print the "Hello" message 4 times:

templates 4

After specifying all attributes and pressing the "Save" button, you’ll then be able to run a build of the job and check its status in the Stage View or in the Console, for example.

templates 5

Pipeline script attribute controls

If you have defined a template using the Groovy template for Pipeline transformer, you may wish to allow the instance configurer (end user of the template) to write some Pipeline script.

For example, the template might define an overall structure for the job: selecting a node with an approved label, doing a checkout of an approved SCM, running a sh step against a user-defined main script, and using catchError or a try-finally block to run some mandatory reporting. The administrator may wish to allow the user to define an optional set of extra steps to run during the reporting phase, say.

To support this use case, define an attribute using the Pipeline script text control. This control is identical to the Text area control except that it offers a Pipeline-specific code editor. Suppose you named the attribute script. Then you need merely include the following fragment in the appropriate point in your overall script:

evaluate(script)
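
For example, a template script might combine a fixed structure with the user-supplied fragment like this. This is only a sketch; the attribute names mainScript and reportSteps, the node label, and the repository URL are all assumptions:

node('approved-linux') {
    // check out the approved repository (URL is an assumption)
    git 'https://git.mycorp/approved/repo.git'
    try {
        // run the user-supplied main build script
        sh mainScript
    } finally {
        // mandatory reporting that always runs
        junit '**/target/surefire-reports/*.xml'
        // optional extra reporting steps written by the template user
        if (reportSteps?.trim()) {
            evaluate(reportSteps)
        }
    }
}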

Custom Pipeline as Code Scripts

Pipeline as Code describes how to create multibranch projects and organization folders automatically from SCM repositories containing a Jenkinsfile. This file serves two functions: it contains the Pipeline script which defines the project’s behavior; and it indicates which repositories and branches are ready to be built by Jenkins.

For large organizations with a homogeneous build environment, keeping a Jenkinsfile in every repository could be undesirable for several reasons:

  • The content could become redundant. This risk can be minimized by using global libraries to define a DSL for the organization’s builds, though the syntax is limited to what Groovy supports.

  • Developers may not want to see any Pipeline script in their repositories. Again this can be minimized using global libraries to some extent: the learning curve can be restricted to copying and editing a short example Jenkinsfile.

  • Some organizations may not want to have any Jenkins-specific files in repositories at all.

  • Administrators may not trust developers to write Pipeline script, or more broadly to define job behavior. While the Groovy sandbox defends against malicious scripts (for example, changing system configuration or stealing secrets), it does not prevent overuse of shared build agents, questionable plugin usage, lack of timeouts or resource caps, or other mistakes or poor decisions.

To support administrators wanting to restrict how Pipelines are defined, CloudBees Jenkins Enterprise includes an additional option for multibranch projects and organization folders. With this option, the configuration in Jenkins consists of both the name of a marker file, and the Pipeline script to run when it is encountered.

For a single repository, create a Multibranch Pipeline. Configure your branch source as usual, but then under Build Configuration » Mode, select Custom script. Select a Marker file, which will be the name of a file in matching repositories. Under Pipeline » Definition, you may type in a Pipeline script directly, or use Pipeline script from SCM to load the definition from an external source.

custom factory single

More commonly, you will want to configure a project definition applicable across an entire organization. To do this, create a GitHub Organization. Configure your repository source as usual, but then under Project Recognizers, delete Pipeline Jenkinsfile and add Custom script instead.

custom factory multiple

In this example, any repository containing a Maven pom.xml at top level will be recognized as a buildable Jenkins project. As usual with Pipeline as Code, checkout scm checks out the repository sources on a build node.
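
The custom script paired with the pom.xml marker could be as simple as the following sketch (the Maven invocation is an assumption):

node {
    // check out the sources of the recognized repository branch
    checkout scm
    // build with a conventional Maven invocation
    sh 'mvn -B clean verify'
}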

Note that you may have multiple project recognizers defined for a single organization: say, one for pom.xml and one for build.gradle. Only one recognizer will be used for a given repository: the first one that finds its marker file in at least one branch. You can even include the standard Pipeline Jenkinsfile recognizer in the list as one option.

Trusted files

There is also a facility for limiting what changes may be made in untrusted branches (such as pull requests filed by people not authorized to push to the main repository). In the standard recognizer, Jenkinsfile will always be read from a trusted branch (such as the base branch of an untrusted pull request). This ensures that an outsider cannot alter the overall build process merely by filing a pull request.

When using custom factories, or in some cases when using Jenkinsfile, you may want to restrict modifications to other files too. To do so, just have your script call the readTrusted step, which returns the text of a repository file, aborting if it has been modified in an untrusted branch. For example, to simulate the standard Pipeline Jenkinsfile recognizer you could use a custom recognizer based on Jenkinsfile with the script

evaluate(readTrusted('Jenkinsfile'))

(Currently the standard recognizer would quietly ignore modifications to Jenkinsfile in untrusted pull requests, whereas this script would fail the build.)

The return value could be ignored if you merely wanted to ensure that a file used by some other process was not modified. With the marker file Makefile:

readTrusted 'Makefile'
node {
  checkout scm
  sh 'make'
}

Untrusted pull requests could edit regular source files, but not Makefile.

For a more complex example, suppose you wanted to run make inside a Docker image, optionally specified in the repository, and optionally performing some custom Pipeline steps, such as sending mail notifications or publishing test results:

node {
  checkout scm
  docker.image(fileExists('image') ? readFile('image').trim() : 'ubuntu').inside {
    sh 'make'
  }
  if (fileExists('extra.groovy')) {
    evaluate(readTrusted('extra.groovy'))
  }
}

Here anyone is free to edit Makefile, because it is run inside a container, and to create or edit image to specify the build environment. But only committers to the repository can create or edit extra.groovy.

Restarting Aborted Builds Plugin

Introduction

When running a large installation with many users and many jobs, the CloudBees Jenkins Enterprise High Availability feature helps bring your Jenkins back up quickly after a crash, power outage, or other system failure. If any jobs were waiting in the scheduler queue at the time of the crash, they will be there after a restart.

Note
Running builds, however, cannot be resumed where they left off. Recreating the context of the build is, in general, too complex. Long-running builds may be used when the context is simple.

Furthermore, after a hard crash (as opposed to a graceful shutdown), Jenkins may even lose the build record.

The Restart Aborted Builds plugin in CloudBees Jenkins Enterprise helps manage exceptional halts like this. First off, it ensures that at least a partial record of every build is saved immediately after it starts and after every configured build step.

  • If the build completes within Jenkins’s control—including build failures, manual aborts, and termination due to scheduled shutdown—nothing special is done.

  • If Jenkins is halted suddenly, whether due to a crash or freeze of Jenkins itself or a general system crash, the list of all currently-running builds is recorded for use when Jenkins is next restarted. At that time, all aborted builds are displayed in an administrative page, where they can be inspected or easily restarted.

The Restart Aborted Builds plugin was introduced in CJE 13.05 and requires Jenkins core 1.509 or later.

Using the Restart Aborted Builds Plugin

If your Jenkins instance was abruptly terminated, then after restart, navigate to Manage Jenkins. If there were builds in progress, you will see a warning at the top of this screen.

admin warning
Figure 85. Aborted Builds Administrative Warning
  1. Click on the link to see a list of all the running builds known to have been interrupted. You can click on any build to see details about it, such as the changelog or console log up to the break point. If the job was parameterized, the list will display the parameter values for that build as a convenience.

    index
    Figure 86. Aborted Builds List
  2. Click the Restart build button next to an aborted build. A new build of the same job is scheduled including any associated parameters. Restarting the build either with this button or in any other manner will remove the item from the list.

Note
If diagnosis of bugs is needed, the list of aborted builds is saved in $JENKINS_HOME/abortedBuilds.xml.

Long-Running Build Plugin

Introduction

What happens to builds that were running when Jenkins crashes, or is restarted not in “safe” mode (waiting for running builds to complete)? Whether or not you are using High Availability to start another Jenkins master, builds of regular projects that were already running will be aborted. The Restart Aborted Builds plugin will at least let you find and reschedule them. But for builds of projects which normally take a long time, perhaps hours or even days, this is not enough.

To address the needs of people who have builds that are just too long to interrupt every time a Jenkins agent is reconnected (or Jenkins is restarted for a plugin update!), CloudBees Jenkins Enterprise includes a plugin offering a “long-running” project type. The configuration is almost the same as for a standard free-style project, with one difference: the part of your build that you want to run apart from Jenkins should be configured as a (Unix) shell or (Windows) batch step. Of course this script could in turn run Maven, Make, or other tools.

If the agent is reconnected or Jenkins restarted during this “detached build” phase, your build keeps on running uninterrupted on the agent machine (so long as that machine is not rebooted of course). When Jenkins makes contact with the agent again, it will continue to show log messages where it left off, and let the build continue. After the main phase is done, you can run the usual post-build steps, such as archiving artifacts or recording JUnit-style test results.

The Long-Running Build plugin was introduced in CJE 14.05.

Using the Long-Running Build plugin

Create a new job, select Long-Running Project, and note the Detached Build section. Pick a kind of build step to run (Bourne shell for Unix agents, or batch script for Windows agents) and enter some commands to run in your build’s workspace.

long-running-build-img-config
Figure 87. Long-Running Build Configuration
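
For example, a detached Bourne shell step might contain commands like the following (illustrative only):

# long-running work that should survive a Jenkins restart or lost connection
./configure
make all
make check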

When the project is built, initially the executor widget will look the same as it would for a freestyle project. During this initial phase, SCM checkouts/updates and similar prebuild steps may be performed. Soon you will see a task in the widget with the (detached) annotation. This means that your main build step is running, and should continue running even if the Jenkins server is halted or loses its connection to the agent. (So long as the connection is open, you should see any new output produced by the detached build step in your build log, with a delay of a few seconds.)

exec
Figure 88. Long-Running Build Execution

Finally the task label will say (post steps) while any post-build actions are performed, such as archiving artifacts or recording JUnit test results. This phase does not survive a crash: it requires a constant connection from the Jenkins master to the agent.

Limitations

There are a number of limitations and restrictions on what Jenkins features work in long-running builds. Generally speaking, anything that works in a freestyle project in the pre-build or post-build phase should also work. But general build steps are not available except as part of the non-detached pre-build phase, and build wrappers will generally not work. Also surviving a restart can only work if Jenkins can reconnect to the exact same agent without that machine having rebooted, so this will generally not work on cloud-provisioned agents. Consult the release notes for this plugin to see other issues.

Nodes Plus Plugin

Introduction

The Nodes Plus plugin provides additional functionality for managing Jenkins build nodes, including:

  • The ability to assign owners to nodes who will be notified when the node availability changes.

Node owners

In a Jenkins instance that is shared by a large team (or a number of teams) there can often be a pattern whereby certain individuals are responsible for specific build nodes. If a build node goes off-line and fails to come back on-line, builds which are tied to that node can end up stuck in the build queue. Most Jenkins users eventually settle on filtering out the e-mail notifications associated with successful builds (either by configuring the job to email only when the build fails, or by setting up rules in their mail client). Thus if a specific project’s builds are stuck in the build queue, nobody may notice until they actually browse the Jenkins instance in their web browser.

The Node Owners property introduced by the Nodes Plus plugin provides an e-mail notification to the designated owners based on a configurable set of availability triggers.

Note

In some cases Jenkins can take a number of minutes to confirm that a node is off-line. E-mail notifications are only sent after Jenkins has confirmed that the node is off-line, which may involve waiting for socket connections to time-out.

Configuring Node owners

On the node configuration screen, enable the Node owners checkbox in the Node Properties section. The configuration options should then be visible (Configuring the node owners).

node owners property
Figure 89. Configuring the node owners

Enter the list of people who should receive e-mails when the node availability changes in the Owner(s) input box. E-mail addresses should be separated by whitespace, commas, or blank lines.

Send email when connected

This trigger fires when the communication channel has been established with the node. If the node has been marked as Temporarily off-line then no build jobs will be accepted by the node. Note: this trigger will only fire on the transition from disconnected to connected.

Send email when disconnected

This trigger fires when the communication channel with the node has been confirmed dead. Where a node is configured to be kept on-line as much as possible Jenkins will immediately try to reconnect the node, and so in such cases the email from this trigger will be immediately followed by either the launch failure or successful connection email. As such this trigger is typically most useful where one of the other availability strategies has been selected for the node. Note: this trigger will only fire on the transition from connected to disconnected.

Send email on launch failure

This trigger fires when an attempt to establish a communication channel with the node fails. Note: this trigger will fire each and every time a connection attempt fails.

Send email when temporary off-line mark applied

This trigger fires when the node is marked as temporarily off-line.

NOTE: This trigger will fire within the first 5 seconds of the node being marked off-line.

Send email when temporary off-line mark removed

This trigger fires when the temporarily off-line mark has been removed from the node.

NOTE: This trigger will fire within the first 5 seconds of the node ceasing to be marked off-line.

Save the node configuration to apply the changes.

Support Plugin

Introduction

The Support plugin provides the ability to generate a bundle of all the commonly requested information used by CloudBees when resolving support issues.

Since some CloudBees customers purchase support contracts without being licensed for CloudBees Jenkins Enterprise, this plugin is also available for download and installation into any Jenkins installation. Simply go to the plugin’s release notes, pick the latest version, and click the link in the Download section (for example, http://jenkins-updates.cloudbees.com/download/plugins/cloudbees-support/latest/cloudbees-support.hpi). You can install the downloaded plugin using Manage Jenkins » Manage Plugins » Advanced » Upload Plugin.

Generating a bundle

To generate a bundle, simply go to the CloudBees Support link on the Jenkins instance.

generate bundle
Figure 90. Generating a bundle

The support bundle screen provides a list of all the classes of information that can be included in the support bundle (Generating a bundle). Normally it is best to include all the selected information; however, there may be some information which you do not want to share for reasons of confidentiality. In such cases you can de-select the information you do not want to include in the bundle.

Click the Generate Bundle button to download the bundle to your machine. A bundle is a simple .zip file containing mostly plain text files. You can inspect the contents of this file to assure yourself that it does not contain information you do not want to share with CloudBees. You can even unpack and repack the bundle if there is some specific piece of information that you need to remove.

When you are happy with the bundle, just attach the bundle to your CloudBees support ticket.

Getting a bundle from the CLI

If there is a problem accessing the Jenkins web UI, it is of course going to be difficult to diagnose it by going to that same UI and getting a bundle. As an alternative, you can use the Jenkins CLI to obtain a bundle from a shell. Before you encounter problems, make sure you have downloaded jenkins-cli.jar from the server (go to /cli/ to get a link). You may also need to authenticate to the CLI, for example by uploading an SSH public key to your Jenkins user account. Then try:

java -jar /path/to/jenkins-cli.jar -s http://server/ support

This will generate a support bundle, save it to your local computer, and print the filename. You can also request only specific components to include; to see the currently available list, ask for the command’s help:

java -jar /path/to/jenkins-cli.jar -s http://server/ help support

Getting a bundle when Jenkins will not start

When the CloudBees Support Plugin is installed, it automatically stores a bundle every hour in $JENKINS_HOME/cloudbees-support. These bundles are purged using an exponential retention strategy so that they do not overflow disk space.

If your Jenkins instance is not starting correctly, this is usually due to some class-loading conflict among the set of plugins that you have installed in your instance. Normally such class-loading conflicts will just result in one of your plugins failing to load; however, in some extreme cases a plugin failing to load can cause a second plugin to render the UI of your Jenkins instance inaccessible.

In such cases, the latest support bundle (and some representative historical versions) can be very helpful for CloudBees in ensuring a rapid response and restoration of your Jenkins to an accessible state.

The support bundles are the files $JENKINS_HOME/cloudbees-support/cloudbees-support_YYYY-MM-DD_hh.mm.ss.zip where YYYY-MM-DD and hh.mm.ss are the UTC date and time respectively when the bundle was generated.

You can inspect the contents of these file(s) to assure yourself that they do not contain information you do not want to share with CloudBees. You can even unpack and repack bundle(s) if there is some specific piece of information that you need to remove from some bundle(s).

When you are happy with the bundle(s), just attach the bundle(s) to your CloudBees support ticket.

Controlling who can generate Support bundles

By default, only Jenkins administrators can generate support bundles. The CloudBees Support Plugin defines an additional security permission within Jenkins (CloudBees Support/DownloadBundle). You can assign this permission to any users that you want to be able to generate support bundles, even the Anonymous user. The content of such bundles can be useful in diagnosing authentication issues that specific users are having.

Note

Our general recommendation is to only allow trusted users to generate support bundles. Enabling the anonymous user to generate a bundle should be time-bounded and used specifically to obtain a bundle from a user who is having issues with authentication.

Note

In order to reduce the possibility of the support plugin leaking confidential information about your Jenkins installation, certain components are not available for selection when a non-administrator generates a support bundle. For example, the Environment variables, System properties and Thread dumps components are disabled (Generating a bundle as a user who is not a Jenkins administrator), as these components tend to include information that is not for general consumption.

generate restricted
Figure 91. Generating a bundle as a user who is not a Jenkins administrator

Consolidated Build View Plugin

Introduction

In Jenkins it is common to create a number of jobs to implement one overall project or process. While build steps within a job must run serially on one computer, distinct jobs can run in parallel on different machines. The basic upstream job trigger in Jenkins core, as well as facilities from various plugins, let you define relationships between jobs so that builds are triggered in the correct order.

One difficulty is in visualizing the results of all these builds. The Consolidated Build View Plugin lets you quickly see which builds were run for what reason, and immediately jump to the console log for each.

The Consolidated Build View plugin was introduced in CloudBees Jenkins Enterprise 13.05.

Using the Consolidated Build View plugin

To start up the consolidated build view, select the most upstream job of your set: typically the one which is directly triggered by hand or on a schedule. Click Configure on this job and check Show Consolidated Build View.

configure
Figure 92. Enabling the consolidated build view

Under Builds you can select how Jenkins will locate related builds of interest. Initially only Downstream Builds is supported. This mode tracks builds considered by Jenkins core to be “downstream” of another build. Those might be from jobs configured to be triggered with the option Build after other projects are built or, equivalently, configured in the upstream job with the post-build action Build other projects. Also included are builds from arbitrary other projects which fingerprint a workspace file (e.g. from the Copy Artifact plugin) that is identical to a fingerprinted artifact of an upstream build. The consolidated view will recursively show downstream builds of downstream builds as well.

Under Columns you can pick what bits of information should be displayed for each matching build, in addition to the basic build information (job name, number, and status). You can choose to show why the build was started, such as having been triggered by an upstream build; how long the build took (or has taken, if ongoing); and which node (master or agent) ran that build. Link to console output lets you jump to the build log.

view
Figure 93. Showing the consolidated build view

To see the consolidated build view, just click the Consolidated Build View link for one of the upstream builds, or choose the corresponding item from its context menu.

Quiet Start Plugin

Normally when Jenkins is restarted, it immediately starts trying to schedule jobs to run. If you are taking the server down for maintenance, you may not want everything to start running immediately.

If so, go to Manage Jenkins » Quiet Restart and just check the box you see there. When the server is restarted, it will be in the “quieting down” state. An administrator can cancel that state using the regular UI.

You may also wish to uncheck this box after maintenance is complete, so that after subsequent restarts jobs will start running.

NIO SSH Agents Plugin

Introduction

The NIO SSH Agents plugin is an alternative SSH agent connector that uses a Non-Blocking I/O architecture. This alternative connector shows different scalability properties from the standard SSH agent connector.

There are three main differences in scalability characteristics:

  • The Non-blocking I/O connector limits the number of threads that are used to maintain the SSH channels. Thus when there are a large number of channels (i.e. many SSH agents) the Non-blocking connector will use fewer threads, and consequently the Jenkins UI will remain more responsive than with the standard SSH agent connector.

  • When the Non-blocking I/O connector requires more CPU resources than are available, it responds by applying back-pressure to the channels generating the load. This allows the system to remain responsive at the cost of increased build times. It is important to note that under this type of load the traditional SSH agent connector typically loses the connection, with a corresponding build failure.

  • The Non-blocking I/O connector is optimized for reduced connection time. For example, it avoids copying the agent JAR file unless necessary; and by default it suppresses the logging of the agent environment.

Another important technical note is that the SSH client library used in the Non-Blocking I/O connector currently only supports RSA and DSA key types, and the maximum key size is determined by the JCE policy of the Jenkins master’s JVM. Without installation of the unrestricted policy, the RSA key size will be limited to 2048 bits. Finally, the Non-Blocking I/O connector currently does not support connecting to Cygwin or other Microsoft Windows based SSH servers.

The NIO SSH Agents plugin was introduced in CloudBees Jenkins Enterprise 13.11.

Using the NIO SSH Agents plugin

The plugin adds an additional launch method: Launch agents on Unix machines via SSH (Non-blocking I/O). The configuration options are almost identical to those in the traditional SSH Agents connector.

configure
Figure 94. Configuring the NIO SSH Agents launcher

The differences between the two plugins’ configuration parameters are as follows:

  • If specified, the Prefix Start Agent Command will be appended with a space character before being prepended to the start agent command. The traditional SSH Agents connector requires that the user remember to end the prefix with a space or a semicolon in order to avoid breaking the agent launch command.

  • If specified, the Suffix Start Agent Command will be prepended with a space character before being appended to the start agent command. The traditional SSH Agents connector requires that the user remember to start the suffix with a space or a semicolon in order to avoid breaking the agent launch command.

  • The traditional SSH Agents connector will always copy the agent environment variables to the agent launch log. The NIO SSH Agents connector provides this as an option that defaults to off. The logging of the agent environment is only of use when debugging initial connection issues and has a performance impact on the startup time. Once the agent channel has been established, the agent environment variables are accessible via the node’s System Information screen.

Monitoring Plugin

Introduction

The open source metrics plugin for Jenkins collects various metrics about how Jenkins is performing. The CloudBees Monitoring plugin adds alerting functionality based on when metrics deviate from user defined ranges.

The Monitoring plugin was introduced in CJE 14.05.

Support for metrics based views has been dropped from the Monitoring plugin as of version 2.0 of the plugin in favour of CloudBees Jenkins Analytics.

Metrics based alerts

This feature allows you to define different metrics based alerts and have Jenkins send emails when the alerts start and finish.

When the feature is enabled it adds an Alerts action to the top level Jenkins actions. The Alerts action allows viewing the status of all the defined alerts as well as providing the ability to silence specific alerts.

Note

In order for the alerting via email to function, Jenkins must be configured to be able to send emails.

Creating some basic alerts

The following instructions will create four basic alerts:

  • An alert that triggers if any of the health reports are failing

  • An alert that triggers if the file descriptor usage on the master goes above 80%

  • An alert that triggers if the JVM heap memory usage is over 80% for more than a minute

  • An alert that triggers if the 5 minute average of HTTP/400 (bad request) responses goes above 10 per minute for more than five minutes

These instructions assume you have configured Jenkins with the SMTP settings required for sending emails.

  1. Login as an administrator and navigate to the main Jenkins configuration screen.

    gs ca01
  2. Scroll down to the Alerts section.

    gs ca02
  3. Click the Add button corresponding to the Conditions.

  4. Select the Health check score option.

    gs ca03
  5. Specify Health checks as the Alert title. Leave the Alert after at 5 seconds. If you want to specify additional recipients for this health check only, you can add them. Emails will be sent to the Global Recipients as well as any alert-specific Recipients.

    gs ca04
  6. Click the Add button corresponding to the Conditions.

  7. Select the Local metric gauge within range option.

    gs ca05
  8. Specify vm.file.descriptor.ratio as the Gauge. Specify 0.8 as Alert if above. Specify File descriptor usage below 80% as the Alert title. Leave the Alert after at 5 seconds.

    gs ca06
  9. Click the Add button corresponding to the Conditions.

  10. Select the Local metric gauge within range option.

    gs ca07
  11. Specify vm.memory.heap.usage as the Gauge. Specify 0.8 as Alert if above. Specify JVM heap memory usage below 80% as the Alert title. Specify the Alert after as 60 seconds.

    gs ca08
  12. Click the Add button corresponding to the Conditions.

  13. Select the Local metric meter within range option.

    gs ca09
  14. Specify http.responseCodes.badRequest as the Meter. Specify 5 minute average as the Value. Specify 0.16666666 as Alert if above (meter rates are all reported in events per second, so 10 per minute ÷ 60 ≈ 0.167 per second). Specify Less than 10 bad requests per minute as the Alert title. Specify the Alert after as 300 seconds.

    gs ca10
  15. Click the Add button corresponding to the Global Recipients.

  16. Select the Email notifications option.

    gs ca11
  17. Specify the alert email recipients as a whitespace or comma separated list in the Email addresses text box.

    gs ca12
  18. Save the configuration.

  19. The main Jenkins root page should now have an Alerts action. Click on this action to view the alerts.

    gs ca13

Managing alerts

Each alert can be in one of four states:

Table 14. Alert states

Failing: The alert condition is met for less than the Alert after duration.

Failed: The alert condition has been met for at least the Alert after duration.

Recovering: The alert condition is not met for less than the Alert after duration.

Recovered: The alert condition is not met for at least the Alert after duration.

Notification emails will be sent for any alerts that are not silenced on either of the transitions:

  • Failing to Failed

  • Recovering to Recovered

The alerts are checked every 5 seconds. The Alerts page displays the current value of each alert condition. If the condition has changed in between these alert checks then the UI may show the alert in a mixed state such as in An alert where the condition has changed prior to the periodic checks running.

gs ma01
Figure 95. An alert where the condition has changed prior to the periodic checks running

However, once the periodic check runs, the condition will enter either the Failing or Recovering state.

gs ma02
Figure 96. An alert having entered the Failing state

If the condition changes before the condition’s Alert after time expires then no notifications will be sent.

gs ma03
Figure 97. An alert having entered the Recovering state

On the other hand, if the condition stays constant for the entire Alert after time then a notification will be sent.

gs ma04
Figure 98. An alert having entered the Failed state

The Silence button can be used to suppress the sending of notifications for specific alerts. The alerts are re-enabled using the Enable button.

gs ma05
Figure 99. Some alerts having been silenced

Reference

The open source Jenkins Metrics Plugin defines an API for integrating the Dropwizard Metrics API within Jenkins, defines a number of standard metrics, and provides some basic health checks. This section details the standard metrics and basic health checks available in version 3.0.3 of the Metrics Plugin.

Standard metrics

There are five types of metric defined in the Dropwizard Metrics API (a short illustrative sketch follows this list):

  • A gauge is an instantaneous measurement of a value

  • A counter is a gauge that tracks the count of something

  • A meter measures the rate of events over time. Meters provide five metrics:

    • the number of observed events

    • the average rate of all observed events

    • the average rate of observed events in the past minute

    • the average rate of observed events in the past five minutes

    • the average rate of observed events in the past fifteen minutes

  • A histogram measures the statistical distribution of values in a stream of data. Histograms provide the following metrics:

    • the number of observed values

    • the average of all observed values

    • the standard deviation of observed values

    • the minimum observed value

    • the maximum observed value

    • the 50th percentile observed value

    • the 75th percentile observed value

    • the 95th percentile observed value

    • the 98th percentile observed value

    • the 99th percentile observed value

    • the 99.9th percentile observed value

      Histograms also maintain a reservoir sample of the stream data. In the Jenkins Metrics Plugin the standard metric histograms use exponentially decaying reservoirs based on a forward-decaying priority reservoir with an exponential weighting towards newer data. Unlike some other exponentially decaying reservoirs this strategy has the advantage of maintaining a statistically representative sampling reservoir.

  • A timer is basically a histogram of the duration of events coupled with a meter of the rate of the event occurrence. Timers provide the following metrics:

    • the number of observed events

    • the average rate of all observed events

    • the average rate of observed events in the past minute

    • the average rate of observed events in the past five minutes

    • the average rate of observed events in the past fifteen minutes

    • the average duration of all observed events

    • the standard deviation of observed event durations

    • the minimum observed event duration

    • the maximum observed event duration

    • the 50th percentile observed event duration

    • the 75th percentile observed event duration

    • the 95th percentile observed event duration

    • the 98th percentile observed event duration

    • the 99th percentile observed event duration

    • the 99.9th percentile observed event duration

      Timers also maintain an exponentially decaying reservoir sample of the event duration data. These exponentially decaying reservoirs use a forward-decaying priority reservoir with an exponential weighting towards newer data. Unlike some other exponentially decaying reservoirs, this strategy has the advantage of maintaining a statistically representative sampling reservoir.
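
To make the five types concrete, here is a minimal Groovy sketch using the com.codahale.metrics classes from the Dropwizard Metrics library that the Metrics Plugin builds on. The registry and the demo.* metric names are purely illustrative; they are not metrics defined by the plugin.

import com.codahale.metrics.Gauge
import com.codahale.metrics.MetricRegistry

def registry = new MetricRegistry()

// gauge: an instantaneous measurement, computed on demand
registry.register('demo.queue.length', { 42 } as Gauge)

// counter: a gauge that tracks the count of something
def counter = registry.counter('demo.builds.active')
counter.inc()
counter.dec()

// meter: measures the rate of events over time
def meter = registry.meter('demo.requests')
meter.mark()                    // record one event
println meter.oneMinuteRate     // events per second

// histogram: the statistical distribution of a stream of values
def histogram = registry.histogram('demo.response.sizes')
histogram.update(512)
println histogram.snapshot.get99thPercentile()

// timer: a histogram of event durations plus a meter of the event rate
def timer = registry.timer('demo.requests.duration')
def context = timer.time()
// ... the work being timed ...
context.stop()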

System and Java Virtual Machine metrics
system.cpu.load (gauge)

The system load on the Jenkins master as reported by the JVM’s Operating System JMX bean. The calculation of system load is operating system dependent. Typically this is the sum of the number of processes that are currently running plus the number that are waiting to run. This is typically comparable against the number of CPU cores.

vm.blocked.count (gauge)

The number of threads in the Jenkins master JVM that are currently blocked waiting for a monitor lock.

vm.count (gauge)

The total number of threads in the Jenkins master JVM. This is the sum of vm.blocked.count, vm.new.count, vm.runnable.count, vm.terminated.count, vm.timed_waiting.count and vm.waiting.count.

vm.cpu.load (gauge)

The rate of CPU time usage by the JVM per unit time on the Jenkins master. This is equivalent to the number of CPU cores being used by the Jenkins master JVM.

vm.daemon.count (gauge)

The number of threads in the Jenkins master JVM that are marked as Daemon threads.

vm.deadlocks (gauge)

The number of threads that have a currently detected deadlock with at least one other thread.

vm.file.descriptor.ratio (gauge)

The ratio of used to total file descriptors. (This is a value between 0 and 1 inclusive)

vm.gc.<name>.count (gauge)

The number of times the garbage collector has run. The names are supplied by and dependent on the JVM. There will be one metric for each of the garbage collectors reported by the JVM.

vm.gc.<name>.time (gauge)

The amount of time spent in the garbage collector. The names are supplied by and dependent on the JVM. There will be one metric for each of the garbage collectors reported by the JVM.

vm.memory.heap.committed (gauge)

The amount of memory, in the heap that is used for object allocation, that is guaranteed by the operating system as available for use by the Jenkins master JVM. (Units of measurement: bytes)

vm.memory.heap.init (gauge)

The amount of memory, in the heap that is used for object allocation, that the Jenkins master JVM initially requested from the operating system. (Units of measurement: bytes)

vm.memory.heap.max (gauge)

The maximum amount of memory, in the heap that is used for object allocation, that the Jenkins master JVM is allowed to request from the operating system. This amount of memory is not guaranteed to be available for memory management if it is greater than the amount of committed memory. The JVM may fail to allocate memory even if the amount of used memory does not exceed this maximum size. (Units of measurement: bytes)

vm.memory.heap.usage (gauge)

The ratio of vm.memory.heap.used to vm.memory.heap.max. (This is a value between 0 and 1 inclusive)

vm.memory.heap.used (gauge)

The amount of memory, in the heap that is used for object allocation, that the Jenkins master JVM is currently using. (Units of measurement: bytes)

vm.memory.non-heap.committed (gauge)

The amount of memory, outside the heap that is used for object allocation, that is guaranteed by the operating system as available for use by the Jenkins master JVM. (Units of measurement: bytes)

vm.memory.non-heap.init (gauge)

The amount of memory, outside the heap that is used for object allocation, that the Jenkins master JVM initially requested from the operating system. (Units of measurement: bytes)

vm.memory.non-heap.max (gauge)

The maximum amount of memory, outside the heap that is used for object allocation, that the Jenkins master JVM is allowed to request from the operating system. This amount of memory is not guaranteed to be available for memory management if it is greater than the amount of committed memory. The JVM may fail to allocate memory even if the amount of used memory does not exceed this maximum size. (Units of measurement: bytes)

vm.memory.non-heap.usage (gauge)

The ratio of vm.memory.non-heap.used to vm.memory.non-heap.max. (This is a value between 0 and 1 inclusive)

vm.memory.non-heap.used (gauge)

The amount of memory, outside the heap that is used for object allocation, that the Jenkins master JVM is currently using. (Units of measurement: bytes)

vm.memory.pools.<name>.usage (gauge)

The usage level of the memory pool, where a value of 0 represents an unused pool while a value of 1 represents a pool that is at capacity. The names are supplied by and dependent on the JVM. There will be one metric for each of the memory pools reported by the JVM.

vm.memory.total.committed (gauge)

The total amount of memory that is guaranteed by the operating system as available for use by the Jenkins master JVM. (Units of measurement: bytes)

vm.memory.total.init (gauge)

The total amount of memory that the Jenkins master JVM initially requested from the operating system. (Units of measurement: bytes)

vm.memory.total.max (gauge)

The maximum amount of memory that the Jenkins master JVM is allowed to request from the operating system. This amount of memory is not guaranteed to be available for memory management if it is greater than the amount of committed memory. The JVM may fail to allocate memory even if the amount of used memory does not exceed this maximum size. (Units of measurement: bytes)

vm.memory.total.used (gauge)

The total amount of memory that the Jenkins master JVM is currently using. (Units of measurement: bytes)

vm.new.count (gauge)

The number of threads in the Jenkins master JVM that have not yet started execution.

vm.runnable.count (gauge)

The number of threads in the Jenkins master JVM that are currently executing in the JVM. Some of these threads may be waiting for other resources from the operating system such as the processor.

vm.terminated.count (gauge)

The number of threads in the Jenkins master JVM that have completed execution.

vm.timed_waiting.count (gauge)

The number of threads in the Jenkins master JVM that have suspended execution for a defined period of time.

vm.uptime.milliseconds (gauge)

The number of milliseconds since the Jenkins master JVM started.

vm.waiting.count (gauge)

The number of threads in the Jenkins master JVM that are currently waiting on another thread to perform a particular action.

Web UI metrics
http.activeRequests (counter)

The number of currently active requests against the Jenkins master Web UI.

http.responseCodes.badRequest (meter)

The rate at which the Jenkins master Web UI is responding to requests with an HTTP/400 status code.

http.responseCodes.created (meter)

The rate at which the Jenkins master Web UI is responding to requests with an HTTP/201 status code.

http.responseCodes.forbidden (meter)

The rate at which the Jenkins master Web UI is responding to requests with an HTTP/403 status code.

http.responseCodes.noContent (meter)

The rate at which the Jenkins master Web UI is responding to requests with an HTTP/204 status code.

http.responseCodes.notFound (meter)

The rate at which the Jenkins master Web UI is responding to requests with an HTTP/404 status code.

http.responseCodes.notModified (meter)

The rate at which the Jenkins master Web UI is responding to requests with an HTTP/304 status code.

http.responseCodes.ok (meter)

The rate at which the Jenkins master Web UI is responding to requests with an HTTP/200 status code.

http.responseCodes.other (meter)

The rate at which the Jenkins master Web UI is responding to requests with a non-informational status code that is not in the list: HTTP/200, HTTP/201, HTTP/204, HTTP/304, HTTP/400, HTTP/403, HTTP/404, HTTP/500, or HTTP/503.

http.responseCodes.serverError (meter)

The rate at which the Jenkins master Web UI is responding to requests with an HTTP/500 status code.

http.responseCodes.serviceUnavailable (meter)

The rate at which the Jenkins master Web UI is responding to requests with an HTTP/503 status code.

http.requests (timer)

The rate at which the Jenkins master Web UI is receiving requests and the time spent generating the corresponding responses.

Jenkins specific metrics
jenkins.executor.count.value (gauge)

The number of executors available to Jenkins. This corresponds to the sum of all the executors of all the on-line nodes.

jenkins.executor.count.history (histogram)

The historical statistics of jenkins.executor.count.value.

jenkins.executor.free.value (gauge)

The number of executors available to Jenkins that are not currently in use.

jenkins.executor.free.history (histogram)

The historical statistics of jenkins.executor.free.value.

jenkins.executor.in-use.value (gauge)

The number of executors available to Jenkins that are currently in use.

jenkins.executor.in-use.history (histogram)

The historical statistics of jenkins.executor.in-use.value.

jenkins.health-check.count (gauge)

The number of health checks associated with the HealthCheckRegistry defined within the Jenkins Metrics Plugin

jenkins.health-check.duration (timer)

The rate at which the health checks are being run and the duration of each health check run. The Jenkins Metrics Plugin, by default, will run the health checks once per minute. The frequency can be controlled by the jenkins.metrics.api.Metrics.HEALTH_CHECK_INTERVAL_MINS system property. In addition, the Metrics Plugin’s Operational Servlet can be used to request that the health checks be run on demand.

jenkins.health-check.inverse-score (gauge)

The ratio of health checks reporting failure to the total number of health checks. Larger values indicate decreasing health as measured by the health checks. (This is a value between 0 and 1 inclusive)

jenkins.health-check.score (gauge)

The ratio of health checks reporting success to the total number of health checks. Larger values indicate increasing health as measured by the health checks. (This is a value between 0 and 1 inclusive)

jenkins.job.blocked.duration (timer)

The rate at which jobs in the build queue enter the blocked state and the amount of time they spend in that state.

jenkins.job.buildable.duration (timer)

The rate at which jobs in the build queue enter the buildable state and the amount of time they spend in that state.

jenkins.job.building.duration (timer)

The rate at which jobs are built and the time they spend building.

jenkins.job.queuing.duration (timer)

The rate at which jobs are queued and the total time they spend in the build queue.

jenkins.job.total.duration (timer)

The rate at which jobs are queued and the total time they spend from entering the build queue to completing building.

jenkins.job.waiting.duration (timer)

The rate at which jobs enter the quiet period and the total amount of time that jobs spend in their quiet period.

Jenkins allows configuring a quiet period for most job types. While in the quiet period multiple identical requests for building the job will be coalesced. Traditionally this was used with source control systems that do not provide an atomic commit facility - such as CVS - in order to ensure that all the files in a large commit were picked up as a single build.

With more modern source control systems the quiet period can still be useful, for example to ensure that push notifications of the same commit arriving via redundant parallel notification paths get coalesced.

jenkins.job.count.value (gauge)

The number of jobs in Jenkins

jenkins.job.count.history (histogram)

The historical statistics of jenkins.job.count.value.

jenkins.job.scheduled (meter)

The rate at which jobs are scheduled. If a job is already in the queue and an identical request for scheduling the job is received then Jenkins will coalesce the two requests.

This metric gives a reasonably pure measure of the load requirements of the Jenkins master as it is unaffected by the number of executors available to the system.

Multiplying this metric by jenkins.job.building.duration gives an approximate measure of the number of executors required in order to ensure that every build request results in a build.

A more accurate measure can be obtained from a job-by-job summation of the scheduling rate for that job and the average build duration of that job.

The most accurate measure would require maintaining separate sums partitioned by the labels that each job can run against in order to determine the number of each type of executor required.

Such calculations assume that: every build node is equivalent and/or the build times are comparable across all build nodes; and build times are unaffected by other jobs running in parallel on other executors on the same node.

However, in most cases even the basic result of multiplying jenkins.job.scheduled by jenkins.job.building.duration gives a reasonable estimate. Where this estimate exceeds jenkins.executor.count.value by more than 10-15%, the Jenkins build queue is typically observed to grow rapidly until most jobs have at least one build request sitting in the build queue. Where it is below jenkins.executor.count.value by at least 20-25%, the build queue will tend to remain small, except for those cases where there are a large number of build jobs fighting for a small number of executors on nodes with specific labels.
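
As an illustration of the basic estimate above, a sketch along the following lines could be run from the Jenkins script console. It assumes the Metrics Plugin exposes its registry via jenkins.metrics.api.Metrics.metricRegistry(), and that, per the stock Dropwizard conventions, meter rates are reported in events per second and timer snapshots in nanoseconds.

import jenkins.metrics.api.Metrics

def registry  = Metrics.metricRegistry()
def scheduled = registry.meters['jenkins.job.scheduled']           // rates in events/second
def building  = registry.timers['jenkins.job.building.duration']  // durations in nanoseconds

double ratePerSecond   = scheduled.fiveMinuteRate
double avgBuildSeconds = building.snapshot.mean / 1e9

// scheduling rate x average build duration ~= executors needed to keep up
println "Approximate executors required: ${ratePerSecond * avgBuildSeconds}"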

jenkins.node.count.value (gauge)

The number of build nodes available to Jenkins, both on-line and off-line.

jenkins.node.count.history (histogram)

The historical statistics of jenkins.node.count.value.

jenkins.node.XXX.builds (timer)

The rate of builds starting on the XXX node and the amount of time they spend building.

There will be one metric for each XXX named node. The metric is lazily created after the JVM starts up when the first build starts on that node.

jenkins.node.offline.value (gauge)

The number of build nodes available to Jenkins but currently off-line.

jenkins.node.offline.history (histogram)

The historical statistics of jenkins.node.offline.value.

jenkins.node.online.value (gauge)

The number of build nodes available to Jenkins and currently on-line.

jenkins.node.online.history (histogram)

The historical statistics of jenkins.node.online.value.

jenkins.plugins.active (gauge)

The number of plugins in the Jenkins instance that started successfully.

jenkins.plugins.failed (gauge)

The number of plugins in the Jenkins instance that failed to start. A value other than 0 is typically indicative of a potential issue within the Jenkins installation that will either be solved by explicitly disabling the plugin(s) or by resolving the plugin dependency issues.

jenkins.plugins.inactive (gauge)

The number of plugins in the Jenkins instance that are not currently enabled.

jenkins.plugins.withUpdate (gauge)

The number of plugins in the Jenkins instance that have a newer version reported as available in the current Jenkins update center metadata held by Jenkins. This value is not indicative of an issue with Jenkins, but a high value can be used as a trigger to review the plugins with updates, to see whether those updates contain fixes for issues that could be affecting your Jenkins instance.

jenkins.queue.blocked.value (gauge)

The number of jobs that are in the Jenkins build queue and currently in the blocked state.

jenkins.queue.blocked.history (histogram)

The historical statistics of jenkins.queue.blocked.value.

jenkins.queue.buildable.value (gauge)

The number of jobs that are in the Jenkins build queue and currently in the buildable state.

jenkins.queue.buildable.history (histogram)

The historical statistics of jenkins.queue.buildable.value.

jenkins.queue.pending.value (gauge)

The number of jobs that are in the Jenkins build queue and currently in the pending state.

jenkins.queue.pending.history (histogram)

The historical statistics of jenkins.queue.pending.value.

jenkins.queue.size.value (gauge)

The number of jobs that are in the Jenkins build queue.

jenkins.queue.size.history (histogram)

The historical statistics of jenkins.queue.size.value.

jenkins.queue.stuck.value (gauge)

The number of jobs that are in the Jenkins build queue and currently in the stuck state.

jenkins.queue.stuck.history (histogram)

The historical statistics of jenkins.queue.stuck.value.

Standard health checks

The Dropwizard Metrics API includes a contract for health checks. Health checks return a simple PASS/FAIL status and can include an optional message. The standard checks are listed below, followed by a sketch for running them on demand.

disk-space

Returns FAIL if any of the Jenkins disk space monitors are reporting the disk space as less than the configured threshold. The message will reference the first node which fails this check. There may be other nodes that fail the check, but this health check is designed to fail fast.

plugins

Returns FAIL if any of the Jenkins plugins failed to start. A failure is typically indicative of a potential issue within the Jenkins installation that will either be solved by explicitly disabling the failing plugin(s) or by resolving the corresponding plugin dependency issues.

temporary-space

Returns FAIL if any of the Jenkins temporary space monitors are reporting the temporary space as less than the configured threshold. The message will reference the first node which fails this check. There may be other nodes that fail the check, but this health check is designed to fail fast.

thread-deadlock

Returns FAIL if there are any deadlocked threads in the Jenkins master JVM.
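
If you want to run these health checks on demand, for example from the Jenkins script console, a minimal sketch along the following lines should work, assuming the Metrics Plugin exposes its health check registry via jenkins.metrics.api.Metrics.healthCheckRegistry():

import jenkins.metrics.api.Metrics

// Run every registered health check (the four standard checks above,
// plus any contributed by other plugins) and print each result.
def results = Metrics.healthCheckRegistry().runHealthChecks()
results.each { name, result ->
    def message = result.message ? " (${result.message})" : ''
    println "${name}: ${result.healthy ? 'PASS' : 'FAIL'}${message}"
}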

Pull-Request Builder for GitHub Plugin

Introduction

The Pull-Request Builder for GitHub plugin lets you configure Jenkins to verify pull requests on a GitHub project and check that the proposed changes satisfy the Continuous Integration criteria set on your Jenkins instance.

The feature was introduced in CloudBees Jenkins Enterprise 14.11.

Overview

Development teams using GitHub rely on the pull-request mechanism for code review before changes are adopted and merged into the mainstream development branch. This is a very convenient process, as GitHub provides a nice UI to discuss changes and annotate modified lines of code. Manual code review nevertheless cannot catch all possible mistakes, and may be a waste of time when the proposed changes have some negative impact on the project that a Jenkins continuous integration job can detect. The Pull-Request Builder for GitHub plugin takes advantage of the GitHub APIs to get notified when a pull request is created or updated, and can trigger a build to automatically check the proposed changes. The build status is then reported on the pull request, so a reviewer can simply ignore an invalid pull request while the requester checks the log and fixes the changes.

Setting up Pull-Request Builder for GitHub

The Pull-Request Builder for GitHub plugin is available within the CloudBees Jenkins Enterprise update center. The CloudBees Git Validated Merge plugin is required, as is the Jenkins GitHub plugin.

For full setup automation, credentials have to be configured in the Jenkins global settings. GitHub API authentication requires creating an access token. We recommend you create a dedicated pseudo-user on GitHub to handle API interaction from Jenkins, one that has no interaction with actual developer profiles.

  • Go to your GitHub settings page.

  • In the left sidebar, click Personal Access Token.

  • Click Generate new token.

  • Give your token a descriptive name

  • Select the scopes to grant to this token. The Pull-Request Builder plugin requires permission to administer repository hooks and access repositories: repo, public_repo, admin:repo_hook, repo:status.

  • Click Generate token.

  • Copy the token to your clipboard. For security reasons, after you navigate off this page, no one will be able to see the token again.

Configure your access token in Jenkins and validate GitHub API access.

global config 201607
Figure 100. Global Configuration

You can also configure Jenkins so that it doesn’t manage the web hook for you, but you will then have to ensure that web hooks are correctly registered for all your repositories.

Using Pull-Request Builder for GitHub

The plugin introduces a new trigger option for Jenkins jobs, "Build pull requests to the repository", which needs to be enabled. This option only makes sense if the job is building a project using a GitHub repository as SCM. Selecting this option will automatically register a web hook on GitHub so Jenkins gets notified when pull requests are created or updated.

"Enable Git validated merge support" in the job configuration has to be enabled as well, even if you don’t use this feature by yourself, because Pull Request tester plugin relies on Git Validated Merge plugin.

When a pull request is opened on the GitHub repository, Jenkins will trigger a build and report the result in the GitHub UI as a "commit status" (a note associated with the latest commit of the pull request). Any update to the pull request (a new commit pushed, or an amendment to existing commits) will trigger another build and update the commit status.

Note: the merge process needs to be done manually by a member with admin permissions in that repository.

commit status 201607
Figure 101. Github Commit Status (test failed in this case)

Visual Studio Team Services Plugin

Introduction

The Visual Studio Team Services plugin lets you configure Jenkins to integrate with your Visual Studio Team Services account and set up CI jobs for your VSTS Git repositories. The plugin supports branch and pull-request detection, as well as post-commit build triggers.

Overview

Development teams using Visual Studio Team Services can trigger Jenkins jobs from a source Git repository using a post-commit web hook. With a minimal setup on VSTS, they can set up a complete Continuous Delivery pipeline using such web hooks.

They can also rely on Jenkins to connect to their VSTS account and automatically detect branches and pull requests (using multi-branch job types). With such a setup, a team can adopt a branch or pull-request based development pipeline and have Jenkins automatically monitor the branches being used and publish build status.

Installing the Visual Studio Team Services plugin

  • Navigate to 'Manage Jenkins / Plugin Manager' and ensure that the Visual Studio Team Services plugin is installed.

plugin manager visual studio team services
Figure 102. Jenkins - Plugin Manager - Visual Studio Team Services plugin

Setting up Visual Studio Team Services Web Hook

The Visual Studio Team Services plugin exposes a generic web-hook receiver through which Visual Studio Team Services can notify Jenkins of repository events. On the VSTS repository administration view, the 'Service Hooks' tab lets you register hooks to integrate with external services. The Visual Studio Team Services plugin supports the generic "Web Hook" with the following events:

  • Code pushed

  • Pull request created

  • Pull request updated

A Service Hook can be created for all repositories in your account or for a specific repository and/or branch. Configuring a global hook for all repositories and branches makes it simpler to configure your Visual Studio Team Services / Jenkins integration, as you won’t need additional setup on Visual Studio Team Services when configuring new projects.

  • Go to your Visual Studio Team Services account dashboard.

  • Select 'Manage Account' (gear icon top right).

  • Select Default Collection to be configured.

  • Click 'View the project administration page'

  • Select the 'Service Hooks' tab. Note that Visual Studio Team Services also offers a "Jenkins" Service Hook, which can be used to trigger a specific Jenkins job with an adequate authorization token. That Service Hook is not related to the Visual Studio Team Services plugin.

service hook
Figure 103. Service Hook Setup
  • Click the Add (plus) button.

  • Select 'Web Hooks'

  • Select event trigger

  • Enter the CloudBees Jenkins Enterprise URL and append /vsts-webhook/. You don’t need authentication for this specific web-hook URL.

  • You can use the 'Test' button to check that everything is set up correctly, then save.

service hook2
Figure 104. Service Hook Setup

In your Jenkins job(s), you can now configure the build trigger "Build when a change is pushed to Visual Studio Team Services". With this option enabled, Jenkins will listen for Visual Studio Team Services events and compare the SCM configuration of enabled jobs to the source Git repository URL in the notification. If the current job matches a Git commit notification, a polling cycle will run to check the repository for changes and trigger a build if necessary.

build trigger
Figure 105. Build Trigger Setup

Creating a Multi Branch job

The Visual Studio Team Services Plugin is compatible with the Jenkins Branch API. Visual Studio Team Services git repositories appear as SCM sources in job types compatible with the Jenkins Branch API such as the Multibranch Pipeline Plugin.

Using "Jenkinsfile" to define the build configuration with the source code

Multiple branches of a source code repository may require different build configurations. To solve this problem, the Multibranch Pipeline plugin uses a job definition stored with the source code in a file named Jenkinsfile located at the root of the repository.

The Jenkinsfile uses the standard Jenkins Pipeline syntax. You just have to replace the source code checkout step (e.g. git 'https://github.com/my-org/my-repo.git') with a special step checkout scm.

Sample 'Jenkinsfile' for a Maven project
node {
  stage 'Build and Test'
  env.PATH = "${tool 'Maven 3'}/bin:${env.PATH}"
  checkout scm
  sh 'mvn clean package'
}

The Jenkins Multibranch Pipeline plugin will index the git repository and create one build job per branch and per pull request containing a file Jenkinsfile.

Jobs created for pull requests are automatically deleted when the pull requests are closed.

Installing the Multibranch Pipeline plugin

plugin manager workflow multi branch
Figure 106. Jenkins - Plugin Manager - Multibranch Pipeline plugin

Creating a Jenkins Multibranch Pipeline job to build all the branches and pull requests

  • Create a Jenkins Pipeline Multi Branch project

new multibranch workflow
Figure 107. Jenkins - New Pipeline Multi Branch Project
  • Navigate to the 'Branch Sources' section,

  • Select 'Visual Studio Team Services' in the 'Add Sources' dropdown list,

scm source
Figure 108. Jenkins - Multibranch Pipeline Project - Branch Sources
scm source 2
Figure 109. Jenkins - Multibranch Pipeline Project - Branch Sources - Visual Studio Team Services
  • Enter the name of the Visual Studio Team Services account (i.e. https://{account name}.visualstudio.com),

  • Enter the desired Visual Studio Team Services credentials.

    • Generate a Visual Studio Team Services Personal Access Token on the Visual Studio Team Services profile page ("Security" tab) granting at least the code read scope.

    • Create Jenkins credentials of type 'username/password credentials'; the username will not be used, so you can use your account name for triage

personal access token
Figure 110. Visual Studio Team Services - Profile Page - Personal Access Token
  • Once valid credentials are entered, the 'Repository' drop down list is populated with the list of available Git repositories

  • Select the desired repository

Once the project is saved, Jenkins indexes the Git repository and creates one sub-project per branch and per pull-request. Such a setup lets you get a dedicated build history per branch, or per pull-request, and adopt a branch based git development pipeline.

multibranch workflow job
Figure 111. Jenkins - Multibranch Pipeline Project - Sub Jobs

Docker Pipeline Plugin

Introduction

Many organizations are using Docker to unify their build and test environments across machines and provide an efficient way to deploy applications into production. This plugin offers a convenient domain-specific language (DSL) for performing some of the most commonly needed Docker operations in a continuous-deployment pipeline from a Pipeline script.

The entry point for all of this plugin’s functionality is a docker global variable, available without import to any flow script when the plugin is enabled. To get detailed information on available methods, open the Pipeline Syntax available on any Pipeline Job page:

dsl help

Running build steps inside containers

It is commonplace for Jenkins projects to require a specific toolset or libraries to be available during a build. If many projects in a Jenkins installation have the same requirements, and there are few agents, it is not hard to just preconfigure those agents accordingly. In other cases it is feasible to keep such files in project source control. Finally, for some tools—especially those with a self-contained, platform-independent download, like Maven—it is possible to use the Jenkins tool installer system with the Pipeline tool step to retrieve tools on demand. However many cases remain where these techniques are not practical.

For builds which can run on Linux, Docker provides an ideal solution to this problem. Each project need merely select an image containing all the tools and libraries it would need. (This might be a publicly available image like maven, or it might have been built by this or another Jenkins project.) Developers can also run build steps locally using an environment identical to that used by the Jenkins project.

There are two ways to run Jenkins build steps in such an image. One is to include a Java runtime and Jenkins slave agent (slave.jar) inside the image, and add a Docker cloud using the Docker plugin. Then the entire agent runs wholly inside the image, so any builds tied to that cloud label can assume a particular environment.

For cases where you do not want to “pollute” the image with Java and the Jenkins agent, or just want a simpler and more flexible setup, this plugin provides a way to run build steps inside an arbitrary image. In the simplest case, just select an image and run your whole build inside it:

docker.image('maven:3.3.3-jdk-8').inside {
  git '…your-sources…'
  sh 'mvn -B clean install'
}

The above is a complete Pipeline script. inside will:

  1. Automatically grab an agent and a workspace (no extra node block is required).

  2. Pull the requested image to the Docker server (if not already cached).

  3. Start a container running that image.

  4. Mount the Jenkins workspace as a “volume” inside the container, using the same file path.

  5. Run your build steps. External processes like sh will be wrapped in docker exec so they are run inside the container. Other steps (such as test reporting) run unmodified: they can still access workspace files created by build steps.

  6. At the end of the block, stop the container and discard any storage it consumed.

  7. Record the fact that the build used the specified image. This unlocks features in other Jenkins plugins: you can track all projects using an image, or configure this project to be triggered automatically when an updated image is pushed to the Docker registry. If you use CloudBees Docker Traceability, you will also be able to see a history of the image deployments on Docker servers.

Tip

If you are running a tool like Maven which has a large download cache, running each build inside its own image will mean that a large quantity of data is downloaded from the network on every build, which is usually undesirable. The easiest way to avoid this is to redirect the cache to the agent workspace, so that if you run another build on the same agent, it will run much more quickly. In the case of Maven:

docker.image('maven:3.3.3-jdk-8').inside {
  git '…your-sources…'
  writeFile file: 'settings.xml', text: "<settings><localRepository>${pwd()}/.m2repo</localRepository></settings>"
  sh 'mvn -B -s settings.xml clean install'
}

(If you wanted to use a cache location elsewhere on the agent, you would need to pass an extra --volume option to inside so that the container could see that path.)

Another solution is to pass an argument to inside to mount a sharable volume, such as -v m2repo:/m2repo, and use that path as the localRepository. Just beware that the default local repository management in Maven is not thread-safe for concurrent builds, and install:install could pollute the local repository across builds or even across jobs. The safest solution is to use a nearby repository mirror as a cache.
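
For illustration, the sharable-volume variant might look like the following sketch; the m2repo volume name and the use of Maven’s -Dmaven.repo.local property are example choices, not anything required by the plugin:

docker.image('maven:3.3.3-jdk-8').inside('-v m2repo:/m2repo') {
  git '…your-sources…'
  // point Maven’s local repository at the shared volume
  sh 'mvn -B -Dmaven.repo.local=/m2repo clean install'
}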

Note

For inside to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. The easiest way to ensure this is for the Docker server to be running on localhost (the same computer as the agent). Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands such as

cannot create /…@tmp/durable-…/pid: Directory nonexistent

or negative exit codes.

When Jenkins can detect that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.

Customizing agent allocation

All DSL functions which run some Docker command automatically acquire an agent (executor) and a workspace if necessary. For more complex scripts which perform several commands using the DSL, you will typically want to run a block, or the whole script, on the same agent and workspace. In that case just wrap the block in node, selecting a label if desired:

node('linux') {
  def maven = docker.image('maven:latest')
  maven.pull() // make sure we have the latest available from Docker Hub
  maven.inside {
    // …as above
  }
}

Here we ensure that the same agent runs both pull and inside, so the local image cache update performed by the first step is seen by the second.

Building and publishing images

If your build needs to create a Docker image, use the build method, which takes an image name with an optional tag and creates it from a Dockerfile. This also returns a handle to the result image, so you can work with it further:

node {
  git '…' // checks out Dockerfile & Makefile
  def myEnv = docker.build 'my-environment:snapshot'
  myEnv.inside {
    sh 'make test'
  }
}

Here the build method takes a Dockerfile in your source tree specifying a build environment (for example RUN apt-get install -y libapr1-dev). Then a Makefile in the same source tree describes how to build your actual project in that environment.

If you want to publish a newly created image to Docker Hub (or your own registry—discussed below), use push:

node {
  git '…' // checks out Dockerfile and some project sources
  def newApp = docker.build "mycorp/myapp:${env.BUILD_TAG}"
  newApp.push()
}

Here we are giving the image a tag which identifies the Jenkins project and build number that created it. (See the documentation for the env global variable.) The image is pushed under this tag name to the registry.

To push an image into a staging or production environment, a common style is to update a predefined tag such as latest in the registry. In this case just specify the tag name:

node {
  stage 'Building image'
  git '…'
  def newApp = docker.build "mycorp/myapp:${env.BUILD_TAG}"
  newApp.push() // record this snapshot (optional)
  stage 'Test image'
  // run some tests on it (see below), then if everything looks good:
  stage 'Approve image'
  newApp.push 'latest'
}

The build method records information in Jenkins tied to this project build: what image was built, and what image that was derived from (the FROM instruction at the top of your Dockerfile). Other plugins can then identify the build which created an image known to have been used by a downstream build, or deployed to a particular environment. You can also have this project be triggered when an update is pushed to the ancestor image (FROM) in a registry.

Running and testing containers

To run an image you built, or pulled from a registry, you can use the run method. This returns a handle to the running container. More safely, you can use the withRun method, which automatically stops the container at the end of a block:

node {
  git '…'
  docker.image('mysql').withRun {c ->
    sh './test-with-local-db'
  }
}

The above simply starts a container running a test MySQL database and runs a regular build while that container is running. Unlike inside, shell steps inside the block are not run inside the container, but they could connect to it using a local TCP port for example.

You can also access the id of the running container, which is passed as an argument to the block, in case you need to do anything further with it:

// …as above, but also dump logs before we finish:
sh "docker logs ${c.id}"

Like inside, run and withRun record the fact that the build used the specified image.

Specifying a custom registry and server

So far we have assumed that you are using the public Docker Hub as the image registry, and connecting to a Docker server in the default location (typically a daemon running locally on a Linux agent). Either or both of these settings can be easily customized.

To select a custom registry, wrap build steps which need it in the withRegistry method on docker (inside node if you want to specify an agent explicitly). You should pass in a registry URL. If the registry requires authentication, you can add the ID of username/password credentials. (Credentials are created via the Credentials link on the Jenkins index page or within a folder; when creating them, use the Advanced section to specify a memorable ID for use in your pipelines.)

docker.withRegistry('https://docker.mycorp.com/', 'docker-login') {
  git '…'
  docker.build('myapp').push('latest')
}

The above builds an image from a Dockerfile, and then publishes it (under the latest tag) to a password-protected registry. There is no need to preconfigure authentication on the agent.

To select a non-default Docker server, such as for Docker Swarm, use the withServer method. You pass in a URI, and optionally the ID of Docker Server Certificate Authentication credentials (which encode a client key and client/server certificates to support TLS).

docker.withServer('tcp://swarm.mycorp.com:2376', 'swarm-certs') {
  docker.image('httpd').withRun('-p 8080:80') {c ->
    sh "curl -i http://${hostIp(c)}:8080/"
  }
}
def hostIp(container) {
  sh "docker inspect -f {{.Node.Ip}} ${container.id} > hostIp"
  readFile('hostIp').trim()
}

Note that you cannot use inside or build with a Swarm server, and some versions of Swarm do not support interacting with a custom registry either.

Advanced usage

If your script needs to run other Docker client commands or options not covered by the DSL, just use a sh step. You can still take advantage of some DSL methods, like imageName, to prepend a registry prefix:

docker.withRegistry('https://docker.mycorp.com/') {
  def myImg = docker.image('myImg')
  // or docker.build, etc.
  sh "docker pull --all-tags ${myImg.imageName()}"
  // runs: docker pull --all-tags docker.mycorp.com/myImg
}

and you can be assured that the environment variables and files needed to connect to any custom registry and/or server will be prepared.

Demonstrations

A Docker image is available for you to run which demonstrates a complete flow script which builds an application as an image, tests it from another container, and publishes the result to a private registry. See the plugin documentation for instructions on running this demonstration.

CloudBees Docker Build and Publish plugin

This Jenkins plugin lets you build Docker images on a Docker server and then publish them to Docker Hub or other Docker registries.

Features summary:

  • The entire functionality is implemented as a single build step

  • Support of Dockerfile specifications

  • Publishing to docker index/registry including Docker Hub

  • Credentials support for Docker servers and registries (provided by Docker Commons Plugin)

  • On-demand tagging of built images

  • On-demand fingerprinting of built images

  • Clean image builds with the --no-cache option (rebuilds all steps in the Dockerfile)

Usage guidelines

Note
Detailed usage guidelines are available on the plugin’s README on GitHub. This article addresses cases related to CloudBees Jenkins Enterprise and other plugins documented in this manual.

The entire functionality is implemented as a single build step.

  • In Advanced options you can disable building and publishing sub-steps using Skip Push and Skip Pull controls.

  • Force pull is enabled by default in order to build images with the latest version of the source image specified in the Dockerfile

  • Image fingerprint creation is enabled by default in order to support value-added features like Docker Traceability

Below you can find a sample configuration of the build step:

build step config
Figure 112. Docker Build and Publish build step configuration (with Advanced options)

Limitations

Docker CLI tool location should be set in the environment

The CloudBees Docker Build and Publish plugin uses the Docker command line tool to interact with Docker servers and registries. Currently there is no integration with the Docker ToolInstallation provided by the Docker Commons Plugin. This plugin expects the docker binary to be available via the PATH environment variable on the node where the build is performed.

Workaround: add the Docker CLI binary to the PATH environment variable on each node that may be used to run builds with Docker Build and Publish steps.
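In a Pipeline job you can also extend PATH for the duration of a block with withEnv. The following is a minimal sketch, assuming (purely for illustration) that the Docker CLI is installed under /usr/local/bin and that Docker-capable nodes carry a docker label:

node('docker') {
  // 'PATH+DOCKER=/usr/local/bin' prepends /usr/local/bin to PATH inside this block
  withEnv(['PATH+DOCKER=/usr/local/bin']) {
    sh 'docker version' // confirm the docker binary is now resolvable
  }
}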

Docker Hub Notification Trigger Plugin

Introduction

The Docker Hub Notification Trigger plugin lets you configure Jenkins to trigger builds when an image is pushed to Docker Hub, for example to run verification builds for the container.

Using Docker Hub Notification Trigger

This plugin introduces a new trigger option for Jenkins jobs: Monitor Docker Hub for image changes.

With this trigger type, you can specify a list of one or more Docker Hub repositories from which to receive build trigger notifications following a push to any of the listed repositories. You can also choose to receive notifications for any Docker Hub repository consumed by other parts of the job (the default), e.g. the image referenced by a "Pull Docker image from Docker Hub" build step. Click the help link for the Any referenced Docker image can trigger this job option to see a list of all currently installed plugins that support this.

configure trigger
Figure 113. Trigger section of a freestyle job configuration

Once a Jenkins job is configured to use a Docker Hub Notification Trigger, you then need to go to Docker Hub and configure a Webhook on the repository from which you would like your Jenkins build trigger to receive notification. After selecting the repository, select Webhooks in the Settings.

dockerhub repo
Figure 114. Adding a Webhook

Then add the Webhook, using http://your-jenkins/dockerhub-webhook/notify as the webhook URL.

configure webhook
Figure 115. Docker Hub Webhook Configuration

Note that this plugin will support automatic Webhook configuration once the Docker Hub service supports Webhook configuration via its public API.

On receipt of a Webhook notification from Docker Hub, a build of the job is triggered. The build will receive two additional build parameters specific to the Docker Hub Trigger:

  • DOCKER_TRIGGER_REPO_NAME containing the name of the repo e.g. "acme/prod", and

  • DOCKER_TRIGGER_DOCKER_HUB_HOST containing the name of the Docker host that is to receive the callback, e.g. "registry.hub.docker.com" (i.e. the same as is shown for the build cause).
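These parameters can be read like any other environment variables. The following minimal sketch shows this as an sh step, assuming the trigger is configured on a Pipeline job:

node {
  // both variables are contributed by the Docker Hub trigger
  sh 'echo "Triggered by a push to $DOCKER_TRIGGER_REPO_NAME on $DOCKER_TRIGGER_DOCKER_HUB_HOST"'
}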

Multiple jobs can be configured to receive a trigger from a single repository. In that case, the callback to Docker Hub will contain a link to a page in Jenkins listing the builds that were triggered and the callback state will be based on the worst build result.

result landingpage
Figure 116. Callback landing page

Pull image build step

The plugin provides a simple freestyle build step called Pull Docker image from Docker Hub that simply performs a docker pull of an image from the registry with specified credentials. The build step is compatible with the trigger’s Any referenced Docker image can trigger this job option described above.

List Views

You can create a list view that lists all jobs that trigger on specified image names and displays which images they are triggering on. On the Jenkins top page, click New View (the plus sign at the right of the view tabs). Specify a name for the new view, choose List View from the available view types and then press OK. Select Triggered by push to Docker Hub from the Add Job Filter selection dropdown.

edit view
Figure 117. Job filter in the list view configuration

It’s possible to narrow the list of images shown in the View by entering a Regular Expression in the Pattern text box (one pattern per line to match against a list of images). Leaving this field empty results in the listing of all jobs that have a Docker Hub Trigger configured.

You will have noticed that the top level Jenkins View contains a column named "DHT" (Docker Hub Trigger). This View column shows the name(s) of the Docker image(s) responsible for triggering the Jenkins job listed in a given row of the View table. By default, this column is not shown in the List View. To make it visible, simply select Docker Hub image names being triggered on from the Add column selection dropdown, located in the "Columns" section below the "Job Filters" section. You can manage the number of image names listed in this column (per row) by configuring the "Max" option for the column. Leave "Max" at 0 to list all image names; this is useful when a job triggers on many image names.

select column
Figure 118. Column selection of the list view configuration

CloudBees Docker Traceability

This Jenkins plugin tracks the creation of Docker containers in Jenkins and their subsequent use.

Plugin Summary

  • Container deployments summary page

  • Tracking of Docker image and container deployments produced in Jenkins

  • Tracking of Docker container events

  • Tracking of Docker container states retrieved from docker inspect calls

  • Advanced API for Docker reports analysis

  • Submission of events from Docker and Docker Swarm installations

  • Data polling from Jenkins (including Docker API-compatible JSONs)

  • Support of search queries

root action
Figure 119. Docker Traceability root page
Warning
The current version of the plugin has several limitations, which may affect common use-cases. See Known Issues for more info.

Installation Guidelines

Plugin setup

  1. Install CloudBees Docker Traceability plugin from Jenkins Update Center

  2. Install other Jenkins plugins, which produce image fingerprints to be traced by the plugin (see Integrations)

  3. Configure security. This step is highly recommended, because the plugin can store raw JSON, which may contain sensitive info like passwords

    • The plugin introduces new permissions, which allow you to restrict access

      • Read - Allows retrieval of details such as full container info dumps. Web interfaces are managed by the global Jenkins.Read permission, so they won’t be affected

      • Submit - Allows the submission of deployment records via the remote API

      • Delete - Allows the deletion of deployment records or entire fingerprints

    • It is recommended to restrict deployment record submission to a limited group of users

  4. Optional: Edit plugin settings on the Jenkins configuration page to adjust the behavior. Main options are listed below:

    1. Docker Traceability link on the main panel

      1. By default, Jenkins does not display the Docker Traceability link on the main left panel. This link can be enabled on the configuration page

      2. Even if the root action is disabled, the plugin remains fully operational. The main action can be accessed using the direct $JENKINS_URL/docker-traceability link.

    2. Image fingerprints creation mode

      1. By default, the plugin expects image fingerprints to be created by other Docker plugins based on Docker Commons Plugin.

      2. The behavior can be adjusted on the plugin’s global configuration page

Client-side configuration

Note
The CloudBees team is working on a specialized fault-tolerant client, which will monitor events on Docker and Docker Swarm servers and then submit reports to the CloudBees Docker Traceability plugin. This client will become available soon. Currently, reports can be submitted using the plugin’s remote API commands.

Use-cases

Submitting deployment records

The plugin does not support automatic polling of events from external Docker servers. Events should be submitted by external clients or other Jenkins plugins.

Warning! Currently the plugin accepts info for previously registered fingerprints only. Other submissions will be ignored. Initial image records should be created by other plugins (see Integrations).

From external items using REST API

The plugin’s API provides several commands, which allow you to submit new events from remote client applications.

If you use a secured instance, then in addition to supplying credentials, clients must be aware of Cross-Site Request Forgery protection in Jenkins; otherwise requests may be rejected. See the configuration guidelines on the Remote access API Wiki page.

submitContainerStatus is a simple call, which may be used from user scripts without generating additional JSON files.

Examples:

curl http://localhost:8080/jenkins/docker-traceability/submitContainerStatus \
    --data-urlencode inspectData="$(docker inspect CONTAINER_ID)"
curl http://localhost:8080/jenkins/docker-traceability/submitContainerStatus \
    --data-urlencode status=create \
    --data-urlencode imageName=jenkinsci/workflow-demo \
    --data-urlencode hostName=dev-server-1 \
    --data-urlencode environment=development \
    --data-urlencode inspectData="$(docker inspect CONTAINER_ID)"

submitEvent is a more complex call, which allows all available data about a Docker container to be submitted via a single REST API call. This call can be used from external plugins.

From other plugins

The plugin provides the DockerEventListener extension point, which is used to notify listeners about new records.

The Docker Traceability functionality also listens to this extension point, so it is possible to notify the plugin about new records using the DockerEventListener#fire() method.

Getting info from the plugin

For each container record, the plugin publishes the info on the container summary page. A summary of deployment status is also added to the parent image page.

container page
Figure 120. Container info and deployment events on the container page
image page
Figure 121. Container deployments summary on the image page

If an external client submits information about the image (which can be retrieved using the docker inspect imageId command), the plugin captures this info and adds a new facet to the image fingerprint page.

docker image facet
Figure 122. Docker image info facet

Raw data is accessible via the plugin’s API or via hyperlinks on info pages.

You can search deployments by container IDs using the "Search" control on the Docker Traceability page. You can also query containers using the plugin’s API.

Integrations

The CloudBees Docker Traceability plugin is based on fingerprints provided by the Docker Commons Plugin. It simply adds additional facets to the main fingerprint pages, so any other plugin can contribute to the UI by adding its own facets to the fingerprint.

The CloudBees Docker Pipeline and Docker Build and Publish plugins can create such fingerprints. See the Docker Commons Plugin Wiki page for information about other existing fingerprint contributors.

API

A detailed description of the API endpoints is available on the "api" page of the Docker Traceability page (see $JENKINS_URL/docker-traceability/api).

Known issues

Below is a list of issues which may affect the plugin’s usage. The CloudBees team is working to resolve these issues.

Fingerprints automatic cleanup by Jenkins (JENKINS-28655)

  • If the CloudBees Docker Traceability plugin creates a new container or image fingerprint from a client’s request, it cannot reference the original build, because that info is missing

  • Jenkins has a garbage collector, which removes fingerprints without build references every 24 hours

Impact:

  • If the created fingerprint is not referenced by a build, it may be deleted within 24 hours

  • If any other plugin references a fingerprint during the build, it will be retained until the build is deleted

Workaround:

  • You can disable the fingerprint cleanup thread from a $JENKINS_HOME/init.groovy script, which is executed every time Jenkins starts up

  • In that case Jenkins will retain all fingerprints, so be aware of possible disk space exhaustion

  • Sample script:

import hudson.*;
import hudson.model.*;

// Cancel the periodic fingerprint cleanup task so that fingerprints are never garbage-collected
ExtensionList.lookup(AsyncPeriodicWork.class).get(FingerprintCleanupThread.class).cancel()

Microsoft Azure CLI Plugin

Introduction

The Microsoft Azure CLI plugin provisions the Azure CLI in your Jenkins jobs so that you can deploy applications or interact with a Microsoft Azure environment.

Configuring a Job

To enable the Microsoft Azure CLI in a job, do the following:

  1. Navigate to the Configuration page of the job and, in the Build Environment section, check Configure Azure CLI.

    You can then select:

    • Installation: Azure CLI executable to use for this job.

    • Credentials: Microsoft Azure Publisher Settings to use when invoking the Microsoft Azure CLI.

      azure job config
      Figure 123. Microsoft Azure CLI job configuration
  2. If you have the Azure CLI installed on the build executor, you can select System PATH. Alternatively, in the Jenkins global configuration, you can configure another Azure CLI installation and select automatic installation as a Node.js package; this requires you to define and select a NodeJS installation as well.

azure installation config
Figure 124. Microsoft Azure Installation configuration

Using the Microsoft Azure CLI in a Job

Once you set up Microsoft Azure CLI in a job, you can use it in any Execute Shell or Execute Windows Batch Command step.

AWS CLI Plugin

Introduction

The AWS CLI plugin provisions the AWS CLI in your Jenkins jobs so that you can deploy applications or interact with an Amazon Web Services environment.

Note
The AWS CLI only supports Linux distributions.

Configuring a Job

To enable the Amazon Web Services CLI in a job:

  1. Navigate to the Configuration page of the job and, in the Build Environment section, check Setup Amazon Web Services CLI.

You can then select:

  • API Credentials: Amazon Web Services Access Key to use when invoking the Amazon Web Services CLI (cf aws configure).

  • Default Region: The Amazon Web Services Default Region (cf aws configure).

Using the AWS CLI in a Job

Once you set up Amazon Web Services CLI in a job, you can use it in any Execute Shell or Execute Windows Batch Command step.

aws cli sample shell step
Figure 125. AWS CLI sample shell step

Using the AWS CLI in a Pipeline Job

You can use it in any Pipeline job as a build wrapper.

AWS CLI Pipeline syntax

The Groovy syntax looks like:

node {
   // ...
   wrap([$class: 'AmazonAwsCliBuildWrapper', (1)
         credentialsId: 'aws-cleclerc-beanstalk', (2)
         defaultRegion: 'us-east-1']) { (3)

        sh ''' (4)
           # COPY CREATED WAR FILE TO AWS S3
           aws s3 cp target/petclinic.war s3://cloudbees-aws-demo/petclinic.war
           # CREATE A NEW BEANSTALK APPLICATION VERSION BASED ON THE WAR FILE LOCATED ON S3
           aws elasticbeanstalk create-application-version \
              --application-name petclinic \
              --version-label "jenkins$BUILD_DISPLAY_NAME" \
              --description "Created by $BUILD_TAG" \
              --source-bundle=S3Bucket=cloudbees-aws-demo,S3Key=petclinic.war
           # UPDATE THE BEANSTALK ENVIRONMENT TO CREATE THE NEW APPLICATION VERSION
           aws elasticbeanstalk update-environment \
              --environment-name=petclinic-qa-env \
              --version-label "jenkins$BUILD_DISPLAY_NAME"
        '''
   }
}
  1. "General Build Wrapper" step for a AmazonAwsCliBuildWrapper

  2. ID of the IAM credentials to configure the AWS CLI

  3. Default AWS region

  4. aws commands used in sh steps; the AWS CLI is configured with the desired credentials and AWS region.

Evaluating aws commands result

Evaluating the result of the aws command is key to the logic of your Pipeline.
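For example, you can capture the exit code of an aws command and react to it. The following is a minimal sketch, reusing the names from the example above and assuming a Pipeline version whose sh step supports the returnStatus option:

node {
   wrap([$class: 'AmazonAwsCliBuildWrapper',
         credentialsId: 'aws-cleclerc-beanstalk',
         defaultRegion: 'us-east-1']) {
      // returnStatus: true returns the exit code instead of failing the step
      def status = sh(script: 'aws s3 cp target/petclinic.war s3://cloudbees-aws-demo/petclinic.war',
                      returnStatus: true)
      if (status != 0) {
         error "aws s3 cp failed with exit code ${status}" // fail the build with a clear message
      }
   }
}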

Pipeline snippet generator

You can generate the Pipeline statement to setup the AWS CLI using the Jenkins Pipeline Snippet Generator.

aws cli workflow snippet
Figure 126. AWS CLI Pipeline snippet

Watching and Learning

Learn more about the integration of the AWS CLI with the CloudBees Jenkins Platform in our screencasts.

Amazon Web Services Elastic Beanstalk

You can deploy your Java applications to Amazon Web Services Elastic Beanstalk using Jenkins. Although Amazon Web Services Elastic Beanstalk supports multiple languages, the CloudBees deployer only supports Java at this time.

Getting Started with Amazon Web Services Elastic Beanstalk

To get started, do the following:

  1. In the Update Center, navigate to Manage Jenkins » Manage Plugins to enable the two new plugins to support Elastic Beanstalk deployment, which you will find under Available Plugins:

    EBPlugins

  2. Then, use the Manage Jenkins » Manage Credentials options and add your AWS credentials. Start by identifying the type of credentials you’re adding:

    AWSCredentials

  3. Provide the necessary keys to Jenkins so you will be able to access AWS Elastic Beanstalk:

    ConfigureCredentials

  4. Finally, for any job that produces a standard .war file deployable to Tomcat, open the job configuration and, as a post-build action, choose Elastic Beanstalk as the Host Service for the CloudBees Deployer:

    AWSHostService

  5. Select the AWS credentials that you provided earlier, and configure any parameters associated with the deployment:

ConfigureAWSElasticBeanstalk

Deploying the App to Elastic Beanstalk

Once you configure the job, the app deploys to AWS Elastic Beanstalk as a post-build step. Below is an example of a sample job deploying:

RunningEBDeployment

Tip
The following is the deployed application running on Amazon Web Services Elastic Beanstalk. This example is one provided by Amazon as a simple web app that displays your AWS resources, which is available as a public build job at: https://partnerdemo.ci.cloudbees.com/job/aws-eb-deployer-test/

ExampleEBApp

Cloud Foundry CLI Plugin

Introduction

The Cloud Foundry CLI plugin provisions the Cloud Foundry CLI in your Jenkins jobs so that you can deploy applications or interact with a Cloud Foundry platform.

You can install the version of the Cloud Foundry CLI of your choice.

This plugin is tested with the following Cloud Foundry platforms:

  • Pivotal Web Services (aka PWS)

  • Pivotal CF (aka PCF)

Plugin Installation

To install the Cloud Foundry CLI plugin, from the main page click Manage Jenkins, then Manage Plugins; on the Available tab, check Cloud Foundry CLI and then click Download Now and Install After Restart.

cloudfoundry plugin install
Figure 127. Cloud Foundry CLI plugin installation

Jenkins System Configuration

Before jobs can use the Cloud Foundry CLI, you must define the CLI version(s) to make available in Jenkins.

From the main page, click on Manage Jenkins, then click on Configure System to go to the Jenkins configuration page.

In the Cloud Foundry CLI section of the screen, click on the Cloud Foundry Installations… button and then on the Add Cloud Foundry CLI button.

Choose a name for the CLI installation (e.g. Cloud Foundry 6.9.0) and select the installation method. By default, the Installer from cloudfoundry.org is selected and a drop-down list lets you select the version.

cloudfoundry global config
Figure 128. Cloud Foundry CLI installer configuration

Note: if the drop-down list of available versions is not visible and is replaced by an input box, you have to trigger a reload of the update center: click Manage Jenkins, then Manage Plugins, go to the Advanced tab and click Check Now.

cloudfoundry reload update center
Figure 129. Reloading Jenkins Update Center

Job Configuration

To enable the Cloud Foundry CLI in a job, go to the Configuration page of the job and, in the Build Environment section, check Set up Cloud Foundry CLI.

You can then select:

  • Cloud Foundry CLI version: The version of the CLI to use in this job. This CLI installation is configured in the Jenkins System Configuration.

  • API EndPoint: URL of the Cloud Foundry API (e.g. https://api.run.pivotal.io).

  • Skip ssl validation: Disable SSL certificate validation when invoking the Cloud Foundry API (please don’t disable SSL validation).

  • API Credentials: Login/password to use when invoking the Cloud Foundry API (see cf auth).

  • Organization: The Cloud Foundry Organization (see cf target).

  • Space: The Cloud Foundry Space (see cf target).

cloudfoundry job config
Figure 130. Cloud Foundry CLI job configuration

Using the Cloud Foundry CLI in a Job

Once the Cloud Foundry CLI is set up in a job, you can use it in any Execute Shell or Execute Windows Batch Command step.

cloudfoundry sample shell step
Figure 131. Cloud Foundry CLI sample shell step

Using the Cloud Foundry CLI in a Pipeline Job

Once the Cloud Foundry CLI is set up in the System Configuration, you can use it in any Pipeline job as a build wrapper.

Cloud Foundry CLI Pipeline syntax

The Groovy syntax looks like:

node {
   // ...

   wrap([$class: 'CloudFoundryCliBuildWrapper',  (1)
      apiEndpoint: 'https://api.cf-domain.example.com', (2)
      skipSslValidation: false, (3)
      cloudFoundryCliVersion: 'Cloud Foundry CLI (built-in)', (4)
      credentialsId: 'cf-user-cleclerc-credentials',  (5)
      organization: 'my-org', (6)
      space: 'development']) { (7)

       sh 'cf push my-app-name -p target/my-app*.war' (8)

   }
}
  1. "General Build Wrapper" step for a CloudFoundryCliBuildWrapper

  2. URL of the Cloud Foundry API endpoint (e.g. "http://api.cf-domain.example.com")

  3. Skip SSL validation when using a self signed certificate for the API endpoint (please don’t)

  4. Name of the Cloud Foundry CLI version defined in the Jenkins System Configuration screen

  5. ID of the credentials to configure the Cloud Foundry CLI

  6. Default Cloud Foundry Elastic Runtime organization

  7. Default Cloud Foundry Elastic Runtime space

  8. cf commands used in sh steps; the Cloud Foundry CLI is configured with the desired API endpoint, credentials and default organization and space.

Evaluating cf commands result

Evaluating the result of the cf command is key to the logic of your Pipeline.
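For example, you can capture the exit code of cf push and handle a failed deployment explicitly. The following is a minimal sketch, reusing the parameters from the example above and assuming the sh step supports the returnStatus option:

node {
   wrap([$class: 'CloudFoundryCliBuildWrapper',
         apiEndpoint: 'https://api.cf-domain.example.com',
         skipSslValidation: false,
         cloudFoundryCliVersion: 'Cloud Foundry CLI (built-in)',
         credentialsId: 'cf-user-cleclerc-credentials',
         organization: 'my-org',
         space: 'development']) {
      // returnStatus: true returns the exit code instead of failing the step
      def status = sh(script: 'cf push my-app-name -p target/my-app*.war', returnStatus: true)
      if (status != 0) {
         error "cf push failed with exit code ${status}" // surface the failure explicitly
      }
   }
}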

Pipeline snippet generator

You can generate the Pipeline statement to setup the Cloud Foundry CLI using the Jenkins Pipeline Snippet Generator.

cloudfoundry cli workflow snippet
Figure 132. Cloud Foundry CLI Pipeline snippet

Security Notes

The Cloud Foundry CLI is configured per job so that two jobs running on the same agent will not see each other’s settings (a different CF_HOME for each job). Moreover, the Cloud Foundry CLI settings are deleted from the agent after the job execution (CF_HOME is deleted).

Known issue

This plugin is not compatible with CloudBees Docker Custom Build Environment. If the latter is used to run a Docker container to host the build, the Cloud Foundry CLI plugin won’t have the expected effect of setting up credentials so that the CLI can be used to interact with the Cloud Foundry platform (JENKINS-33000).

OpenShift CLI Plugin

Introduction

The OpenShift CLI plugin provisions the oc CLI in your Jenkins jobs so that you can deploy applications or interact with an OpenShift environment.

Global Configuration

OpenShift CLI installations can be configured in the Jenkins global configuration (Manage Jenkins » Configure System). The default installer downloads the OpenShift CLI from the OpenShift Origin distribution site and installs it on the executor the first time it is used by a job. Alternatively, you can configure a custom installer to download a tar.gz archive from a custom location.

Job Configuration

To enable the OpenShift CLI in a job, go to the Configuration page of the job and, in the Build Environment section, check Setup OpenShift CLI.

You can then select:

  • CLI installation: The OpenShift CLI installation to use. The CLI can be configured for automatic installation on the executor (see Global Configuration above).

  • OpenShift master URL: URL of the OpenShift master endpoint.

  • Insecure: Checking this option disables SSL certificate validation, so it can be used to connect to an OpenShift master which uses a self-signed HTTPS certificate. Please note this option introduces security risks and should not be used in production environments.

  • Credentials: OpenShift credentials to set up so that build steps within this job run in an authenticated context.

Once the OpenShift CLI is set up in a job, you can use it in any Execute Shell or Execute Windows Batch Command step.

job config
Figure 133. OpenShift CLI job configuration

Using the OpenShift CLI in a Pipeline Job

Once the OpenShift CLI is set up in the System Configuration, you can use it in any Pipeline job as a build wrapper.

OpenShift CLI Pipeline syntax

The Groovy syntax looks like:

node {
   // ...

   wrap([$class: 'OpenShiftBuildWrapper',  (1)
      installation: 'oc (latest)', (2)
      url: 'https://openshift.example.com:8443', (3)
      insecure: true, (4)
      credentialsId: 'openshift-credentials']) { (5)

       sh 'oc scale --replicas=3 replicationcontrollers webapp' (6)

   }
}
  1. "General Build Wrapper" step for a OpenShiftBuildWrapper

  2. Name of the OpenShift CLI version defined in the Jenkins System Configuration screen

  3. URL of the OpenShift Master endpoint (e.g. "https://openshift.example.com:8443")

  4. Skip SSL validation when using a self signed certificate for the API endpoint (please don’t)

  5. ID of the credentials to configure the OpenShift CLI

  6. oc commands used in sh steps; the OpenShift CLI is configured with the desired API endpoint and credentials.

Evaluating oc commands result

Evaluating the result of the oc command is key to the logic of your Pipeline.
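For example, you can capture the exit code of an oc command and fail the build with a clear message. The following is a minimal sketch, reusing the parameters from the example above and assuming the sh step supports the returnStatus option:

node {
   wrap([$class: 'OpenShiftBuildWrapper',
         installation: 'oc (latest)',
         url: 'https://openshift.example.com:8443',
         insecure: true,
         credentialsId: 'openshift-credentials']) {
      // returnStatus: true returns the exit code instead of failing the step
      def status = sh(script: 'oc scale --replicas=3 replicationcontrollers webapp', returnStatus: true)
      if (status != 0) {
         error "oc scale failed with exit code ${status}" // surface the failure explicitly
      }
   }
}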

Pipeline snippet generator

You can generate the Pipeline statement to setup the OpenShift CLI using the Jenkins Pipeline Snippet Generator.

workflow snippet
Figure 134. OpenShift CLI Pipeline snippet

CloudBees Bitbucket Branch Source Plugin

Introduction

The CloudBees Bitbucket Branch Source Plugin (the Bitbucket Plugin from now on) allows use of Bitbucket Cloud and Server as a multi-branch project source in two different ways:

  • Single repository source: automatic creation of jobs for branches and pull requests in a specific repository.

  • Team/Project folders: automatic creation of multi-branch projects for each visible repository in a specific Bitbucket Team or Project.

    IMPORTANT: This plugin is not compatible with versions of Bitbucket Server prior to 4.0.

Auto-discovery of branches and pull requests

This plugin adds an additional item to the "Branch Sources" list of multi-branch projects. Once configured, branches and pull requests are automatically created and built as branches in the multi-branch project.

Follow these steps to create a multi-branch project with Bitbucket as a source:

  1. Create the multi-branch project. This step depends on which multi-branch plugin is installed. For example, "Multibranch Pipeline" should be available as project type if Pipeline Multibranch plugin is installed.

    screenshot 1
  2. Select "Bitbucket" as Branch Source

    screenshot 2
  3. Set credentials to access Bitbucket API and checkout sources (see "Credentials configuration" section below).

  4. Set the repository owner and name that will be monitored for branches and pull requests.

  5. If using Bitbucket Server, the server base URL needs to be configured (expand the Advanced section to do so).

    screenshot 4
  6. Finally, save the project. The initial indexing process will run and create projects for branches and pull requests.

    screenshot 5

Team Folders

Bitbucket Team/Project Folders project type can be used to automatically track branches and pull requests in all repositories in a Bitbucket Team or Project.

  1. Create a project of type Bitbucket Team/Project. The project name will be proposed automatically as default Owner (or Team) name.

    screenshot 6
  2. Configure the repository owner (if the proposed value does not match the actual team or username). It could be:

    1. A Bitbucket Cloud Team name: all repositories in the team are imported as Multibranch projects.

    2. A Bitbucket Server Project ID: all repositories in the project are imported as Multibranch projects. Note that the project ID needs to be used instead of the project name.

    3. A regular username: all repositories owned by that user are imported.

      screenshot 8
  3. Save the configuration; an initial indexing process starts, and once it finishes a Multibranch project is created for each repository.

    screenshot 9

Webhook registration

The use of Bitbucket webhooks allows builds of branches and pull requests to be triggered as soon as a new commit is pushed. The Bitbucket plugin exposes a special service that listens for these webhook requests and acts accordingly, triggering a new reindex and finally triggering builds on matching branches or pull requests.

For both Bitbucket Multibranch projects and Bitbucket Team projects, there is an option on the configuration page to let Jenkins automatically register the webhooks in all involved repositories.

Important
This feature is only available for Bitbucket Cloud at the moment.
screenshot 4
Important
For the webhook auto-registration process to work correctly, the Jenkins base URL must be properly configured in Manage Jenkins » Configure System

Credentials configuration

The configuration of Bitbucket Plugin (for both Bitbucket Multibranch projects and Bitbucket Team/Project) has two credentials to configure:

  1. Scan Credentials: credentials used to access Bitbucket API in order to discover repositories, branches and pull requests. If not set then anonymous access is used, so only public repositories, branches and pull requests are discovered and managed. Note that the Webhooks auto-register feature requires scan credentials to be set. Only HTTP credentials are accepted in this field.

  2. Checkout Credentials: credentials used to checkout sources once the repository, branch or pull request is discovered. HTTP and SSH credentials are allowed. If not set then Scan Credentials are used.

screenshot 3

GitHub Branch Source Plugin

Introduction

The GitHub Branch Source Plugin allows you to create a new project based on the repository structure from one or more GitHub users or organizations. You can either:

  • Import all or a subset of repositories as jobs into the workspace from a GitHub user or organization

  • Import a single repository’s branches as jobs from a GitHub user or organization

The GitHub Organization project scans the repository, importing the Pipeline jobs it identifies according to your criteria. To define a job, you create a Pipeline script in a Jenkinsfile in the root directory of the project or branch. You can have different Pipeline scripts defined in project branches, or add and remove `Jenkinsfile`s depending on your use cases. After a project is imported, Jenkins immediately runs a job. Removing a Pipeline script from a branch or from the entire project removes the item from the view. You can restrict imports using a regular expression on the GitHub Organization configuration page. This plugin fits into the larger "Pipeline as Code" story; for more information, see the CloudBees Cookbook article.

The GitHub Branch Source Plugin creates a folder for each repository; each branch that has a Jenkinsfile in its top-level directory becomes a separate job in that folder. The Jenkinsfile contains a Pipeline script outlining the steps of the job. Scripts can be different for each branch.
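For illustration, a minimal Jenkinsfile might look like the following sketch; the checkout scm step checks out the branch that owns the Jenkinsfile, while the Maven command is an assumption about your project:

node {
  checkout scm             // check out the branch this Jenkinsfile belongs to
  sh 'mvn -B clean verify' // hypothetical build command for a Maven project
}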

GitHub pull requests

The GitHub Branch Source plugin allows the project to import externally originated pull requests containing a Pipeline script. The following features are included:

  • GitHub user or organization repository’s pull requests are treated as jobs

  • Pull request build result status reporting

Pull requests will be added to Jenkins as long as the pull request originates from a remote repository, contains a Pipeline script in a Jenkinsfile, and is mergeable. Even when you make changes to your Jenkinsfile, the checked-out code is at the same revision as the script.

When the proper webhooks are configured, Jenkins will report the status of the pull request’s job to GitHub. The status is reported as follows:

  • Success - the job was successful

  • Failure - the job failed and the pull request is not ready to be merged

  • Error - something unexpected happened; example: the job was aborted in Jenkins

  • Pending - the job is waiting in the build queue

Creating a project

This plugin creates a new project type. To use this plugin, you must have GitHub credentials with the appropriate access level to your project (see the section on credentials below) installed in Jenkins. Set up your GitHub Organization as follows:

Go to Jenkins > New Item > GitHub Organization screenshot1

In the Repository Sources section, complete the section for "GitHub Organization". You can specify a username or an organization in the "Owner" field. Select or add new "Scan credentials". These credentials should allow you to at least see the contents of your targeted repository. Optionally, change the "Repository name pattern". screenshot2

Save, and wait for the "Folder Computation" to run. Job progress is displayed to the left hand side. screenshot3

You may need to refresh the page after it runs to see your repositories. screenshot4

At this point, you are finished with basic project configuration. You can now explore your imported repositories, configuring different settings on each of those folders if needed. You can also investigate the results of the jobs run as part of the Folder Computation. screenshot5

Credentials

There are two different types of credentials for the GitHub organization:

  • scan credentials: used to access the GitHub API

  • checkout credentials: used to clone the repository from GitHub

You can use different credentials for each as long as they have the appropriate permissions. There is an anonymous option for checkout. Regardless, you should create a GitHub access token to avoid storing your password in Jenkins and to prevent any issues when using the GitHub API. When using a GitHub access token, you must use standard Username with password credentials, where the username is the same as your GitHub username and the password is your access token.

Exploring an imported repository

Each of your imported repositories is a folder containing a project for each branch with a Pipeline script defined in a Jenkinsfile in the root directory. You cannot modify the settings on these folders.

Build triggers & webhooks

You can change the build configuration to perform the folder computation at different times. The default setting will scan your GitHub repository for changes if it has not received any change notifications from GitHub within the past day. This scan will catch the repositories you’ve added to or removed from the managed GitHub organization. You can always force the Folder Computation to run from the GitHub Organization page.

screenshot6

While the build triggers are often enough, you can set up webhooks to automatically trigger builds when changes are pushed to your GitHub repositories. To do this you must have a GitHub login with a token.

  1. Go to the main configuration settings page, Manage Jenkins > Configure System

  2. In the GitHub Plugin Configuration section, add a server with your credentials

  3. If you need a token, generate one with Additional Actions > Convert login and password to token

You can also configure this manually through GitHub itself by registering the URL provided in the help section of the server config.

Controlling what is built

By default, Jenkins will build any branches it finds in the “origin” repository (the actual repository found in your organization). Also, any pull requests that have been filed from “forks” (clones in user accounts) will also be built; Jenkins will try to merge the pull request with the “base” (target) branch on every commit, to check for possible integration conflicts. Pull requests filed from the origin repository are skipped as these are already getting built as plain branches.

If you would like to customize this behavior, as of version 1.8 of the plugin you can click the Advanced button and then turn on and off certain kinds of branch projects:

build types

In particular, you can choose to build origin-repository pull requests in addition to, or instead of, building the plain branches; and you can choose to build pull requests without merging with the base branch, or in addition to building the merge product.

CloudBees Assurance Program

Introduction

The CloudBees Assurance Program adds stability and security to your Jenkins instance by automatically monitoring and optionally modifying Jenkins plugins for compatibility. Once a Jenkins instance is enrolled in the CloudBees Assurance Program, the Beekeeper Plugin monitors installed plugin versions and compares the configuration against the verified and trusted CAP Collection. The Beekeeper Page visualizes the CAP Collection, and through CAP enforcement, Beekeeper automatically configures plugin versions to further improve the stability and security of your Jenkins instance.

Note If you are testing functionality known to require plugin versions newer than those currently identified as verified or trusted (for example, beta plugins), you have the option of disabling Beekeeper for the duration of your tests.

CAP Collection

The CAP Collection specifies the set of plugins, plugin versions, and plugin dependencies that are either verified or trusted, depending on how much they have been tested. Not only are these plugins independently stable, but they are tested as a whole (in aggregate) to ensure compatibility with each other and your Jenkins instance.

Beekeeper Page

The Beekeeper Page is the administrative dashboard of the CloudBees Assurance Program. It provides a centralized view of the monitored Jenkins plugins, recommended actions, and configuration options. The Beekeeper assessment of plugin status is displayed in a table on the Beekeeper page. If a plugin is installed at the CAP Collection version, that status is indicated in a column to the right of the plugin name. Any problematic plugins are listed separately under the heading "Issues found in analyzed components", with a column entry describing the recommended action for each.

Configuration

The following configuration options are available, one of which (Enrollment) is selected by default:

  • Enroll this instance in the CloudBees Assurance Program

  • Allow automatic upgrades of plugins on restart

  • Allow automatic downgrades of plugins on restart

When a Jenkins instance is enrolled in the CloudBees Assurance Program, two changes are immediately made: the Beekeeper Page displays the status of the CAP Collection and the Update Center configuration is modified to the CloudBees CAP Update Center. The CloudBees CAP Update Center modifies the list of "Updates" and "Available" plugins in the plugin manager to hide plugins that would be incompatible with CAP (because of dependencies on versions of plugins that are outside of CAP, etc.).

Provided you do not uncheck the "Enroll this instance in the CloudBees Assurance Program" option, you can select automatic upgrade, automatic downgrade, or both. These actions will automatically install the CAP Collection version of plugins relative to the version of the currently installed plugins, so as to ensure that your instance runs efficiently and is free of potential plugin incompatibilities.

Note Although it is possible to enable "Allow automatic downgrades of plugins on restart", CloudBees does not generally recommend doing that.

CAP Enforcement

Automatic Plugin upgrade and plugin downgrade together comprise CAP Enforcement, which is only possible if your instance is enrolled in the CloudBees Assurance Program. When CAP Enforcement is applied, plugins are automatically modified to the CAP Envelope version. Plugins which are not part of the CAP Envelope are not changed.

Upgrade Assistant

Simplified Upgrades

Beekeeper Upgrade Assistant now allows users to review and install upgrades for verified components in the CloudBees Assurance Program. The Beekeeper Plugin provides, as of version 2.32.0.1, an upgrade assistant through which CloudBees can update the configuration between releases, so that fixes can be safely deployed while keeping the Jenkins instance in the recommended configuration. A Simplified Upgrade only involves plugin upgrades, never a core upgrade.

The Beekeeper Plugin will display an alert if there is an upgrade available:

monitor
Figure 135. Administrative Monitor advising of a new upgrade

Clicking the Manage button displays a new screen showing information about the upgrade and the effect it will have on the running instance:

upgradeView
Figure 136. Information about the available upgrade and possible actions to perform

The information and actions offered are:

  1. The current version and revision.

  2. The offered revision.

  3. The plugin operations to perform with the current CAP configuration.

    1. A button (Upgrade Now) to install the upgrade with the current configuration.

  4. The plugin operations to perform if "Allow automatic upgrades of plugins on restart" config value was enabled. This information is only shown if the mentioned config value is disabled.

    1. A button (Update config and upgrade now) to install the upgrade after enabling "Allow automatic upgrades of plugins on restart". When performing a Simplified Upgrade, this is the recommended approach to take.

Once an upgrade is installed, the set of recommended plugins enforced by the CloudBees Assurance Program automatically changes, so the report offered by the Beekeeper Plugin may change from compliant to non-compliant. This is completely normal, because restarting the instance is required for the upgrade to complete. After the restart, the Beekeeper report should return to its previous state. The following message prompts for restart:

success
Figure 137. Successful upgrade

When an upgrade fails, the Jenkins admin is informed and Retry and Cancel options are offered:

failed
Figure 138. Failed upgrade

Appendix - Plugins Included in CJE 2.x

Introduction

Each CJE 2.x release includes a set of plugins. These plugins are classified as required and optional. This classification affects the behaviour in the different install and upgrade scenarios. As an example, for a fresh installation:

  • Required plugins will always be installed.

  • Optional plugins will only be installed if selected in the install wizard. The rest of the plugins will still be available through the Plugin Manager.

Rolling Train

CJE 2.32.3.2

CJE 2.32.3.2 includes Jenkins Core 2.32.3

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.32.0.5

  • cloudbees-folder 6.0.2

  • cloudbees-folders-plus 3.1

  • cloudbees-license 9.8

  • credentials 2.1.13

  • display-url-api 1.1.1

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.20

  • mailer 1.20

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.6

  • nectar-rbac 5.15

  • operations-center-agent 2.32.0.2

  • operations-center-client 2.32.0.2

  • operations-center-context 2.32.0.13

  • structs 1.6

  • support-core 2.39

  • token-macro 2.0

  • variant 1.1

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 2.4

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.2

  • blueocean-autofavorite 0.6

  • blueocean-commons 1.0.0-rc1

  • blueocean-config 1.0.0-rc1

  • blueocean-dashboard 1.0.0-rc1

  • blueocean-display-url 1.5.1

  • blueocean-events 1.0.0-rc1

  • blueocean-git-pipeline 1.0.0-rc1

  • blueocean-github-pipeline 1.0.0-rc1

  • blueocean-i18n 1.0.0-rc1

  • blueocean-jwt 1.0.0-rc1

  • blueocean-personalization 1.0.0-rc1

  • blueocean-pipeline-api-impl 1.0.0-rc1

  • blueocean-pipeline-editor 0.2-rc1

  • blueocean-rest-impl 1.0.0-rc1

  • blueocean-rest 1.0.0-rc1

  • blueocean-web 1.0.0-rc1

  • blueocean 1.0.0-rc1

  • branch-api 2.0.8

  • build-timeout 1.18

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.6

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.17

  • cloudbees-bitbucket-branch-source 2.1.1

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.7

  • cloudbees-support 3.8

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.6

  • cloudbees-workflow-ui 2.1

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.10

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.2

  • docker-commons 1.6

  • docker-traceability 1.2

  • docker-workflow 1.10

  • dockerhub-notification 2.2.0

  • durable-task 1.13

  • email-ext 2.57.2

  • external-monitor-job 1.7

  • favorite 2.0.4

  • git-client 2.3.0

  • git-server 1.7

  • git-validated-merge 3.20

  • git 3.1.0

  • github-api 1.85

  • github-branch-source 2.0.4

  • github-organization-folder 1.6

  • github-pull-request-build 1.10

  • github 1.26.1

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.33

  • javadoc 1.4

  • jquery-detached 1.2.1

  • jquery 1.11.2-0

  • ldap 1.14

  • matrix-auth 1.5

  • matrix-project 1.8

  • maven-plugin 2.14

  • mercurial 1.59

  • momentjs 1.1.1

  • monitoring 1.63.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.2

  • openshift-cli 1.3

  • operations-center-analytics-config 2.32.0.1

  • operations-center-analytics-reporter 2.32.0.1

  • operations-center-cloud 2.32.0.2

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.4

  • pipeline-github-lib 1.0

  • pipeline-graph-analysis 1.3

  • pipeline-input-step 2.5

  • pipeline-milestone-step 1.3

  • pipeline-model-api 1.0.2

  • pipeline-model-declarative-agent 1.0.2

  • pipeline-model-definition 1.0.2

  • pipeline-rest-api 2.6

  • pipeline-stage-step 2.2

  • pipeline-stage-tags-metadata 1.0.2

  • pipeline-stage-view 2.6

  • plain-credentials 1.4

  • promoted-builds 2.28.1

  • pubsub-light 1.7

  • run-condition 1.0

  • scm-api 2.1.0

  • script-security 1.27

  • secure-requester-whitelist 1.1

  • skip-plugin 4.0

  • sse-gateway 1.15

  • ssh-agent 1.14

  • ssh-credentials 1.13

  • ssh-slaves 1.16

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.5

  • workflow-api 2.12

  • workflow-basic-steps 2.4

  • workflow-cps-checkpoint 2.4

  • workflow-cps-global-lib 2.7

  • workflow-cps 2.29

  • workflow-durable-task-step 2.10

  • workflow-job 2.10

  • workflow-multibranch 2.14

  • workflow-scm-step 2.4

  • workflow-step-api 2.9

  • workflow-support 2.13

CJE 2.32.3.1

CJE 2.32.3.1 includes Jenkins Core 2.32.3

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.32.0.3

  • cloudbees-folder 6.0.2

  • cloudbees-folders-plus 3.1

  • cloudbees-license 9.8

  • credentials 2.1.13

  • display-url-api 1.1.1

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.20

  • mailer 1.20

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.6

  • nectar-rbac 5.13

  • operations-center-agent 2.32.0.2

  • operations-center-client 2.32.0.2

  • operations-center-context 2.32.0.13

  • structs 1.6

  • support-core 2.39

  • token-macro 2.0

  • variant 1.1

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 2.3

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.2

  • blueocean-autofavorite 0.6

  • blueocean-commons 1.0.0-rc1

  • blueocean-config 1.0.0-rc1

  • blueocean-dashboard 1.0.0-rc1

  • blueocean-display-url 1.5.1

  • blueocean-events 1.0.0-rc1

  • blueocean-git-pipeline 1.0.0-rc1

  • blueocean-github-pipeline 1.0.0-rc1

  • blueocean-i18n 1.0.0-rc1

  • blueocean-jwt 1.0.0-rc1

  • blueocean-personalization 1.0.0-rc1

  • blueocean-pipeline-api-impl 1.0.0-rc1

  • blueocean-pipeline-editor 0.2-rc1

  • blueocean-rest-impl 1.0.0-rc1

  • blueocean-rest 1.0.0-rc1

  • blueocean-web 1.0.0-rc1

  • blueocean 1.0.0-rc1

  • branch-api 2.0.8

  • build-timeout 1.18

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.6

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.17

  • cloudbees-bitbucket-branch-source 2.1.1

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.7

  • cloudbees-support 3.8

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.6

  • cloudbees-workflow-ui 2.1

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.10

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.2

  • docker-commons 1.6

  • docker-traceability 1.2

  • docker-workflow 1.10

  • dockerhub-notification 2.2.0

  • durable-task 1.13

  • email-ext 2.57.1

  • external-monitor-job 1.7

  • favorite 2.0.4

  • git-client 2.3.0

  • git-server 1.7

  • git-validated-merge 3.20

  • git 3.1.0

  • github-api 1.85

  • github-branch-source 2.0.4

  • github-organization-folder 1.6

  • github-pull-request-build 1.10

  • github 1.26.1

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.33

  • javadoc 1.4

  • jquery-detached 1.2.1

  • jquery 1.11.2-0

  • ldap 1.14

  • matrix-auth 1.4

  • matrix-project 1.8

  • maven-plugin 2.14

  • mercurial 1.59

  • momentjs 1.1.1

  • monitoring 1.63.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.2

  • openshift-cli 1.3

  • operations-center-analytics-config 2.32.0.1

  • operations-center-analytics-reporter 2.32.0.1

  • operations-center-cloud 2.32.0.2

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.4

  • pipeline-github-lib 1.0

  • pipeline-graph-analysis 1.3

  • pipeline-input-step 2.5

  • pipeline-milestone-step 1.3

  • pipeline-model-api 1.0.2

  • pipeline-model-declarative-agent 1.0.2

  • pipeline-model-definition 1.0.2

  • pipeline-rest-api 2.6

  • pipeline-stage-step 2.2

  • pipeline-stage-tags-metadata 1.0.2

  • pipeline-stage-view 2.6

  • plain-credentials 1.4

  • promoted-builds 2.28.1

  • pubsub-light 1.7

  • run-condition 1.0

  • scm-api 2.1.0

  • script-security 1.27

  • secure-requester-whitelist 1.1

  • skip-plugin 4.0

  • sse-gateway 1.15

  • ssh-agent 1.14

  • ssh-credentials 1.13

  • ssh-slaves 1.15

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.5

  • workflow-api 2.12

  • workflow-basic-steps 2.4

  • workflow-cps-checkpoint 2.4

  • workflow-cps-global-lib 2.7

  • workflow-cps 2.29

  • workflow-durable-task-step 2.10

  • workflow-job 2.10

  • workflow-multibranch 2.14

  • workflow-scm-step 2.4

  • workflow-step-api 2.9

  • workflow-support 2.13

CJE 2.32.2.7

CJE 2.32.2.7 includes Jenkins Core 2.32.2

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.32.0.3

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-license 9.6

  • cloudbees-support 3.8

  • credentials 2.1.10

  • display-url-api 1.1.1

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.20

  • mailer 1.20

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.4

  • nectar-rbac 5.13

  • operations-center-agent 2.32.0.2

  • operations-center-client 2.32.0.2

  • operations-center-context 2.32.0.10

  • structs 1.5

  • support-core 2.38

  • token-macro 2.0

  • variant 1.1

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 2.3

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.2

  • branch-api 1.11.1

  • build-timeout 1.18

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.6

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.17

  • cloudbees-bitbucket-branch-source 1.9

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.7

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5

  • cloudbees-workflow-ui 2.1

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.10

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.2

  • docker-commons 1.6

  • docker-traceability 1.2

  • docker-workflow 1.10

  • dockerhub-notification 2.2.0

  • durable-task 1.13

  • email-ext 2.57.1

  • external-monitor-job 1.7

  • git-client 2.2.1

  • git-server 1.7

  • git-validated-merge 3.20

  • git 3.0.1

  • github-api 1.82

  • github-branch-source 1.10.1

  • github-organization-folder 1.5

  • github-pull-request-build 1.10

  • github 1.25.1

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.33

  • javadoc 1.4

  • jquery-detached 1.2.1

  • jquery 1.11.2-0

  • ldap 1.14

  • matrix-auth 1.4

  • matrix-project 1.8

  • maven-plugin 2.14

  • mercurial 1.57

  • momentjs 1.1.1

  • monitoring 1.63.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.32.0.1

  • operations-center-analytics-reporter 2.32.0.1

  • operations-center-cloud 2.32.0.2

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.4

  • pipeline-graph-analysis 1.3

  • pipeline-input-step 2.5

  • pipeline-milestone-step 1.3

  • pipeline-model-api 1.0.1

  • pipeline-model-declarative-agent 1.0.1

  • pipeline-model-definition 1.0.1

  • pipeline-rest-api 2.5

  • pipeline-stage-step 2.2

  • pipeline-stage-tags-metadata 1.0.1

  • pipeline-stage-view 2.5

  • plain-credentials 1.3

  • promoted-builds 2.28.1

  • run-condition 1.0

  • scm-api 1.3

  • script-security 1.25

  • secure-requester-whitelist 1.1

  • skip-plugin 4.0

  • ssh-agent 1.14

  • ssh-credentials 1.12

  • ssh-slaves 1.15

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.4

  • workflow-api 2.11

  • workflow-basic-steps 2.4

  • workflow-cps-checkpoint 2.4

  • workflow-cps-global-lib 2.6

  • workflow-cps 2.27

  • workflow-durable-task-step 2.9

  • workflow-job 2.10

  • workflow-multibranch 2.9.2

  • workflow-scm-step 2.3

  • workflow-step-api 2.9

  • workflow-support 2.13

CJE 2.32.2.6

CJE 2.32.2.6 includes Jenkins Core 2.32.2

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.32.0.3

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-license 9.6

  • cloudbees-support 3.8

  • credentials 2.1.10

  • display-url-api 0.5

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.20

  • mailer 1.18

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.4

  • nectar-rbac 5.13

  • operations-center-agent 2.32.0.2

  • operations-center-client 2.32.0.2

  • operations-center-context 2.32.0.10

  • structs 1.5

  • support-core 2.38

  • token-macro 2.0

  • variant 1.1

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.48

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.2

  • branch-api 1.11.1

  • build-timeout 1.18

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.6

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.17

  • cloudbees-bitbucket-branch-source 1.9

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.7

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5

  • cloudbees-workflow-ui 2.1

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.10

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.2

  • docker-commons 1.6

  • docker-traceability 1.2

  • docker-workflow 1.10

  • dockerhub-notification 2.2.0

  • durable-task 1.13

  • email-ext 2.53

  • external-monitor-job 1.7

  • git-client 2.2.1

  • git-server 1.7

  • git-validated-merge 3.20

  • git 3.0.1

  • github-api 1.82

  • github-branch-source 1.10.1

  • github-organization-folder 1.5

  • github-pull-request-build 1.10

  • github 1.25.1

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.33

  • javadoc 1.4

  • jquery-detached 1.2.1

  • jquery 1.11.2-0

  • ldap 1.14

  • matrix-auth 1.4

  • matrix-project 1.8

  • maven-plugin 2.14

  • mercurial 1.57

  • momentjs 1.1.1

  • monitoring 1.63.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.32.0.1

  • operations-center-analytics-reporter 2.32.0.1

  • operations-center-cloud 2.32.0.2

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.4

  • pipeline-graph-analysis 1.3

  • pipeline-input-step 2.5

  • pipeline-milestone-step 1.3

  • pipeline-model-api 1.0.1

  • pipeline-model-declarative-agent 1.0.1

  • pipeline-model-definition 1.0.1

  • pipeline-rest-api 2.5

  • pipeline-stage-step 2.2

  • pipeline-stage-tags-metadata 1.0.1

  • pipeline-stage-view 2.5

  • plain-credentials 1.3

  • promoted-builds 2.28.1

  • run-condition 1.0

  • scm-api 1.3

  • script-security 1.25

  • secure-requester-whitelist 1.1

  • skip-plugin 4.0

  • ssh-agent 1.14

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.4

  • workflow-api 2.11

  • workflow-basic-steps 2.4

  • workflow-cps-checkpoint 2.4

  • workflow-cps-global-lib 2.6

  • workflow-cps 2.27

  • workflow-durable-task-step 2.9

  • workflow-job 2.10

  • workflow-multibranch 2.9.2

  • workflow-scm-step 2.3

  • workflow-step-api 2.9

  • workflow-support 2.13

CJE 2.32.2.1

CJE 2.32.2.1 includes Jenkins Core 2.32.2

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.32.0.1

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-license 9.3

  • cloudbees-support 3.7

  • credentials 2.1.10

  • display-url-api 0.5

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.18

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.3

  • nectar-rbac 5.12

  • operations-center-agent 2.32.0.1

  • operations-center-client 2.32.0.1

  • operations-center-context 2.32.0.7

  • structs 1.5

  • support-core 2.33

  • token-macro 2.0

  • variant 1.1

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.48

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.2

  • branch-api 1.11.1

  • build-timeout 1.18

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.6

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.17

  • cloudbees-bitbucket-branch-source 1.8

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.7

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5

  • cloudbees-workflow-ui 2.1

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.10

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.2

  • docker-commons 1.5

  • docker-traceability 1.2

  • docker-workflow 1.9.1

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.53

  • external-monitor-job 1.6

  • git-client 2.2.0

  • git-server 1.7

  • git-validated-merge 3.20

  • git 3.0.1

  • github-api 1.77

  • github-branch-source 1.10

  • github-organization-folder 1.5

  • github-pull-request-build 1.10

  • github 1.25.0

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.32

  • javadoc 1.4

  • jquery-detached 1.2.1

  • jquery 1.11.2-0

  • ldap 1.13

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.14

  • mercurial 1.57

  • momentjs 1.1.1

  • monitoring 1.62.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.32.0.1

  • operations-center-analytics-reporter 2.32.0.1

  • operations-center-cloud 2.32.0.1

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.4

  • pipeline-graph-analysis 1.2

  • pipeline-input-step 2.5

  • pipeline-milestone-step 1.3

  • pipeline-rest-api 2.4

  • pipeline-stage-step 2.2

  • pipeline-stage-view 2.4

  • plain-credentials 1.3

  • promoted-builds 2.28

  • run-condition 1.0

  • scm-api 1.3

  • script-security 1.25

  • secure-requester-whitelist 1.1

  • skip-plugin 4.0

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.4

  • workflow-api 2.8

  • workflow-basic-steps 2.3

  • workflow-cps-checkpoint 2.4

  • workflow-cps-global-lib 2.5

  • workflow-cps 2.24

  • workflow-durable-task-step 2.8

  • workflow-job 2.9

  • workflow-multibranch 2.9.2

  • workflow-scm-step 2.3

  • workflow-step-api 2.7

  • workflow-support 2.12

CJE 2.32.1.1

CJE 2.32.1.1 includes Jenkins Core 2.32.1

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.32.0.1

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-license 9.3

  • cloudbees-support 3.7

  • credentials 2.1.10

  • display-url-api 0.5

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.18

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.3

  • nectar-rbac 5.9

  • operations-center-agent 2.32.0.1

  • operations-center-client 2.32.0.1

  • operations-center-context 2.32.0.3

  • structs 1.5

  • support-core 2.33

  • token-macro 2.0

  • variant 1.1

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.48

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.2

  • branch-api 1.11.1

  • build-timeout 1.18

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.6

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.17

  • cloudbees-bitbucket-branch-source 1.8

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.7

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5

  • cloudbees-workflow-ui 2.1

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.10

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.2

  • docker-commons 1.5

  • docker-traceability 1.2

  • docker-workflow 1.9.1

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.53

  • external-monitor-job 1.6

  • git-client 2.2.0

  • git-server 1.7

  • git-validated-merge 3.20

  • git 3.0.1

  • github-api 1.77

  • github-branch-source 1.10

  • github-organization-folder 1.5

  • github-pull-request-build 1.10

  • github 1.25.0

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.32

  • javadoc 1.4

  • jquery-detached 1.2.1

  • jquery 1.11.2-0

  • ldap 1.13

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.14

  • mercurial 1.57

  • momentjs 1.1.1

  • monitoring 1.62.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.32.0.1

  • operations-center-analytics-reporter 2.32.0.1

  • operations-center-cloud 2.32.0.1

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.4

  • pipeline-graph-analysis 1.2

  • pipeline-input-step 2.5

  • pipeline-milestone-step 1.3

  • pipeline-rest-api 2.4

  • pipeline-stage-step 2.2

  • pipeline-stage-view 2.4

  • plain-credentials 1.3

  • promoted-builds 2.28

  • run-condition 1.0

  • scm-api 1.3

  • script-security 1.25

  • secure-requester-whitelist 1.1

  • skip-plugin 4.0

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.4

  • workflow-api 2.8

  • workflow-basic-steps 2.3

  • workflow-cps-checkpoint 2.4

  • workflow-cps-global-lib 2.5

  • workflow-cps 2.24

  • workflow-durable-task-step 2.8

  • workflow-job 2.9

  • workflow-multibranch 2.9.2

  • workflow-scm-step 2.3

  • workflow-step-api 2.7

  • workflow-support 2.12

CJE 2.19.4.2

CJE 2.19.4.2 includes Jenkins Core 2.19.4

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.7.3.4

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-license 9.2

  • credentials 2.1.6

  • display-url-api 0.5

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.18

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • nectar-rbac 5.9

  • operations-center-agent 2.19.0.3

  • operations-center-client 2.19.0.4

  • operations-center-context 2.19.2.3

  • structs 1.5

  • support-core 2.33

  • token-macro 2.0

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.48

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.2

  • branch-api 1.11.1

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.6

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.17

  • cloudbees-bitbucket-branch-source 1.8

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5

  • cloudbees-support 3.7

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5

  • cloudbees-workflow-ui 2.1

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.1

  • docker-commons 1.5

  • docker-traceability 1.2

  • docker-workflow 1.9.1

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.51

  • external-monitor-job 1.6

  • git-client 1.19.7

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3

  • github-api 1.77

  • github-branch-source 1.10

  • github-organization-folder 1.5

  • github-pull-request-build 1.10

  • github 1.22.3

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.31

  • javadoc 1.4

  • jquery-detached 1.2.1

  • jquery 1.11.2-0

  • ldap 1.13

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.57

  • momentjs 1.1.1

  • monitoring 1.62.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.19.0.2

  • operations-center-analytics-reporter 2.19.0.2

  • operations-center-cloud 2.19.0.2

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.3

  • pipeline-graph-analysis 1.2

  • pipeline-input-step 2.3

  • pipeline-milestone-step 1.1

  • pipeline-rest-api 2.1

  • pipeline-stage-step 2.2

  • pipeline-stage-view 2.1

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.3

  • script-security 1.24

  • secure-requester-whitelist 1.0

  • skip-plugin 4.0

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.4

  • workflow-api 2.5

  • workflow-basic-steps 2.2

  • workflow-cps-checkpoint 2.4

  • workflow-cps-global-lib 2.4

  • workflow-cps 2.21

  • workflow-durable-task-step 2.5

  • workflow-job 2.8

  • workflow-multibranch 2.9

  • workflow-scm-step 2.2

  • workflow-step-api 2.4

  • workflow-support 2.10

CJE 2.19.3.1

CJE 2.19.3.1 includes Jenkins Core 2.19.3

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.7.3.2

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0 suggested

  • cloudbees-license 9.2

  • credentials 2.1.6

  • display-url-api 0.5

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.18 suggested

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • nectar-rbac 5.9 suggested

  • operations-center-agent 2.19.0.2

  • operations-center-client 2.19.0.4

  • operations-center-context 2.19.2.3

  • structs 1.5

  • support-core 2.33

  • token-macro 2.0

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.47

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.1

  • branch-api 1.11.1

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.6

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.17

  • cloudbees-bitbucket-branch-source 1.8

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5 suggested

  • cloudbees-ha 4.7 suggested

  • cloudbees-jsync-archiver 5.5 suggested

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5 suggested

  • cloudbees-nodes-plus 1.14 suggested

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5 suggested

  • cloudbees-support 3.7 suggested

  • cloudbees-template 4.26 suggested

  • cloudbees-view-creation-filter 1.3 suggested

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5 suggested

  • cloudbees-workflow-ui 2.1 suggested

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8 suggested

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.1

  • docker-commons 1.5

  • docker-traceability 1.2

  • docker-workflow 1.9.1

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.51 suggested

  • external-monitor-job 1.6

  • git-client 1.19.7 suggested

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3 suggested

  • github-api 1.77

  • github-branch-source 1.10 suggested

  • github-organization-folder 1.5 suggested

  • github-pull-request-build 1.10

  • github 1.22.3

  • gradle 1.25 suggested

  • handlebars 1.1.1

  • infradna-backup 3.31 suggested

  • javadoc 1.4

  • jquery-detached 1.2.1

  • ldap 1.13 suggested

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.57

  • momentjs 1.1.1

  • monitoring 1.62.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.19.0.2

  • operations-center-analytics-reporter 2.19.0.2 suggested

  • operations-center-cloud 2.19.0.2 suggested

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.3

  • pipeline-graph-analysis 1.2

  • pipeline-input-step 2.3

  • pipeline-milestone-step 1.1

  • pipeline-rest-api 2.1

  • pipeline-stage-step 2.2

  • pipeline-stage-view 2.1 suggested

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.3 suggested

  • script-security 1.24

  • secure-requester-whitelist 1.0

  • skip-plugin 4.0

  • ssh-agent 1.13

  • ssh-credentials 1.12 suggested

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7 suggested

  • windows-slaves 1.2

  • workflow-aggregator 2.4 suggested

  • workflow-api 2.5

  • workflow-basic-steps 2.2

  • workflow-cps-checkpoint 2.4 suggested

  • workflow-cps-global-lib 2.4

  • workflow-cps 2.21

  • workflow-durable-task-step 2.5

  • workflow-job 2.8

  • workflow-multibranch 2.9

  • workflow-scm-step 2.2

  • workflow-step-api 2.4

  • workflow-support 2.10

CJE 2.7.21.1

CJE 2.7.21.1 includes Jenkins Core 2.7.21

The list of plugins is the same as in the previous version, 2.7.20.2.

CJE 2.7.20.2

CJE 2.7.20.2 includes Jenkins Core 2.7.20

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.2

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0 suggested

  • cloudbees-license 8.1

  • credentials 2.1.4

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • mailer 1.17 suggested

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • nectar-rbac 5.9 suggested

  • operations-center-agent 2.7.0.0

  • operations-center-client 2.7.0.1

  • operations-center-context 2.7.0.5

  • structs 1.5

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.47

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.1

  • bouncycastle-api 1.648.3

  • branch-api 1.11

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.5

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.15

  • cloudbees-bitbucket-branch-source 1.2

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5 suggested

  • cloudbees-ha 4.7 suggested

  • cloudbees-jsync-archiver 5.5 suggested

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5 suggested

  • cloudbees-nodes-plus 1.14 suggested

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5 suggested

  • cloudbees-support 3.7 suggested

  • cloudbees-template 4.26 suggested

  • cloudbees-view-creation-filter 1.3 suggested

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.4 suggested

  • cloudbees-workflow-ui 2.1 suggested

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8 suggested

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • display-url-api 0.5

  • docker-build-publish 1.3.1

  • docker-commons 1.4.0

  • docker-traceability 1.2

  • docker-workflow 1.8

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.45 suggested

  • external-monitor-job 1.6

  • git-client 1.19.7 suggested

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3 suggested

  • github-api 1.77

  • github-branch-source 1.10 suggested

  • github-organization-folder 1.5 suggested

  • github-pull-request-build 1.10

  • github 1.19.1

  • gradle 1.25 suggested

  • handlebars 1.1.1

  • infradna-backup 3.30 suggested

  • javadoc 1.4

  • jquery-detached 1.2.1

  • junit 1.18

  • ldap 1.12 suggested

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.56

  • momentjs 1.1.1

  • monitoring 1.59.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0 suggested

  • operations-center-cloud 2.7.0.0 suggested

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.3

  • pipeline-graph-analysis 1.1

  • pipeline-input-step 2.1

  • pipeline-milestone-step 1.1

  • pipeline-rest-api 2.1

  • pipeline-stage-step 2.2

  • pipeline-stage-view 2.1 suggested

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.3 suggested

  • script-security 1.23

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12 suggested

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7 suggested

  • windows-slaves 1.2

  • workflow-aggregator 2.4 suggested

  • workflow-api 2.4

  • workflow-basic-steps 2.2

  • workflow-cps-checkpoint 2.3 suggested

  • workflow-cps-global-lib 2.4

  • workflow-cps 2.17

  • workflow-durable-task-step 2.5

  • workflow-job 2.6

  • workflow-multibranch 2.9

  • workflow-scm-step 2.2

  • workflow-step-api 2.4

  • workflow-support 2.8

CJE 2.7.19.1

CJE 2.7.19.1 includes Jenkins Core 2.7.19

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.1

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0 suggested

  • cloudbees-license 8.1

  • credentials 2.1.4

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • mailer 1.17 suggested

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.0

  • nectar-rbac 5.8 suggested

  • operations-center-agent 2.7.0.0

  • operations-center-client 2.7.0.0

  • operations-center-context 2.7.0.0

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.47

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.1

  • bouncycastle-api 1.648.3

  • branch-api 1.10.2

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.5

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.15

  • cloudbees-bitbucket-branch-source 1.2

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5 suggested

  • cloudbees-ha 4.7 suggested

  • cloudbees-jsync-archiver 5.5 suggested

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5 suggested

  • cloudbees-nodes-plus 1.14 suggested

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5 suggested

  • cloudbees-support 3.7 suggested

  • cloudbees-template 4.26 suggested

  • cloudbees-view-creation-filter 1.3 suggested

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.3 suggested

  • cloudbees-workflow-ui 2.0

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8 suggested

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.1

  • docker-commons 1.4.0

  • docker-traceability 1.2

  • docker-workflow 1.7

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.45 suggested

  • external-monitor-job 1.6

  • git-client 1.19.7 suggested

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3 suggested

  • github-api 1.76

  • github-branch-source 1.8.1 suggested

  • github-organization-folder 1.4 suggested

  • github-pull-request-build 1.10

  • github 1.19.1

  • gradle 1.25 suggested

  • handlebars 1.1.1

  • infradna-backup 3.30 suggested

  • javadoc 1.4

  • jquery-detached 1.2.1

  • junit 1.18

  • ldap 1.12 suggested

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.56

  • momentjs 1.1.1

  • monitoring 1.59.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-cloud 2.7.0.0 suggested

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.2

  • pipeline-input-step 2.0

  • pipeline-rest-api 1.6

  • pipeline-stage-step 2.1

  • pipeline-stage-view 1.6 suggested

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.2 suggested

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12 suggested

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7 suggested

  • windows-slaves 1.2

  • workflow-aggregator 2.2 suggested

  • workflow-api 2.1

  • workflow-basic-steps 2.1

  • workflow-cps-checkpoint 2.2 suggested

  • workflow-cps-global-lib 2.1

  • workflow-cps 2.10

  • workflow-durable-task-step 2.4

  • workflow-job 2.4

  • workflow-multibranch 2.8

  • workflow-scm-step 2.2

  • workflow-step-api 2.3

  • workflow-support 2.2

Fixed Train

CJE 2.7.22.0.3

CJE 2.7.22.0.3 includes Jenkins Core 2.7.22

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 1.648.3

  • cloudbees-assurance 2.7.3.4

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0

  • cloudbees-license 8.1

  • credentials 2.1.4

  • display-url-api 1.1.1

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.20

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • nectar-rbac 5.15

  • operations-center-agent 2.7.0.0.2

  • operations-center-client 2.7.0.0.2

  • operations-center-context 2.7.0.0.2

  • structs 1.3

  • support-core 2.32

  • token-macro 2.0

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 2.3

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.1

  • branch-api 1.10.2

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.5

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.15

  • cloudbees-bitbucket-branch-source 1.2

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5

  • cloudbees-support 3.7

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5

  • cloudbees-workflow-ui 2.0

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.1

  • docker-commons 1.4.0

  • docker-traceability 1.2

  • docker-workflow 1.7

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.57.2

  • external-monitor-job 1.6

  • git-client 1.19.7

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3

  • github-api 1.76

  • github-branch-source 1.8.1

  • github-organization-folder 1.4

  • github-pull-request-build 1.10

  • github 1.19.1

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.30

  • javadoc 1.4

  • jquery-detached 1.2.1

  • ldap 1.12

  • matrix-auth 1.5

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.56

  • momentjs 1.1.1

  • monitoring 1.59.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-cloud 2.7.0.0.1

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.2

  • pipeline-input-step 2.0

  • pipeline-rest-api 1.6

  • pipeline-stage-step 2.1

  • pipeline-stage-view 1.6

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.2

  • script-security 1.27

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.15

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.2

  • workflow-api 2.1

  • workflow-basic-steps 2.1

  • workflow-cps-checkpoint 2.2

  • workflow-cps-global-lib 2.1

  • workflow-cps 2.10

  • workflow-durable-task-step 2.4

  • workflow-job 2.4

  • workflow-multibranch 2.8

  • workflow-scm-step 2.2

  • workflow-step-api 2.3

  • workflow-support 2.2

CJE 2.7.22.0.2

CJE 2.7.22.0.2 includes Jenkins Core 2.7.22

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.4

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0

  • cloudbees-license 8.1

  • credentials 2.1.4

  • display-url-api 1.1.1

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.20

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • nectar-rbac 5.9.1

  • operations-center-agent 2.7.0.0.2

  • operations-center-client 2.7.0.0.2

  • operations-center-context 2.7.0.0.2

  • structs 1.3

  • support-core 2.32

  • token-macro 2.0

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 2.3

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.1

  • bouncycastle-api 1.648.3

  • branch-api 1.10.2

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.5

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.15

  • cloudbees-bitbucket-branch-source 1.2

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5

  • cloudbees-support 3.7

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5

  • cloudbees-workflow-ui 2.0

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.1

  • docker-commons 1.4.0

  • docker-traceability 1.2

  • docker-workflow 1.7

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.57.1

  • external-monitor-job 1.6

  • git-client 1.19.7

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3

  • github-api 1.76

  • github-branch-source 1.8.1

  • github-organization-folder 1.4

  • github-pull-request-build 1.10

  • github 1.19.1

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.30

  • javadoc 1.4

  • jquery-detached 1.2.1

  • ldap 1.12

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.56

  • momentjs 1.1.1

  • monitoring 1.59.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-cloud 2.7.0.0.1

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.2

  • pipeline-input-step 2.0

  • pipeline-rest-api 1.6

  • pipeline-stage-step 2.1

  • pipeline-stage-view 1.6

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.2

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.15

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.2

  • workflow-api 2.1

  • workflow-basic-steps 2.1

  • workflow-cps-checkpoint 2.2

  • workflow-cps-global-lib 2.1

  • workflow-cps 2.10

  • workflow-durable-task-step 2.4

  • workflow-job 2.4

  • workflow-multibranch 2.8

  • workflow-scm-step 2.2

  • workflow-step-api 2.3

  • workflow-support 2.2

CJE 2.7.22.0.1

CJE 2.7.22.0.1 includes Jenkins Core 2.7.22

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.4

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0

  • cloudbees-license 8.1

  • credentials 2.1.4

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • mailer 1.17

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • nectar-rbac 5.9.1

  • operations-center-agent 2.7.0.0.2

  • operations-center-client 2.7.0.0.2

  • operations-center-context 2.7.0.0.2

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.47

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.1

  • bouncycastle-api 1.648.3

  • branch-api 1.10.2

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.5

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.15

  • cloudbees-bitbucket-branch-source 1.2

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5

  • cloudbees-support 3.7

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5

  • cloudbees-workflow-ui 2.0

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.1

  • docker-commons 1.4.0

  • docker-traceability 1.2

  • docker-workflow 1.7

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.45

  • external-monitor-job 1.6

  • git-client 1.19.7

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3

  • github-api 1.76

  • github-branch-source 1.8.1

  • github-organization-folder 1.4

  • github-pull-request-build 1.10

  • github 1.19.1

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.30

  • javadoc 1.4

  • jquery-detached 1.2.1

  • junit 1.18

  • ldap 1.12

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.56

  • momentjs 1.1.1

  • monitoring 1.59.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-cloud 2.7.0.0

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.2

  • pipeline-input-step 2.0

  • pipeline-rest-api 1.6

  • pipeline-stage-step 2.1

  • pipeline-stage-view 1.6

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.2

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.2

  • workflow-api 2.1

  • workflow-basic-steps 2.1

  • workflow-cps-checkpoint 2.2

  • workflow-cps-global-lib 2.1

  • workflow-cps 2.10

  • workflow-durable-task-step 2.4

  • workflow-job 2.4

  • workflow-multibranch 2.8

  • workflow-scm-step 2.2

  • workflow-step-api 2.3

  • workflow-support 2.2

CJE 2.7.21.0.2

CJE 2.7.21.0.2 includes Jenkins Core 2.7.21

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.4

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0

  • cloudbees-license 8.1

  • credentials 2.1.4

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • mailer 1.17

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • nectar-rbac 5.9

  • operations-center-agent 2.7.0.0.2

  • operations-center-client 2.7.0.0.2

  • operations-center-context 2.7.0.0

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.47

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.1

  • bouncycastle-api 1.648.3

  • branch-api 1.10.2

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.5

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.15

  • cloudbees-bitbucket-branch-source 1.2

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5

  • cloudbees-ha 4.7

  • cloudbees-jsync-archiver 5.5

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5

  • cloudbees-support 3.7

  • cloudbees-template 4.26

  • cloudbees-view-creation-filter 1.3

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.5

  • cloudbees-workflow-ui 2.0

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.1

  • docker-commons 1.4.0

  • docker-traceability 1.2

  • docker-workflow 1.7

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.45

  • external-monitor-job 1.6

  • git-client 1.19.7

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3

  • github-api 1.76

  • github-branch-source 1.8.1

  • github-organization-folder 1.4

  • github-pull-request-build 1.10

  • github 1.19.1

  • gradle 1.25

  • handlebars 1.1.1

  • infradna-backup 3.30

  • javadoc 1.4

  • jquery-detached 1.2.1

  • junit 1.18

  • ldap 1.12

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.56

  • momentjs 1.1.1

  • monitoring 1.59.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-cloud 2.7.0.0

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.2

  • pipeline-input-step 2.0

  • pipeline-rest-api 1.6

  • pipeline-stage-step 2.1

  • pipeline-stage-view 1.6

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.2

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • windows-slaves 1.2

  • workflow-aggregator 2.2

  • workflow-api 2.1

  • workflow-basic-steps 2.1

  • workflow-cps-checkpoint 2.2

  • workflow-cps-global-lib 2.1

  • workflow-cps 2.10

  • workflow-durable-task-step 2.4

  • workflow-job 2.4

  • workflow-multibranch 2.8

  • workflow-scm-step 2.2

  • workflow-step-api 2.3

  • workflow-support 2.2

CJE 2.7.21.0.1

CJE 2.7.21.0.1 includes Jenkins Core 2.7.21

The list of plugins is the same as in the previous version, 2.7.20.0.2.

CJE 2.7.20.0.2

CJE 2.7.20.0.2 includes Jenkins Core 2.7.20

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.2

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0 suggested

  • cloudbees-license 8.1

  • credentials 2.1.4

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • mailer 1.17 suggested

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • nectar-rbac 5.9 suggested

  • operations-center-agent 2.7.0.0

  • operations-center-client 2.7.0.0.1

  • operations-center-context 2.7.0.0

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.47

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.1

  • bouncycastle-api 1.648.3

  • branch-api 1.10.2

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.5

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.15

  • cloudbees-bitbucket-branch-source 1.2

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5 suggested

  • cloudbees-ha 4.7 suggested

  • cloudbees-jsync-archiver 5.5 suggested

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5 suggested

  • cloudbees-nodes-plus 1.14 suggested

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5 suggested

  • cloudbees-support 3.7 suggested

  • cloudbees-template 4.26 suggested

  • cloudbees-view-creation-filter 1.3 suggested

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.3 suggested

  • cloudbees-workflow-ui 2.0

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8 suggested

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.1

  • docker-commons 1.4.0

  • docker-traceability 1.2

  • docker-workflow 1.7

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.45 suggested

  • external-monitor-job 1.6

  • git-client 1.19.7 suggested

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3 suggested

  • github-api 1.76

  • github-branch-source 1.8.1 suggested

  • github-organization-folder 1.4 suggested

  • github-pull-request-build 1.10

  • github 1.19.1

  • gradle 1.25 suggested

  • handlebars 1.1.1

  • infradna-backup 3.30 suggested

  • javadoc 1.4

  • jquery-detached 1.2.1

  • junit 1.18

  • ldap 1.12 suggested

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.56

  • momentjs 1.1.1

  • monitoring 1.59.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0 suggested

  • operations-center-cloud 2.7.0.0 suggested

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.2

  • pipeline-input-step 2.0

  • pipeline-rest-api 1.6

  • pipeline-stage-step 2.1

  • pipeline-stage-view 1.6 suggested

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.2 suggested

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12 suggested

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7 suggested

  • windows-slaves 1.2

  • workflow-aggregator 2.2 suggested

  • workflow-api 2.1

  • workflow-basic-steps 2.1

  • workflow-cps-checkpoint 2.2 suggested

  • workflow-cps-global-lib 2.1

  • workflow-cps 2.10

  • workflow-durable-task-step 2.4

  • workflow-job 2.4

  • workflow-multibranch 2.8

  • workflow-scm-step 2.2

  • workflow-step-api 2.3

  • workflow-support 2.2

CJE 2.7.19.0.1

CJE 2.7.19.0.1 includes Jenkins Core 2.7.19

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.1

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0 suggested

  • cloudbees-license 8.1

  • credentials 2.1.4

  • icon-shim 2.0.3

  • jackson2-api 2.7.3

  • mailer 1.17 suggested

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.0

  • nectar-rbac 5.8 suggested

  • operations-center-agent 2.7.0.0

  • operations-center-client 2.7.0.0

  • operations-center-context 2.7.0.0

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • ace-editor 1.1

  • active-directory 1.47

  • ant 1.4

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-cli 1.1

  • azure-publishersettings-credentials 1.1

  • bouncycastle-api 1.648.3

  • branch-api 1.10.2

  • build-timeout 1.17.1

  • build-view-column 0.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-cli 1.5.5

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-aws-deployer 1.15

  • cloudbees-bitbucket-branch-source 1.2

  • cloudbees-consolidated-build-view 1.5

  • cloudbees-even-scheduler 3.7

  • cloudbees-github-pull-requests 1.1

  • cloudbees-groovy-view 1.5 suggested

  • cloudbees-ha 4.7 suggested

  • cloudbees-jsync-archiver 5.5 suggested

  • cloudbees-label-throttling-plugin 3.4

  • cloudbees-long-running-build 1.9

  • cloudbees-monitoring 2.5 suggested

  • cloudbees-nodes-plus 1.14 suggested

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-secure-copy 3.9

  • cloudbees-ssh-slaves 1.5 suggested

  • cloudbees-support 3.7 suggested

  • cloudbees-template 4.26 suggested

  • cloudbees-view-creation-filter 1.3 suggested

  • cloudbees-wasted-minutes-tracker 3.8

  • cloudbees-workflow-aggregator 1.9.1

  • cloudbees-workflow-rest-api 1.9.1

  • cloudbees-workflow-template 2.3 suggested

  • cloudbees-workflow-ui 2.0

  • conditional-buildstep 1.3.5

  • copyartifact 1.38.1

  • credentials-binding 1.8 suggested

  • dashboard-view 2.9.10

  • deployed-on-column 1.7

  • deployer-framework 1.1

  • docker-build-publish 1.3.1

  • docker-commons 1.4.0

  • docker-traceability 1.2

  • docker-workflow 1.7

  • dockerhub-notification 2.2.0

  • durable-task 1.12

  • email-ext 2.45 suggested

  • external-monitor-job 1.6

  • git-client 1.19.7 suggested

  • git-server 1.7

  • git-validated-merge 3.20

  • git 2.5.3 suggested

  • github-api 1.76

  • github-branch-source 1.8.1 suggested

  • github-organization-folder 1.4 suggested

  • github-pull-request-build 1.10

  • github 1.19.1

  • gradle 1.25 suggested

  • handlebars 1.1.1

  • infradna-backup 3.30 suggested

  • javadoc 1.4

  • jquery-detached 1.2.1

  • junit 1.18

  • ldap 1.12 suggested

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mercurial 1.56

  • momentjs 1.1.1

  • monitoring 1.59.0

  • msbuild 1.26

  • mstestrunner 1.3.0

  • nectar-vmware 4.3.5

  • node-iterator-api 1.5.0

  • nodejs 0.2.1

  • openshift-cli 1.3

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-cloud 2.7.0.0 suggested

  • operations-center-openid-cse 1.8.110

  • pam-auth 1.3

  • parameterized-trigger 2.32

  • pipeline-build-step 2.2

  • pipeline-input-step 2.0

  • pipeline-rest-api 1.6

  • pipeline-stage-step 2.1

  • pipeline-stage-view 1.6 suggested

  • plain-credentials 1.2

  • promoted-builds 2.27

  • run-condition 1.0

  • scm-api 1.2 suggested

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12 suggested

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • tfs 4.1.0

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7 suggested

  • windows-slaves 1.2

  • workflow-aggregator 2.2 suggested

  • workflow-api 2.1

  • workflow-basic-steps 2.1

  • workflow-cps-checkpoint 2.2 suggested

  • workflow-cps-global-lib 2.1

  • workflow-cps 2.10

  • workflow-durable-task-step 2.4

  • workflow-job 2.4

  • workflow-multibranch 2.8

  • workflow-scm-step 2.2

  • workflow-step-api 2.3

  • workflow-support 2.2
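
For reference, the following minimal sketch shows one way an administrator might dump the plugin inventory of a running master in the same "name version" format as the manifests above, so it can be diffed against a release list. It is not part of CloudBees Jenkins Enterprise; it relies only on the standard Jenkins REST endpoint /pluginManager/api/json?depth=1, and the base URL and credentials (jenkins.example.com, admin with an API token) are placeholders to be replaced.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    /**
     * Sketch: print "shortName version" for every plugin installed on a
     * Jenkins master, for comparison against a CJE release manifest.
     * The URL and credentials below are placeholders, not real values.
     */
    public class ListPlugins {
        public static void main(String[] args) throws Exception {
            String baseUrl = "https://jenkins.example.com";          // placeholder master URL
            String token = Base64.getEncoder()
                    .encodeToString("admin:apitoken".getBytes());    // placeholder user:API-token

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/pluginManager/api/json?depth=1"))
                    .header("Authorization", "Basic " + token)
                    .build();

            String body = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString())
                    .body();

            // Each entry in the "plugins" array carries "shortName" and
            // "version" fields. A crude regex scan keeps the sketch free of
            // JSON-library dependencies; production code should parse properly.
            Matcher m = Pattern
                    .compile("\"shortName\":\"([^\"]+)\"[^}]*?\"version\":\"([^\"]+)\"")
                    .matcher(body);
            while (m.find()) {
                System.out.println(m.group(1) + " " + m.group(2));
            }
        }
    }

Compile and run with Java 11 or newer; sorting the output and diffing it against one of the lists above highlights any drift from the shipped plugin versions.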


1. This number depends, of course, on the machine’s specifications and configuration.
2. This is very handy in combination with the CloudBees Jenkins Enterprise VMWare Autoscaling plugin.