Administering CloudBees Jenkins Enterprise 2.x


Pipeline

Caution

This guide is an old version of Pipeline for CloudBees Jenkins Enterprise, and is superseded by Pipeline for CloudBees Core.

Please refer to Pipeline for CloudBees Core for updated content.

Jenkins Pipeline is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins as code. CloudBees Jenkins Enterprise includes all the freely available core Pipeline features, augmented with additional features for larger systems.

The Pipeline tutorial is the best place to get started understanding how to write Pipeline scripts. Here we will discuss the CloudBees Jenkins Enterprise additions.

Refer to Controlling Builds and Managing Artifacts for details on plugins related to other Jenkins project types.

Checkpoints

All Pipelines are durable: if Jenkins needs to be restarted (or crashes, or the server reboots) while a flow is running, it resumes at the same point in its Pipeline script after Jenkins restarts. Similarly, if a flow is running a lengthy sh or bat step when an agent unexpectedly disconnects, no progress should be lost when the agent is reconnected (so long as the agent continues running). The step running on the agent continues to execute until it completes, then waits for Jenkins to reconnect the agent.

However, in some cases, a Pipeline will have done a great deal of work and proceeded to a point where a transient error occurred: one which does not reflect the inputs to this build, such as source code changes. For example, after completing a lengthy build and test of a software component, final deployment to a server might fail for a silly reason, such as a DNS error or low disk space. After correcting the problem you might prefer to restart just the last portion of the Pipeline, without needing to redo everything that came before.

The CloudBees Jenkins Enterprise checkpoint step makes this possible. Simply place a checkpoint at a safe point in your script, after performing some work and before doing something that might fail randomly:

node {
    sh './build-and-test'
}
checkpoint 'Completed tests'
node {
    sh './deploy'
}

Whenever build-and-test completes normally, this checkpoint will be recorded as part of the Pipeline, along with any program state at that point, such as local variables. If deploy in this build fails (or just behaved differently than you wanted), you can later go back and restart from this checkpoint in this build. (You can use the Checkpoints link in the sidebar of the original build, or the Retry icon in the stage view, mentioned below.) A new flow build (with a fresh number) will be started which skips over all the steps preceding checkpoint and just runs the remainder of the flow.

Restoring files

Restarted Pipelines retain the program state, such as the values of variables, from the original run, but they do not retain the contents of workspaces on your agents. Subsequent runs of the same Pipeline will overwrite any files used in a particular run.

Always keep the checkpoint step outside of any node block. It should not be associated with either an agent or a workspace. Refer to the example to see a checkpoint correctly placed outside any node block.

Prior to the checkpoint, use the stash step to save important files, such as build products. When this build is restarted from the checkpoint, all of its stashes will be copied into the new build first. You can then use the unstash step to restore some or all of the saved files into your new workspace.
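
For example, extending the earlier script (the stash name and include pattern here are illustrative):

node {
    sh './build-and-test'
    // Save the build products before the checkpoint, so that a build
    // restarted from the checkpoint can recover them.
    stash name: 'build-products', includes: 'target/**'
}
checkpoint 'Completed tests'
node {
    // A restarted build may get a different agent and workspace, so
    // restore the stashed files before relying on them.
    unstash 'build-products'
    sh './deploy'
}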

The workspace used for a specific run of a Pipeline is not guaranteed to be used again after restarting from a checkpoint. If your post-checkpoint steps rely on local files in a workspace, not just the command you run, you will need to consider how to get those files back to their original condition before the checkpoint.

Declarative Pipeline syntax allows an agent at the top level, giving all stages of the Pipeline access to the same workspace and files. However, a top-level agent prevents you from running steps of your Pipeline without an agent context and its associated workspace.

Set the agent directive for each stage so that checkpoint can be used between stages. The checkpoint may also be isolated in its own stage with no agent, or you can use the node step directly in the steps of a stage.

Example using checkpoint in its own stage

pipeline {
  agent none
  stages {
    stage("Build") {
      agent any
      steps {
        echo "Building"
        stash includes: 'path/to/things/*', name: 'my-files'
      }
    }
    stage("Checkpoint") {
      agent none
      steps {
        checkpoint 'Completed Build'
      }
    }
    stage("Deploy") {
      agent any
      steps {
        unstash 'my-files'
        sh 'deploy.sh'
      }
    }
  }
}

Alternative example using node within a stage

pipeline {
  agent none

  stages {
    stage("Build") {
      steps {
        node('') { // this is equivalent to 'agent any'
          echo "Building"
          stash includes: 'path/to/things/*', name: 'my-files'
        }
        checkpoint "Build Done"
      }
    }
    stage("Deploy") {
      agent any
      steps {
        unstash 'my-files'
        sh 'deploy.sh'
      }
    }
  }
}

This Jenkinsfile gives a more complex example of this technique in Scripted Pipeline syntax.

Alternately, you could use other techniques to recover the original files. For example, if prior to the checkpoint you uploaded an artifact to a repository manager, and received an identifier or permalink of some kind which you saved in a local variable, after the checkpoint you can retrieve it using the same identifier.
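
For example (the upload and deployment scripts here are hypothetical):

def artifactUrl
node {
    sh './build-and-test'
    // Hypothetical upload script that prints a permalink for the
    // uploaded artifact; capture it in a local variable.
    artifactUrl = sh(script: './upload-artifact', returnStdout: true).trim()
}
checkpoint 'Artifact uploaded'
node {
    // Local variables such as artifactUrl are part of the program state
    // restored when restarting from this checkpoint.
    sh "./deploy ${artifactUrl}"
}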

Jenkins will not prevent you from restoring a checkpoint inside a node block (currently only a warning is issued), but this is unlikely to be useful because you cannot rely on the workspace being identical after the restart. You will still need to use one of the above methods to restore the original files. Also note that Jenkins will attempt to grab the same agent and workspace as the original build used, which could fail in the case of transient "cloud" agents. By contrast, when the checkpoint is outside node, the post-restart node can specify a label which can match any available agent.

Pipeline Job Templates

CloudBees Jenkins Enterprise supports Pipeline Job Templates, allowing you to capture common job types in a Pipeline Job Template and then to use that template to create instances of that Job type.

As an example, let us create a very simple "Hello print Pipeline template" that simply prints "Hello" to the console. The number of times "Hello" is printed is captured as a template attribute. We will then use that template to create an instance of that Job to print "Hello" 4 times.

Create Pipeline Template

On the Jenkins home page, select "New Item". On the "New Item" form, enter the template name, select "Job Template" as the item type, and press the "OK" button.


Define Pipeline Template

After creating the initial template you’ll need to configure it by:

  1. Specify the template attributes. These will be injected as variables into the Pipeline script.

  2. Select "Groovy template for Pipeline" as the Transformer Type.

  3. Specify the Pipeline script, using the template attributes (defined above) as variables.
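
For our example, the template's Pipeline script might look like the following sketch. It assumes a single attribute named count (a hypothetical name), and that the Groovy template engine substitutes ${count} with the configured value when generating the script:

node {
    // 'count' is a template attribute substituted into the generated
    // Pipeline script by the "Groovy template for Pipeline" transformer.
    for (int i = 0; i < ${count}; i++) {
        echo 'Hello'
    }
}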

Create Pipeline from Template

Now we can create a Build Job from the template:

  1. From the Jenkins home page, select "New Item".

  2. Enter the Job name, e.g. "Hello print Pipeline job".

  3. Select the template (created and configured in Step #1 and #2 above) as the Job type.

  4. Press the "OK" button.


Once the Job instance is created you’ll be able to configure the template attributes (as specified in Step #2 above). In the case of our simple "Hello print Pipeline job" Job, we want to print the "Hello" message 4 times:


After specifying all attributes and pressing the "Save" button, you’ll then be able to run a build of the job and check its status in the Stage View or in the Console, for example.


Pipeline script attributes

If you have defined a template using the Groovy template for Pipeline transformer, you may wish to allow the instance configurer (end user of the template) to write some Pipeline script.

For example, the template might define an overall structure for the job:

  • selecting a node with an approved label, doing a checkout of an approved SCM,

  • running a sh step against a user-defined main script, and using catchError or a try-finally block to run some mandatory reporting.

As another example, the administrator may wish to allow the user to optionally define a set of extra steps to run during the reporting phase.

To support this use case, define an attribute using the Pipeline script text control. This control is identical to the Text area control except that it offers a Pipeline-specific code editor. Suppose you named the attribute script. Then you merely need to include the following fragment at the appropriate point in your overall script:

evaluate(script)
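
Putting this together, the overall template script might look like the following sketch; the label, repository URL, and reporting command are illustrative:

node('approved-label') {
    // Check out an approved SCM (the URL here is hypothetical).
    git 'https://repo.example.com/approved/project.git'
    catchError {
        // Run the user-supplied Pipeline fragment from the 'script' attribute.
        evaluate(script)
    }
    // Mandatory reporting runs even if the user-defined steps failed.
    sh './collect-reports'
}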

Controlling builds

Restarting aborted builds

When running a large installation with many users and many jobs, the CloudBees Jenkins Enterprise High Availability feature helps bring your Jenkins back up quickly after a crash, power outage, or other system failure. If any jobs were waiting in the scheduler queue at the time of the crash, they will be there after a restart.

Note
Legacy builds (Freestyle jobs, Matrix jobs, etc.) running at the time of the crash will not resume. For simple cases, the long-running build type described below can survive such interruptions.

Furthermore, after a hard crash (as opposed to a graceful shutdown), Jenkins normally may lose even the build record.

The Restart Aborted Builds plugin in CloudBees Jenkins Enterprise helps manage exceptional halts like this. First off, it ensures that at least a partial record of every build is saved immediately after it starts and after every configured build step.

  • If the build completes within control of Jenkins — including build failures, manual aborts, and termination due to scheduled shutdown — nothing special is done.

  • If Jenkins is halted suddenly, whether due to a crash or freeze of Jenkins itself or a general system crash, the list of all currently-running builds is recorded for use when Jenkins is next restarted. At that time, all aborted builds are displayed in an administrative page, where they can be inspected or easily restarted.

If your Jenkins instance was abruptly terminated, navigate to Manage Jenkins after the restart. If builds were in progress at the time of the halt, you see a warning at the top of this screen.

Figure 1. Aborted Builds Administrative Warning
  1. Click on the link to see a list of all the running builds known to have been interrupted. You can click on any build to see details about it, such as the changelog or console log up to the break point. If the job was parameterized, the list will display the parameter values for that build as a convenience.

    Figure 2. Aborted Builds List
  2. Click the Restart build button next to an aborted build. A new build of the same job is scheduled including any associated parameters. Restarting the build either with this button or in any other manner will remove the item from the list.

The list of aborted builds is saved in $JENKINS_HOME/abortedBuilds.xml.

Long-running builds

Builds that need to run for an extended period are best implemented with Pipeline and checkpoints. If builds cannot be implemented with Pipeline and need to run for an extended period and continue running even if the master restarts, they can be defined as a "long-running build" type.

To address the needs of people who have legacy builds that are too long to interrupt every time a Jenkins agent is reconnected or Jenkins is restarted for a plugin update, CloudBees Jenkins Enterprise includes a plugin that provides a "long-running project" type. The configuration is almost the same as for a standard free-style project, with one difference: the part of your build that you want to run apart from Jenkins should be configured as a (Unix) shell or (Windows) batch step. Of course this script could in turn run Maven, Make, or other tools.

If the agent is reconnected or Jenkins restarted during this "detached build" phase, your build keeps on running uninterrupted on the agent machine (so long as that machine is not rebooted of course). When Jenkins makes contact with the agent again, it will continue to show log messages where it left off, and let the build continue. After the main phase is done, you can run the usual post-build steps, such as archiving artifacts or recording JUnit-style test results.

Make a new job, select Long-Running Project, and note the Detached Build section. Pick a kind of build step to run (Bourne shell for Unix agents, or batch script for Windows agents) and enter some commands to run in your build’s workspace.

Figure 3. Long-Running Build Configuration
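
For example, the detached step on a Unix agent might be an ordinary shell script along these lines (the commands are illustrative):

# Long-running work that should keep going even if Jenkins is
# restarted or the agent connection is lost.
./configure
make clean
make all
make test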

When the project is built, initially the executor widget will look the same as it would for a freestyle project. During this initial phase, SCM checkouts/updates and similar prebuild steps may be performed. Soon you will see a task in the widget with the (detached) annotation. This means that your main build step is running, and should continue running even if the Jenkins server is halted or loses its connection to the agent. (So long as the connection is open, you should see any new output produced by the detached build step in your build log, with a delay of a few seconds.)

Figure 4. Long-Running Build Execution

The task label will show post steps while any post-build actions are performed, such as archiving artifacts or recording JUnit test results. This phase does not survive a crash: it requires a constant connection from the Jenkins master to the agent.

There are a number of limitations and restrictions on what Jenkins features work in long-running builds. Generally speaking, anything that works in a freestyle project in the pre-build or post-build phase should also work. But general build steps are not available except as part of the non-detached pre-build phase, and build wrappers will generally not work. Also surviving a restart can only work if Jenkins can reconnect to the exact same agent without that machine having rebooted, so this will generally not work on cloud-provisioned agents. Consult the release notes for more information.

Skip next build

The Skip Next Build plugin allows you to skip building a job for a short period of time. While you could achieve something similar by disabling the job from the job configure page, you would need to remember to re-enable the job afterwards.

There are two main use cases for this plugin:

  • If you are going to be taking some external resources that the build requires off-line for maintenance and you don’t want to be annoyed by all the build failure notices.

  • If you are merging a major feature branch and you want to prevent builds until after the merge is completed.

The plugin adds a Skip builds action to all jobs. When a skip has been applied to the job, the icon is yellow and the main job page will look something like The main job screen when a skip has been applied. When no skip has been applied, the icon is green.

Figure 5. The main job screen when a skip has been applied

To apply a skip to a folder / job, click on the Skip builds action. This should display a screen similar to Applying a skip to a folder / job.

Figure 6. Applying a skip to a folder / job

When a skip is applied to a folder, all jobs within the folder will be skipped.

Select the duration of skip to apply and click the Apply skip button. The main job screen should now have a notice that builds are skipped until the specified time (see The main job screen when a skip has been applied for an example).

To remove a skip from a folder / job, click on the Skip builds action. This should display a screen similar to Removing a skip from a job.

Figure 7. Removing a skip from a job

Click on the Remove skip button to remove the skip.

If the skip was not applied directly to the folder / job but is instead either inherited from a parent folder or originating from a skip group, then the screen will look something like this:

Figure 8. Trying to remove a skip from a job where the skip is inherited from a parent folder.

The link(s) in the table of active skips can be used to navigate to the corresponding skip.

Skip groups

Depending on how the jobs in your Jenkins instance have been organized and the reasons for skipping builds, it may be necessary to select a disjoint set of jobs from across the instance for skipping. Skip groups can be used to combine jobs from different folders so that they can be skipped as a single group.

The Jenkins administrator configures skip groups in the global configuration: Jenkins » Manage Jenkins » Configure System » Skip groups

Figure 9. Navigating to the Jenkins » Manage Jenkins » Configure System » Skip groups section using the breadcrumb bar’s context menu.

Each skip group must have a unique name. It can be helpful to provide a description so that users understand why the skip has been applied to their jobs.

Figure 10. Adding a skip group to the Jenkins global configuration

You can have multiple skip groups.

Figure 11. Multiple skip groups can be defined

Once skip groups have been defined, you can configure job and/or folder membership from the job / folder configuration screens.

Figure 12. Configuring a job’s skip group membership
Figure 13. Configuring a folder’s skip group membership

When there is at least one skip group defined in a Jenkins instance, the Jenkins » Skip groups page will be enabled.

Figure 14. A Jenkins instance with Jenkins » Skip groups enabled
Figure 15. The Jenkins » Skip groups page

To manage the skip state of a skip group, you need to navigate to that skip group’s details page.

Figure 16. The details page for a specific skip group

The details page will display the current status of the skip group as well as listing all the items that are directly a member of this skip group.

Note
Where folders are a member of a skip group, the skip group membership will be inherited by all items in the folder.

The Skip Next Build plugin adds two new permissions:

  • Skip: Apply - this permission is required in order to apply a skip to a job. It is implied by the Overall: Administer permission.

  • Skip: Remove - this permission is required in order to remove a skip from a job. It is implied by the Overall: Administer permission.

The Skip Next Build plugin adds a number of Jenkins CLI commands for controlling skips. For the commands that apply a skip, the duration is given as a number of hours; if the value is outside the range 0 to 24 it will be brought to the nearest value within that range.

apply-skip

Enables the skip setting on a job. This command takes two parameters:

  1. The full name of the job

  2. The number of hours the skip should be active for.

apply-folder-skip

Enables the skip setting on a folder. This command takes two parameters:

  1. The full name of the folder

  2. The number of hours the skip should be active for.

skip-group-on

Enables the skip setting on a skip group. This command takes two parameters:

  1. The name of the skip group

  2. The number of hours the skip should be active for.

remove-skip

Removes the currently active skip setting from a job. This command takes only one parameter: the full name of the job.

remove-folder-skip

Removes the currently active skip setting from a folder. This command takes only one parameter: the full name of the folder.

skip-group-off

Removes the currently active skip setting from a skip group. This command takes only one parameter: the name of the skip group.
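
For example, using the Jenkins CLI client (the server URL, job, folder, and skip group names are illustrative):

# Skip builds of a job for the next 8 hours
java -jar jenkins-cli.jar -s https://jenkins.example.com/ apply-skip 'teams/my-job' 8

# Skip every job in a folder for 4 hours
java -jar jenkins-cli.jar -s https://jenkins.example.com/ apply-folder-skip 'teams' 4

# Turn a skip group on for 24 hours, then off again
java -jar jenkins-cli.jar -s https://jenkins.example.com/ skip-group-on 'maintenance-window' 24
java -jar jenkins-cli.jar -s https://jenkins.example.com/ skip-group-off 'maintenance-window'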

Managing artifacts

Fast archiving

The Fast Archiver plugin uses an rsync-inspired algorithm to transfer archives from agents to a master. The result is that builds complete faster and network bandwidth usage is reduced.

After a job is built on an agent, selected build artifacts in the workspace may be copied from the agent to the master so those artifacts are archived. The Fast Archiver plugin takes advantage of the fact that there are usually only incremental changes between build artifacts of consecutive builds. Such incremental changes are detected, and only the changes that need to be sent from the agent to the master are transferred, thus saving network bandwidth. The algorithm is inspired by rsync’s weak rolling checksum algorithm, which can efficiently identify changes.

The changes are gzipped to further reduce network load.

The Fast Archiver plugin is automatically installed and enabled if you have downloaded CloudBees Jenkins Enterprise. This plugin takes over the built-in artifact archiving functionality in Jenkins. If you are already using that feature, no configuration change is needed to take advantage of this feature. This plugin also does not touch the persisted form of job configurations, so you can just uninstall this plugin and all your jobs will fall back to the plain vanilla artifact archiving.

Simply use the regular Archive artifacts option in the Post-build Action section for a build. You can see the result in Fast archiver running.

Figure 17. Fast archiver running

Fast archiving only occurs on builds run on an agent, not those run on the Jenkins master. Also there must have been previous builds which produced artifacts for the new artifacts to be compared to. Otherwise, Jenkins will perform a regular complete artifact transfer. It will do the same if the fast archiver detects that data is corrupted.

Fast archiving is enabled by default when the plugin is first loaded. You can remove this strategy and later re-add it from Manage Jenkins » Configure System in the Artifact Management for Builds section.

Figure 18. Fast archiving configuration


Cloud ready Artifact Manager for AWS

Jenkins has historically provided multiple ways to save build products, otherwise known as artifacts.

Some plugins, such as Artifactory and Nexus Artifact Uploader, permit you to upload artifact files to repository managers, while other plugins, such as Publish Over FTP, Publish Over CIFS and Publish Over SSH, send artifacts to remote shared filesystems. Jenkins itself stores artifact files in the Jenkins home filesystem. In 2012, CloudBees released the Fast Archiver Plugin, which optimizes the default artifact transmission but uses the same storage location.

Unfortunately, a number of these solutions are not cloud-ready, and it is awkward and difficult to use them with CloudBees Core on modern cloud platforms. Some solutions, like S3 publisher, are well suited for use in a cloud environment, but require special build steps within Pipelines.

CloudBees is developing a series of cloud-ready artifact manager plugins. The first of these is Artifact Manager on S3 plugin. This plugin permits you to archive artifacts in an S3 Bucket, where there is less need to be concerned about the disk space used by artifacts.

Easy to configure

To configure Artifact Manager on S3:

  1. Go to Manage Jenkins/Configure System.

  2. In the Artifact Management for Builds section, select the Cloud Provider Amazon S3:

  3. Return to Manage Jenkins/Amazon Web Services Configuration to configure your AWS credentials for access to the S3 Bucket.

  4. For your AWS credentials, use the IAM Profile configured for the Jenkins instance, or configure a regular key/secret AWS credential in Jenkins. Note that your AWS account must have permissions to access the S3 Bucket, and must be able to list, get, put, and delete objects in the S3 Bucket (see the policy sketch after this list).

  5. Save or apply the credentials configuration, and move on to configure your S3 bucket settings.

  6. We recommend validating your configuration. If the validation succeeds, you’ve completed the configuration for Artifact Manager on S3.

  7. For more details about Artifact Manager on S3, see the plugin documentation: Artifact Manager on S3 plugin.
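
As a rough illustration of the permissions mentioned in step 4, an IAM policy granting the required access might look like the following sketch; the bucket name my-jenkins-artifacts is hypothetical:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-jenkins-artifacts"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-jenkins-artifacts/*"
    }
  ]
}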

Uploading and downloading artifacts

The Artifact Manager on S3 plugin is compatible with both Pipeline and FreeStyle jobs. To archive, unarchive, stash or unstash, use the default Pipeline steps.

FreeStyle jobs

For FreeStyle jobs, use a post-build action of Archive the Artifacts to store your Artifacts into the S3 Bucket.


To copy artifacts between projects:

  1. Make sure the Copy Artifact Plugin is installed.

  2. Use a build step to copy artifacts from the other project:


Pipeline jobs

For Pipeline jobs, use an archiveArtifacts step to archive artifacts into the S3 Bucket:

node() {
    //you build stuff
    //...
    stage('Archive') {
        archiveArtifacts "my-artifacts-pattern/*"
    }
}

To retrieve artifacts that were previously saved in the same build, use an unarchive step that retrieves the artifacts from S3 Bucket. Set the mapping parameter to a list of pairs of source-filename and destination-filename:

node() {
    //you build stuff
    //...
    stage('Unarchive') {
        unarchive mapping: ["my-artifacts-pattern/": '.']
    }
}

To save a set of files for use later in the same build (generally on another node/workspace), use a stash step to store those files in the S3 Bucket:

node() {
    //you build stuff
    //...
    stash name: 'stuff', includes: '*'
}

To retrieve files saved with a stash step, use an unstash step, which retrieves previously stashed files from the S3 Bucket and copies them to the local workspace:

node() {
    //you build stuff
    //...
    unstash 'stuff'
}

To copy artifacts between projects:

  1. Make sure the Copy Artifact Plugin is installed.

  2. Use a copyArtifacts step to copy artifacts from the other project:

    node(){
      //you build stuff
      //...
      copyArtifacts(projectName: 'downstream', selector: specific("${built.number}"))
    }

Security

Artifact Manager on S3 manages security using Jenkins permissions. This means that unless users or jobs have permission to read the job in Jenkins, the user or job cannot retrieve the download URL.

Download URLs are temporary URLs linked to the S3 Bucket, with a duration of one hour. Once that hour has expired, you’ll need to request a new temporary URL to download the artifact.

Agents use HTTPS (of the form https://my-bucket.s3.xx-xxxx-x.amazonaws.com/*) and temporary URLs to archive, unarchive, stash, unstash and copy artifacts. Agents do not have access to either the AWS credentials or the whole S3 Bucket, and are limited to get and put operations.

Performance

A major distinction between the Artifact Manager on S3 plugin and other plugins is in the load on the master and the responsiveness of the master-agent network connection. Every upload/download action is executed by the agent, which means that the master spends only the time necessary to generate the temporary URL; the remainder of the work is done by the agent.

The performance tests detailed below compare the CloudBees Fast Archiving Plugin and the Artifact Manager on S3 plugin.

Performance tests were executed in a Jenkins 2.121 environment running on Amazon EC2, with JENKINS_HOME configured on an EBS volume. Three different kinds of tests were executed from the GitHub repository at Performance Test, with samples taken after the tests had been running for one hour:

  • Archive/Unarchive big files: Store a 1GB file and restore it from the Artifact Manager System.

  • Archive/Unarchive small files: Store 100 small files and restore them from the Artifact Manager System. Small files are approximately 10 bytes in size, with 100 files stored and the times averaged.

  • Stash/Unstash on a pipeline: Execute stash and unstash steps. (The Fast Archiver plugin does not provide its own stash/unstash implementation, so the default implementation was measured for it.)

As can be seen from the results, the Artifact Manager on S3 plugin provides a dramatic improvement for big files, reducing average archive time from minutes to seconds, while for small files and stashes the two plugins perform within a second or so of each other.

Artifact Manager on S3 plugin performance

Plugin link: Artifact Manager on S3

Big Files (time in milliseconds)

      Archive      Unarchive
Max   48,578.00    29,899.00
Min   17,773.00    20,388.00
Avg   20,969.49    22,670.67
Small Files (time in milliseconds)

      Archive     Unarchive
Max   2,974.00    805.00
Min   752.00      104.00
Avg   1,171.65    200.76
Stash (time in milliseconds)

      Archive      Unarchive
Max   14,902.00    9,477.00
Min   1,256.00     709.00
Avg   1,977.49     1,588.96

CloudBees Fast Archiving Plugin performance

Big Files (time in milliseconds)

      Archive       Unarchive
Max   358,988.00    105,615.00
Min   110,068.00    93,193.00
Avg   277,642.22    95,771.78
Small Files (time in milliseconds)

      Archive     Unarchive
Max   1,603.00    109.00
Min   491.00      10.00
Avg   953.26      22.55
Stash (time in milliseconds)

      Archive     Unarchive
Max   1,914.00    3,050.00
Min   561.00      267.00
Avg   1,075.53    976.28