Table of Contents

Introduction

CloudBees Jenkins Operations Center is an operations console for Jenkins that allows you to manage the multiple Jenkins masters within your organization.

The operations console is an application built on top of the Jenkins framework, so its look and feel will be familiar to anyone who has used Jenkins.

The key features provided by CloudBees Jenkins Operations Center are:

  • Consolidated navigation experience across all the client masters

  • Shared build agent resources that can be used by any client master.

  • Control of authentication and authorization schemes used by client masters. This enables features such as single sign-on and consolidated permission schemes.

  • Management of update centers used by client masters.

  • Consolidated management of CloudBees Jenkins Enterprise licenses.

  • Management and enforcement of certain key security settings on client masters.

Terms and Definitions

Jenkins (also referenced as Jenkins OSS in CloudBees documentation)

Jenkins is an open-source automation server. Jenkins is an independent open-source community, to which CloudBees actively contributes. You can find more information about Jenkins OSS and CloudBees contributions on the CloudBees site.

CJE

CloudBees Jenkins Enterprise - Commercial version of Jenkins based on Jenkins OSS Long-Term Support (LTS) releases with frequent patches by CloudBees. CJE also provides a number of plugins that help organizations address main needs of enterprise installations: Security, High Availability, Continuous Delivery, etc.

CJOC

CloudBees Jenkins Operations Center - operations console for Jenkins that allows you to manage the multiple Jenkins masters within your organization. This document provides a detailed description of CJOC.

CJA

CloudBees Jenkins Analytics - provides insight into your usage of Jenkins and the health of your Jenkins cluster by reporting events and metrics to CloudBees Jenkins Operations Center. See "CloudBees Jenkins Analytics" section for more info.

Concepts

Traditionally the Jenkins UI has concentrated on build jobs as the primary top level items within Jenkins. A number of CloudBees plugins have extended this model somewhat: for example, the Folders plugin introduced folders as a container top level item, and the CloudBees update center plugin introduced hosted Jenkins update centers as a top level item.

CloudBees Jenkins Operations Center (CJOC) introduces some additional top level items. This section details the top level items and other concepts that are used in CJOC.

Operations center server

The operations center is a special type of Jenkins instance that acts as the central authority in a CloudBees Jenkins Operations Center cluster.

Client master

Ordinary Jenkins masters that have been joined to the Operations center cluster are called client masters.

Shared agent

Shared agents (formerly slaves) are special resources created on the Operations center server that can be leased out to client masters on demand to provide build resources.

Shared cloud

In addition to shared agents, cloud providers can be used to provision temporary shared agents when demand exceeds the capacity of the statically defined shared agents.

Folders

The folders plugin provides a key top level item used for scoping the availability of resources. For example, credentials/shared agents/shared clouds/etc defined within a folder will only be available to items within the folder or contained sub-folders.

Sub-licensing

Each stand-alone CloudBees Jenkins Enterprise master must have a valid license in order to use the CloudBees Jenkins Enterprise features. A license for CJOC includes the capability to generate sub-licenses for the client masters that form part of the CJOC cluster.

System Requirements

CloudBees Jenkins Operations Center (CJOC) does not have any specific system requirements above and beyond those of Jenkins itself.

Since CJOC is based on Jenkins, it runs on all platforms that Jenkins runs on, which includes all known operating systems with a Java runtime. (Java 7 is the minimum requirement; Java 8 is recommended.)

CJOC is available as a standalone war file or as platform specific packages. In addition to the multi-platform war, CJOC is supported on the following platforms:

  • Debian

  • Red Hat Enterprise Linux

  • Ubuntu

  • CentOS

  • Fedora

  • OpenSuse

  • Windows

There are no universally applicable OS-level requirements for CJOC. Some customers find that they need to increase the per-process open files limit on Linux (ulimit -n); or increase the maximum size of the Java heap or permanent generation (refer to JVM vendor tuning guides for specifics).
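For example, on Linux the open files limit can be raised and checked as follows; the service user name and the numeric limits below are illustrative assumptions, not CloudBees-mandated values:

```shell
# Illustrative /etc/security/limits.conf entries raising the open files
# limit for an assumed "jenkins-oc" service user:
#   jenkins-oc  soft  nofile  8192
#   jenkins-oc  hard  nofile  8192

# Check the effective per-process open files limit for the current shell
ulimit -n
```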

There are however a number of key factors that should be kept in mind when deploying CloudBees Jenkins Operations Center.

As CJOC is responsible for managing the shared build resources across client masters and potentially responsible for single sign-on, it is recommended that:

  • CJOC should be configured for high availability and fast start-up

  • no build jobs should be run directly on CJOC, with the exception of backup jobs and jobs designed to audit the cluster of masters, which of necessity need to run on CJOC itself.

  • CJOC will need to have a single URL that is resolvable by all client masters and all users of the CJOC server and the client masters. This URL will be used both by client masters to connect to the CJOC server and by users navigating the cluster as well as when using the Single Sign-On functionality.

The above recommendations have the following consequences:

  • At least two machines should be used to host the CJOC instance. These can be either virtual machines or physical machines but it is recommended for optimum availability that the machines be located on separate physical hardware.

  • A shared file system is required between the machines hosting CJOC.

    There are multiple ways to achieve such a shared file system. CloudBees does not specify any specific mechanism.

  • A switchable routing/proxying mechanism to route requests to the active CJOC instance.

    There are multiple ways to achieve such a routing/proxying, e.g.

    • haproxy can be used to route requests;

      ha deployment via proxy
      Figure 1. HA using proxy
    • ARP broadcast can be used to have a shared IP address be taken over by the active node where both nodes are on the same network segment;

      ha deployment via arp
      Figure 2. HA using ARP broadcast
    • etc.

  • At least 1GB of heap should be allocated to the JVM running operations center.

    Medium and large installations may require larger heap sizes, however for less than 5 client masters and 25 shared agents a 1GB minimum should be sufficient.

    It is recommended that CloudBees Jenkins Operations Center be the only application deployed to the servlet container instance, however if CJOC is being deployed to a shared servlet container, the customer is responsible for sizing the servlet container’s JVM heap accordingly.
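As a sketch, the minimum heap could be set via the JVM options passed to the CJOC process. The variable name below is an assumption based on standard Jenkins packaging; check the init script on your platform:

```shell
# In /etc/sysconfig/jenkins-oc or /etc/default/jenkins-oc (assumed name):
JENKINS_JAVA_OPTIONS="-Xmx1g"

# Or, when launching the war directly:
# java -Xmx1g -jar jenkins-oc.war
```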

Client master requirements

The following requirements apply to the client masters that will form part of the CloudBees Jenkins Operations Center cluster:

  • Client masters must be Jenkins 1.609 or newer. The client master must be CJE-licensed and must be a version of Jenkins that is supported by CloudBees.

  • Client masters must be running Java 7 or newer. (Oracle Java 8 recommended)

  • Plugins that provide authorization strategies, security realms, agent launchers, or credentials and are installed on the CJOC master must also be installed on all client masters. For this class of plugin client masters must be running either the same version as installed on the CJOC master or a newer version. Some examples of this class of plugin are:

    • Active Directory plugin;

    • LDAP plugin;

    • Mock security realm plugin;

    • Credentials plugin;

    • SSH Agents plugin;

    • SSH Credentials plugin;

  • Client masters will be required to install a minimum set of plugins:

    • CloudBees License Manager;

    • CloudBees Jenkins Enterprise License Entitlement Check;

    • Operations Center Agent;

    • Operations Center Context;

    • Operations Center Client;

    • Operations Center OpenID Cluster Session Extension;

    • CloudBees Role-Based Access Control Plugin;

    • openid;

    • openid4java

  • It is strongly recommended that client masters have the following plugins installed:

    • CloudBees License Manager;

    • CloudBees Jenkins Enterprise License Entitlement Check;

    • Operations Center Agent;

    • Operations Center Context;

    • Operations Center Client;

    • Operations Center OpenID Cluster Session Extension;

    • openid;

    • openid4java;

    • Operations Center Cloud.

    • Operations Center Analytics Reporter.

  • Each client master must be able to initiate a Jenkins remoting connection to the CJOC server, and thus must be able to establish TCP/IP connections to the CJOC server(s).

  • Client masters can be configured as HA, non-HA or a mix. The choice as to whether a specific client master should be deployed using the HA features should be determined by the availability requirements of that specific client master.

    Note
    Where a client master is configured as HA, it should use a separate shared file system / JENKINS_HOME from CJOC.

CloudBees Analytics

CloudBees Analytics requires an Elasticsearch instance, which has two modes of operation: embedded or remote. For production environments, you should run an Elasticsearch cluster on more than one host. An Elasticsearch cluster should have an odd number of nodes in order to ensure that partition elections can have an unambiguous winner. Note that these nodes can be on the same host as Jenkins, but would ideally use some sort of growable data volume, much like the Jenkins home.

Note

Additional heterogenous nodes can be added to the Elasticsearch cluster as data and usage grow over time.

For disk space, you should allow 700 MB for each master, plus about 2 KB per build. So a master running 500 builds per day would require about 2.2 GB to store three years of data (if a year’s worth of existing builds were reindexed: 2 KB × 500 × 365 × 4). See Analytics Data Retention for more information. Using SSDs for the Elasticsearch partition is recommended. Two different Elasticsearch instances should not concurrently share the same Elasticsearch data directory.
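The sizing arithmetic above can be sketched as a quick shell calculation, using the numbers from the example (500 builds per day, 2 KB per build, four years of indexed data including the reindexed backlog):

```shell
builds_per_day=500
kb_per_build=2
years=4            # three years retained plus one year of reindexed builds
base_mb=700        # fixed per-master overhead

build_mb=$(( builds_per_day * kb_per_build * 365 * years / 1024 ))
total_mb=$(( base_mb + build_mb ))
echo "${total_mb} MB"   # prints "2125 MB", roughly the ~2.2 GB estimate
```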

For production environments, at least 16 GB of system RAM is recommended. Generally, half of this should be dedicated to Elasticsearch and the other half left for the operating system for use as the filesystem buffer cache.

Elasticsearch is generally not CPU heavy, but if you have a choice, prefer more cores over higher CPU speeds.

See the Elasticsearch system requirements for more information.

Sample Architecture

The following diagram illustrates one potential configuration of a deployment:

cluster deployment
Figure 3. General cluster architecture
  • The CJOC server has been configured as HA using a dedicated shared file system and a proxy to route incoming requests to the active node.

  • There is one client master that has also been configured as HA using an independent shared file system and a proxy to route incoming requests to the active node.

  • There is a second client master that does not have the need for HA.

Notes:

  • The shared file systems for the operations-center servers and the ha-master servers may be using the same physical storage devices, however they must be independent JENKINS_HOME directories and one cannot be a sub-folder of the other.

  • The proxy nodes could be the same physical device with just different URIs or host names being routed accordingly.

  • A single proxy could be configured to map both Jenkins masters and the CJOC server to sub-URIs of the same host name.

    For example, the single proxy could expose the CJOC as http://jenkins.acme.com/operations-center/, and the other Jenkins masters as sibling urls such as http://jenkins.acme.com/ha-master and http://jenkins.acme.com/non-ha-master

    example deployment
    Figure 4. Example deployment

    If haproxy is being used for the proxy, this would probably result in a configuration similar to the following. Note the configuration below is enabled for verbose logging to aid initial configuration - tuning should take place before production usage.

    global
        chroot              /var/lib/haproxy
        maxconn             4096
        user                haproxy
        group               haproxy
        stats               socket /var/lib/haproxy/stats
        log                 127.0.0.1 local0
    
    defaults
        log                 global
        # The following log settings are useful for debugging
        # Tune these for production use
        option              logasap
        option              http-server-close
        option              redispatch
        option              abortonclose
        option              log-health-checks
        retries             3
        mode                tcp
        timeout             http-request    10s
        timeout             queue           1m
        timeout             connect         10s
        timeout             client          1m
        timeout             server          1m
        timeout             http-keep-alive 10s
        timeout             check           500
        default-server      inter 5s downinter 500 rise 1 fall 1
    
    # define the backend for redirecting any "lost" users to the
    # default Jenkins instance
    backend www-default
        mode             http
        redirect         prefix /operations-center
    
    # define the backend for the operations-center instances
    backend www-operations-center
        mode             http
        balance          roundrobin
        # context path for app is /operations-center
        option           httpchk HEAD /operations-center/ha/health-check
        server           operations-center-a joc-s-a.acme.com:8080 check
        server           operations-center-b joc-s-b.acme.com:8080 check
    
    # define the backend for JNLP connections to the active
    # operations-center instance
    backend jnlp-operations-center
        timeout          server 15m
        # Jenkins by default runs a ping every 10 minutes and waits 4
        # minutes for a timeout before killing the connection, thus we
        # need to keep these TCP raw sockets open for at least that
        # long.
        option           tcplog
        # tcplog adds additional information for debugging disconnects to
        # the logs. It can be removed for production use
        option           httpchk HEAD /operations-center/ha/health-check
        # JNLP for instance is configured to the fixed port 19000
        server           operations-center-a joc-s-a.acme.com:19000 check port 8080
        server           operations-center-b joc-s-b.acme.com:19000 check port 8080
    
    # define the backend for the ha-master instances
    backend www-ha-master
        mode             http
        balance          roundrobin
        option           httpchk HEAD /ha-master/ha/health-check
        server           ha-master-a je-ha-a.acme.com:8080 check
        server           ha-master-b je-ha-b.acme.com:8080 check
    
    # define the backend for JNLP connections to the active
    # ha-master instance
    backend jnlp-ha-master
        option           tcplog
        # tcplog can be removed for production use
        timeout          server 15m
        option           httpchk HEAD /ha-master/ha/health-check
        # JNLP for instance is configured to the fixed port 19001,
        # this must be different from that used by any other instance
        # as connection will be from the same host and raw TCP socket
        # forwarding can only differentiate on port.
        server           ha-master-a je-ha-a.acme.com:19001 check port 8080
        server           ha-master-b je-ha-b.acme.com:19001 check port 8080
    
    # define the backend for ssh connections to the active
    # ha-master instance
    backend ssh-ha-master
        option           tcplog
        # tcplog can be removed for production use
        option           httpchk HEAD /ha-master/ha/health-check
        # ssh for instance is configured to the fixed port 19003,
        # this must be different from that used by any other instance
        # as connection will be from the same host and raw TCP socket
        # forwarding can only differentiate on port.
        server           ha-master-a je-ha-a.acme.com:19003 check port 8080
        server           ha-master-b je-ha-b.acme.com:19003 check port 8080
    
    # define the backend for the non-ha-master instance
    backend www-non-ha-master
        # context path for app is /non-ha-master
        mode             http
        server           non-ha-master je-nha.acme.com:8080
    
    # define the backend for JNLP connections to the
    # non-ha-master instance
    backend jnlp-non-ha-master
        option           tcplog
        # tcplog can be removed for production use
        timeout          server 15m
        # JNLP for instance is configured to the fixed port 19002
        server           non-ha-master je-nha.acme.com:19002
    
    # define the backend for ssh connections to the
    # non-ha-master instance
    backend ssh-non-ha-master
        option           tcplog
        # tcplog can be removed for production use
        # ssh for instance is configured to the fixed port 19004
        server           non-ha-master je-nha.acme.com:19004
    
    # define the front-end for http://jenkins.acme.com
    frontend  jenkins-cluster-http
        mode             http
        bind             jenkins.acme.com:80
        reqadd           X-Forwarded-Proto:\ http
        option           forwardfor except 127.0.0.0/8
        option           httplog
        # httplog adds additional details to the log lines
        acl              is_oc path_beg /operations-center
        acl              is_ha path_beg /ha-master
        acl              is_nh path_beg /non-ha-master
        acl              is_sts path_beg /stats
        use_backend      www-operations-center if is_oc
        use_backend      www-ha-master if is_ha
        use_backend      www-non-ha-master if is_nh
        default_backend  www-default
    
    # define the front-end for https://jenkins.acme.com
    frontend jenkins-cluster-https
        mode             http
        bind             jenkins.acme.com:443 ssl crt /etc/ssl/certs/jenkins.acme.com.pem
        reqadd           X-Forwarded-Proto:\ https
        option           forwardfor except 127.0.0.0/8
        option           httplog
        acl              is_oc path_beg /operations-center
        acl              is_ha path_beg /ha-master
        acl              is_nh path_beg /non-ha-master
        acl              is_sts path_beg /stats
        use_backend      www-operations-center if is_oc
        use_backend      www-ha-master if is_ha
        use_backend      www-non-ha-master if is_nh
        default_backend  www-default
    
    # define the front-end for JNLP connections to the active
    # operations-center instance
    frontend operations-center-jnlp
        bind             jenkins.acme.com:19000
        option           tcplog
        # tcplog can be removed for production use
        timeout          client 15m
        use_backend      jnlp-operations-center
    
    # define the front-end for JNLP connections to the active
    # ha-master instance
    frontend ha-master-jnlp
        bind             jenkins.acme.com:19001
        option           tcplog
        # tcplog can be removed for production use
        timeout          client 15m
        use_backend      jnlp-ha-master
    
    # define the front-end for JNLP connections to the
    # non-ha-master instance
    frontend non-ha-master-jnlp
        bind             jenkins.acme.com:19002
        option           tcplog
        # tcplog can be removed for production use
        timeout          client 15m
        use_backend      jnlp-non-ha-master
    
    
    # define the front-end for ssh connections to the active
    # ha-master instance
    frontend ha-master-ssh
        bind             jenkins.acme.com:19003
        option           tcplog
        # tcplog can be removed for production use
        use_backend      ssh-ha-master
    
    # define the front-end for ssh connections to the
    # non-ha-master instance
    frontend non-ha-master-ssh
        bind             jenkins.acme.com:19004
        option           tcplog
        # tcplog can be removed for production use
        use_backend      ssh-non-ha-master

    In the above example, as well as the http/https services being proxied, the following services have port forwarding to active nodes:

    • jnlp for agent communication

    • ssh for access to the internal git repository

    This type of port forwarding allows for the maximum connectivity to the Jenkins instances in the cluster while putting additional load on the machine running haproxy. An alternative to the proxying of TCP sockets is to use the system property hudson.TcpSlaveAgentListener.hostName to ensure that each Jenkins instance presents the correct X-Jenkins-CLI-Host header in requests. Either method has implications as to the mutual visibility of the agent nodes and the Jenkins nodes.
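As a sketch of that alternative, the system property can be passed on each instance's launch command line; the host name below is an illustrative example:

```shell
# Advertise the proxy's host name for JNLP/CLI connections rather than
# the machine's own name (jenkins.acme.com is an example value)
java -Dhudson.TcpSlaveAgentListener.hostName=jenkins.acme.com -jar jenkins-oc.war
```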

Installing and Setting up CloudBees Jenkins Operations Center

Prerequisites

The following are prerequisites for installing CloudBees Jenkins Operations Center (CJOC):

  • a configured operating system

  • an installed Java runtime

  • a shared filesystem (if, as recommended, setting up an HA cluster for CloudBees Jenkins Operations Center)

  • a configured Servlet container in which to run CJOC (if not using the built-in Servlet container)

Installation Instructions

Installing on RedHat/CentOS/Fedora

To install the system on RedHat, CentOS, or Fedora, do the following:

  1. First, add the key to your system:

    sudo rpm --import http://downloads.cloudbees.com/cjoc/rolling/rpm/cloudbees.com.key
    Note

    The binaries produced after January 2016 are signed with a different key from those produced prior to this date. If you imported the key prior to this date, it will need to be re-imported.

  2. Next, add the repository:

    $ sudo wget -O /etc/yum.repos.d/jenkins-oc.repo http://downloads.cloudbees.com/cjoc/rolling/rpm/jenkins-oc.repo
  3. Finally, install CloudBees Jenkins Operations Center:

    sudo yum update
    sudo yum install jenkins-oc
    Note
    If configuring for HA, change JENKINS_HOME in /etc/sysconfig/jenkins-oc to point to the shared file system location of the JENKINS_HOME of the CJOC server.
  4. Start the service

    sudo service jenkins-oc start

Installing on Debian/Ubuntu

  1. First, add the keys to your system:

    wget -q -O - http://downloads.cloudbees.com/cjoc/rolling/debian/cloudbees.com.key | sudo apt-key add -
  2. Next, add the repository:

    echo deb http://downloads.cloudbees.com/cjoc/rolling/debian binary/ | sudo tee /etc/apt/sources.list.d/jenkins-oc.list
  3. Install CloudBees Jenkins Operations Center:

    sudo apt-get update
    sudo apt-get install jenkins-oc
    Note
    If configuring for HA, change JENKINS_HOME in /etc/default/jenkins-oc to point to the shared file system location of the JENKINS_HOME of the CJOC server.
  4. Finally, start the service

    sudo service jenkins-oc start

Installing on Microsoft Windows

To install the system on Microsoft Windows, download the installer zip, extract it, and execute the setup program.

Note

CloudBees Jenkins Operations Center requires a 64-bit OS and a compatible .NET Framework (2.0, 3.5.1, or 4.0) in order to be installed as a service. Most Windows versions install a compatible framework by default; however, Windows 7 includes the .NET Framework 3.5.1 but does not install it by default, so it should be installed before running the CloudBees Jenkins Operations Center installer.

Once the installer is complete, open http://localhost:8888/ to access the installed instance. This installer installs CloudBees Jenkins Operations Center as a Windows service that starts automatically when your machine boots up.

Installing on OpenSUSE

To install the system on openSUSE, run the following command to register the repository:

sudo zypper addrepo http://downloads.cloudbees.com/cjoc/rolling/opensuse/ jenkins-oc

With that set up, running sudo zypper install jenkins-oc will install CJOC.

Installing for Custom Instances

To install for a custom instance, do the following:

The jenkins-oc.war file is an executable WAR file just like a regular jenkins.war and is launched using its built-in Servlet container using the java command. For example:

java -jar jenkins-oc.war

Alternatively you can deploy the WAR file to your own servlet container; e.g. to install CJOC on Tomcat, simply copy jenkins-oc.war to $CATALINA_HOME/webapps and then access it on the context path /jenkins-oc

  • If you are running Tomcat just to host CJOC, then remove everything from $CATALINA_HOME/webapps and place jenkins-oc.war as ROOT.war.

  • If you are deploying CJOC on Apache Tomcat then you need to set the system property org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH to true so encoded slashes are allowed.

  • You cannot host CJOC and Jenkins in the same JVM.

  • By default, Jenkins will set the JENKINS_HOME to the $SERVLET_CONTAINER_USER/.jenkins/ folder. If the servlet container installation does not include write permissions to this folder for this user (sometimes done for security), you either need to grant appropriate permissions or override this setting by adding the "-DJENKINS_HOME=$MY_JENKINSPATH" argument in your servlet container startup (see documentation for that container).
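Putting the Tomcat-specific settings above together, a minimal setenv.sh sketch might look like this; the JENKINS_HOME path is an illustrative assumption, not a required location:

```shell
# $CATALINA_HOME/bin/setenv.sh - read by catalina.sh at startup.
# /var/lib/jenkins-oc is an example path; use a directory writable
# by the servlet container user.
export CATALINA_OPTS="-DJENKINS_HOME=/var/lib/jenkins-oc -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true"
```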

Upgrade strategies

In general, it is recommended that the client side plugins and CJOC plugins are all kept in sync on the same release line. That is, if CJOC is running the 2.19.x line of plugins then client masters should also be running the 2.19.x release line.

In order to support rolling upgrades, we endeavor to ensure that CJOC can operate with client masters that are using the predecessor release lines. For example, CJOC 2.19.x can support client masters running the 2.7.x release line of plugins as well as the 1.8.x release line. However, outside of a rolling upgrade of your cluster, we always recommend that your client masters are using the same release line as their CJOC.

There are two major modes of upgrade for a CJOC cluster:

  1. Fully Off-line - this is typically best where there is a need to jump multiple release lines to the latest release for both CJOC and the client masters.

  2. Rolling upgrade - this is typically appropriate when moving from one release line to the next.

Fully Off-line upgrade

In a fully off-line upgrade you will typically be upgrading both CJOC and all connected CJE masters to the latest release line.

Important

This should be regarded as a major change to your cluster and appropriate backup and disaster recovery measures should be taken beforehand.

  1. Stop CJOC and all client masters

  2. Remove all .pinned files from the JENKINS_HOME for CJOC and repeat this process for all client masters. (only needed if upgrading from 1.x line to 2.x)

  3. Replace the old CJOC .war file with the new CJOC .war file.

  4. Replace the old CJE .war files with the new CJE .war file.

  5. Start CJOC

  6. Validate CJOC is operating correctly.

  7. Start CJE masters

  8. Validate CJE masters are operating correctly.
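Step 2 above (clearing pinned-plugin markers when moving from the 1.x line to 2.x) can be scripted, for example:

```shell
# Remove .pinned marker files from a JENKINS_HOME before a 1.x -> 2.x
# upgrade. The default path below is an example; run this against the
# CJOC home and each client master home in turn.
JENKINS_HOME=${JENKINS_HOME:-/var/lib/jenkins-oc}
find "$JENKINS_HOME/plugins" -maxdepth 1 -name '*.pinned' -print -delete
```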

Rolling upgrade

In order to plan the upgrade path for a rolling upgrade we need to examine the requirements of the various versions of CJOC and their client masters.

Each release of CJOC has its own Jenkins version requirements.

JVM compatibility

Both Java 7 and 8 are supported on 1.x and 2.x lines.

Table 1. CJOC Jenkins version requirements

  CJOC release   CJOC Jenkins version   Client master versions
  1.8            1.625                  1.609 - Latest LTS as of Nov 2016
  2.7            2.7.x                  1.642.x - 2.7.x
  2.19           2.19.x                 1.642.x - 2.19.x

Each CJOC release tries to maintain the following compatibility contract:

  • The latest server plugins should work with any client plugins provided that they agree on the major and minor version numbers.

  • The latest server plugins should work with the latest released client plugins for the previous release of CJOC

In general, CJOC supports client masters of an equal or lower version number that have not reached end-of-life support.

Taken together, these requirements determine the version compatibility of the different combinations, as illustrated by the following example.

Example 1. Sample rolling upgrade path from CJOC 1.8 to 2.19

Initial state:

  • CJOC: 1.8 running Java 7

  • Client A: CJE 15.11 (i.e. 1.625) running Java 7 with oc-context 1.8.0 and oc-client 1.8.0

  • Client B: CJE 15.11 (i.e. 1.642) running Java 7 with oc-context 1.8.100 and oc-client 1.8.100

One possible rolling upgrade path is:

  1. Update all Operations Center plugins in both client masters to latest available releases offered by the CloudBees Update Center.

  2. Upgrade CJOC to 2.19.x (all plugins will be set to correct versions by the CloudBees Assurance Program plugin).

  3. Start CJOC and do a sanity check.

  4. Both masters should work properly with this combination.

  5. Upgrade masters to 2.19.x by replacing the war file and restart (all plugins will be set to correct versions by the CloudBees Assurance Program plugin).

Version specific upgrade notes

Upgrading CloudBees Jenkins Operations Center from 1.6 to a higher version

When running CloudBees Jenkins Operations Center 1.6 or higher, upgrading is just a matter of upgrading the Operations Center Server war file (either by manually replacing it or from the Manage Jenkins page).

Any bundled plugin (those which were packaged in the CloudBees Jenkins Operations Center WAR) will be replaced by a new bundled version, assuming you did not explicitly update it originally. If you did update a bundled plugin previously, it will be shown as pinned in the Plugin Manager interface; pinned plugins will not be updated automatically, even if the newly bundled version is newer than what you have installed, so you should consider updating them after the core upgrade. Click Check now in the Advanced tab, since the update center definition varies according to the Jenkins core version. Check the plugin release notes (normally hyperlinked from the plugin name in the Updates tab) to see if you want to accept updates; in some cases a plugin update will be mandatory for a core upgrade, i.e. the older version of the plugin was found to be incompatible with the new core.

Please note that when upgrading your CloudBees Jenkins Platform, a double restart of Jenkins may be required (Jenkins will notify you when this is the case).

double restart banner
Figure 5. Double Restart Banner

Upgrading from 1.1

Upgrading from 1.1 is a multi-step process that must be completed in the correct order. The following sections outline each of those steps in sequence.

Upgrading CJOC Server Plugins

Due to security updates in the CJOC remoting protocol, you cannot upgrade the CJOC Server directly from version 1.1 to 1.6. The first step in the process is to upgrade the following plugins in the existing Operations Center Server 1.1 installation.

Table 2. Pre-upgrade CJOC Server plugin requirements

  Plugin                                Version
  Operations Center Server Plugin       1.5.7
  Operations Center Monitoring Plugin   1.0.1

Once the existing Operations Center Server 1.1 plugins have been upgraded and the server restarted, it is okay to proceed with Client Master upgrades in a rolling fashion, as outlined in the next section.

Upgrading Operations Center Client Master Plugins

Due to a number of specific changes in the Operations Center Context plugin that are necessary to implement the cluster-wide session logout functionality, it is strongly recommended that all client masters be upgraded to the following minimum plugin versions:

Table 3. Pre-upgrade client master plugin requirements

Plugin                                        Minimum version   Required
OpenID4Java API                               0.9.8.0           Yes
openid                                        2.1               Yes
Operations Center Agent                       1.6               Yes
Operations Center Client Plugin               1.6               Yes
Operations Center Cloud                       1.6               Yes
Operations Center Cluster Session Extension   1.6               Yes
Operations Center Context                     1.6               Yes

Note
In order to use the analytics functionality introduced in CJOC 1.6, the client masters also need to have the Operations Center Analytics Reporter plugin installed.
Upgrading Operations Center Server

With the Client Master plugins upgraded to their 1.6 versions, it is now okay to upgrade the CJOC Server from version 1.1 to 1.6.

Upgrading from 1.0

You must complete an upgrade to CJOC 1.1 as detailed in https://go.cloudbees.com/docs/cloudbees-documentation/cjoc-user-guide/setup.html#setup__upgrading_from_1_1 before upgrading to 1.6.

There is no direct upgrade path from CJOC 1.0 to CJOC 1.6.

Special Considerations for High Availability

If the CJOC server is being configured for high availability (where at least one hot standby instance is ready to take over in the event of the active instance failing), it is necessary to ensure that client masters can resolve connections to the active instance.

For further configuration tips, please see the HA configuration guide.

Note

For high availability requirements, CloudBees Analytics should be configured with two or more remote Elasticsearch instances. This is covered in Remote Elasticsearch Configuration.

Licensing

Once you have installed CJOC Server and started the server process, proceed to access the instance through a web browser.

Note
If you are deploying CJOC Server in an HA deployment (as recommended), access the server through the hostname that you want users to use.

When initially accessed, the web browser will be redirected to the registration screen.

registration screen
Figure 6. Registration Screen

There are two options available:

  • Request a trial license: this option requires that the operations center server can establish a connection to the CloudBees license server (http://licenses.cloudbees.com) and will obtain a time-limited evaluation license.

    registration evaluation
    Figure 7. Requesting a trial license
    Note
    If you need to configure proxy details in order for the CloudBees Jenkins Operations Center server to connect to the CloudBees license server, you will be prompted to supply those details after the initial unproxied request times out.
  • Use a license key: this option is used when you have purchased a license for CJOC, or when evaluating CJOC and the server is prevented by network firewalls from connecting to the CloudBees license server.

    registration manual
    Figure 8. Off-line registration

Select the option that is appropriate for your installation and follow the instructions.

When you complete registration you should see a screen like the following:

registration completed
Figure 9. Registration completed

Configuring the Initial Instance

There are a number of critical configuration options that must be set before adding items to the CJOC cluster:

  • Configure the Jenkins Location.

    The Jenkins Location is used to inform client masters of the hostname and URL from which to start establishing the remoting channel connection. Client masters will initiate a GET request against the URL and look for the response headers X-Jenkins-CLI-Port, X-Jenkins-CLI2-Port and X-Jenkins-CLI-Host before proceeding to establish a direct connection on the specified port (and host, if X-Jenkins-CLI-Host is present).

    Go to the main configuration screen of CJOC, locate the Jenkins Location section and configure the correct URL.

    configure location
    Figure 10. Jenkins Location section
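A minimal sketch of how a client master interprets these headers, runnable from the command line (the header values here are illustrative, matching the examples later in this guide; in practice they come from `curl -I -L` against the Jenkins Location URL):

```shell
# Simulated response headers from a GET against the Jenkins Location URL.
# In practice, obtain them with:  curl -I -L http://jenkins.acme.com/operations-center/
headers='X-Jenkins-CLI-Port: 34000
X-Jenkins-CLI2-Port: 34001
X-Jenkins-CLI-Host: joc-s-a.acme.com'

# Extract the endpoint a client master would connect to
port=$(printf '%s\n' "$headers" | awk -F': ' '/^X-Jenkins-CLI2-Port:/ {print $2}')
host=$(printf '%s\n' "$headers" | awk -F': ' '/^X-Jenkins-CLI-Host:/ {print $2}')
echo "remoting endpoint: ${host}:${port}"
```

If X-Jenkins-CLI-Host were absent from the response, the client master would instead connect to the host from the original GET request URL.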

There are also a number of configuration options which, while not strictly necessary, are strongly recommended to be set before adding items to the CJOC cluster:

  • Configure TCP port for remoting channel connections.

    By default, Jenkins picks a random port every time it starts up and listens on that port for JNLP slave connections, Jenkins CLI connections and, most critically, client master connections. Both JNLP slaves and the Jenkins CLI can fall back to an HTTP-based backup remoting channel protocol; client masters currently have no such fall-back.

    If the CJOC server is running a firewall, it is unlikely that the firewall will be configured to accept incoming connections on whatever port CJOC selects, and thus client masters will be unable to connect to the CJOC server. Similarly if the CJOC server is configured for HA and a proxy is being used to tunnel the remoting channel connection over TCP to the active server, it will be necessary to assign a fixed port that will be used by both the proxy and CJOC server instances themselves (i.e. it must be the same port number for the proxy and for Jenkins itself).

    The procedure for selecting a fixed port is as follows:

    • Select the Manage Jenkins Link

    • Select the Configure Global Security setting

    • Enable the Enable security checkbox.

    • Define a fixed TCP port for JNLP slave agents (this TCP port is also used for client masters). Configuring the instance firewall(s) to allow incoming connections on this TCP port can greatly simplify troubleshooting.

      configure tcp port
      Figure 11. Configure a TCP port for JNLP slave agents
    • In order for security to be enabled, it will be necessary to select a Security Realm option. If you do not select a Security Realm then the security settings will be disabled when the form is saved.

      Note
      If you are not ready to configure the Security Realm at this time, select one that should "just work", such as Unix user/group database (if the CJOC server is running on a Unix-based machine) or one of the OpenID-based options, and ensure that you select Anyone can do anything as the Authorization Strategy. This will let you change security settings until you are ready to configure your desired Security Realm.
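Once a fixed port has been defined and the firewall rules are in place, reachability of that port can be checked from a prospective client master. A minimal sketch (the hostname and port are assumptions; substitute your own values):

```shell
# check_port: report whether a TCP port accepts connections
# (uses bash's /dev/tcp; "nc -z host port" works equally well)
check_port() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

check_port localhost 1           # port 1 is almost certainly closed
# check_port joc.acme.com 34001  # substitute your CJOC host and fixed port
```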

If you are only adding new Jenkins instances to the CloudBees Jenkins Operations Center cluster you will likely want to consider the following settings:

  • Configure the Client master on-master executors in the Configure Global Security screen and set it to Enforce without per-master overrides and with an executor count of 0.

    configure executors
    Figure 12. Enforcing on-master executors

    Rationale: if builds are allowed to run on the client masters directly, those builds will run as the Jenkins user and therefore have full access to all the files in JENKINS_HOME. Where builds need to run on the same machine as the client master, the recommended strategy is to create a special user for those builds and attach a "remote" build slave that runs as that user.

  • Configure Security Setting Enforcement in the Configure Global Security screen and set it to Single Sign-On (security realm and authorization strategy) with client master opt-out disabled.

    configure security
    Figure 13. Enforcing single sign-on

    Rationale: it is easier to turn this setting on at the start and enable opt-outs if necessary. Once you have masters which already have their security settings, and more specifically their authorization strategies, configured (which will typically be the case if attaching pre-existing Jenkins masters), it will require some planning to turn the single sign-on feature on.

Validating Installation

The following tests should be performed to ensure that you have completed initial configuration of CJOC Server correctly:

  • Verify that the instance can be accessed through its public location URL from a subset of the expected user population.

  • Verify that the instance can be accessed through its public location URL from a subset of the prospective client masters.

  • Verify that there are no administrative alerts of potential issues with your installation by going to the Manage Jenkins screen. A common alert to look out for is a poorly configured reverse proxy, e.g.

    broken reverse proxy
    Figure 14. A mis-configured reverse proxy
  • Verify High Availability fail-over (if using). Where a proxy is being used (i.e. when not using ARP broadcast to identify the active node), verify that the active instance’s hostname or IP address is advertised in the HTTP headers (X-Jenkins-CLI-Host).

    $ curl -I -L http://jenkins.acme.com/operations-center/
    HTTP/1.1 200 OK
    Server: Winstone Servlet Engine v0.9.10
    Expires: 0
    ...
    X-Jenkins-CLI-Host: joc-s-a.acme.com
    ...
    X-Powered-By: Servlet/2.5 (Winstone/0.9.10)
    Set-Cookie: JSESSIONID.7cbc661b=b756ce00faa7a51e8bcc0fd2a561fec8; Path=/operations-center; HttpOnly
  • Verify that the instance can be accessed through the remoting channel from a subset of the prospective client masters.

    $ curl -I -L http://jenkins.acme.com/operations-center/
    HTTP/1.1 200 OK
    Server: Winstone Servlet Engine v0.9.10
    Expires: 0
    ...
    X-Jenkins-CLI-Port: 34001
    X-Jenkins-CLI-Host: joc-s-a.acme.com
    ...
    X-Powered-By: Servlet/2.5 (Winstone/0.9.10)
    Set-Cookie: JSESSIONID.7cbc661b=b756ce00faa7a51e8bcc0fd2a561fec8; Path=/operations-center; HttpOnly
    $ nc -z -w5 joc-s-a.acme.com 34001 || echo "Connection failed"
    Connection to joc-s-a.acme.com port 34001 [tcp/*http*] succeeded!

Client Masters

CloudBees Jenkins Operations Center (CJOC) supports multiple types of client masters. For CJOC version compatibility, review release notes for your specific version.

This version supports the following versions of Jenkins:

  • CloudBees Jenkins Enterprise 15.05 (at time of writing this was 1.609.14.2)

  • CloudBees Jenkins Enterprise 15.11 (at time of writing this was 1.625.2.2)

  • Jenkins OSS current LTS (at time of writing this was 1.625.2)

  • Jenkins OSS current HEAD (at time of writing this was 1.639)

Note
In all cases, Java 8 is recommended. While technically these versions all run on Java 7, that version is officially end-of-life, so CloudBees recommends Java 8.

Prerequisites

The prerequisites depend on the version of Jenkins that the client master is running.

Jenkins Enterprise 15.05

It is recommended to ensure that the following plugin updates are applied to the CloudBees Jenkins Enterprise 15.05 release line:

Table 4. CloudBees Jenkins Enterprise 15.05 Client Master plugin requirements

Plugin                                        Minimum version   Required
Active Directory plugin                       1.41              Yes
CloudBees Folders Plugin                      5.1               Yes
CloudBees License Manager                     7.12.1            Yes
CloudBees Monitoring Plugin                   2.1               No
CloudBees Role-Based Access Control Plugin    5.3               Yes
CloudBees SSH Agents Plugin                   1.3               Yes
CloudBees Support Plugin                      3.6               Yes
Credentials Plugin                            1.24              Yes
Enterprise License Entitlement Check          7.2               Yes
LDAP Plugin                                   1.11              Yes
Mailer Plugin                                 1.16              Yes
MapDB API Plugin                              1.0.6.0           Yes
Metrics Plugin                                3.1.2.1           Yes
Node Iterator API Plugin                      1.5               Yes
OpenID4Java API                               0.9.8.0           Yes
openid                                        2.1.1             Yes
Operations Center Agent                       1.8.0             Yes
Operations Center Analytics Configuration     1.8.1             No
Operations Center Analytics Reporter          1.8.1             No
Operations Center Client Plugin               1.8.0             Yes
Operations Center Cloud                       1.8.1             Yes
Operations Center Cluster Session Extension   1.8.0             Yes
Operations Center Context                     1.8.0             Yes
SSH Credentials Plugin                        1.10              Yes
SSH Agents plugin                             1.11              Yes
Support Core Plugin                           2.28              Yes
Unique ID Plugin                              2.1.1             No

CloudBees Jenkins Enterprise 15.11

CloudBees Jenkins Enterprise 15.11 version 1.625.2.2 bundles all required plugins.

Jenkins OSS

Installing or upgrading to at least version 15.11.0 of CloudBees Jenkins Enterprise installs all the required plugins. The minimum version of Jenkins OSS that is supported by 15.11.0 is Jenkins 1.625.2.

Running on a TLS End-point

If the CJOC instance is deployed on a TLS end-point, you must import the SSL certificate into the Java keystore of the client master. If the client master is deployed on a Tomcat web container, you might need to specify which keystore Jenkins is using, and verify that Tomcat is using the correct keystore.

If it is not in the standard location ($JAVA_HOME/jre/lib/security/cacerts), add it as part of the Java arguments:

-Djavax.net.ssl.keyStore=$TOMCAT_LOCATION/cacert
-Djavax.net.ssl.keyStorePassword=password
Troubleshooting Prerequisite Tasks

The following are possible troubleshooting checks:

  • Ensure that the certificates are correctly imported in both CJOC and client master.

keytool -keystore $JRE_HOME/lib/security/cacerts -v -list
  • The cacerts file should have user read permission. This is the OS file-system permission, and the user is the OS user that Jenkins’s JVM is running as.

Installing a Client Master

The procedure for connecting a client master to CloudBees Jenkins Operations Center depends on whether the client master already has a valid license for CloudBees Jenkins Enterprise. Regardless, the first step is to create an item in CJOC to represent the client master. This results in the generation of a set of connection details. The connection details are then provided to the client master.

Managing CloudBees Jenkins Operations Center Tasks

Client masters can be created at the root level or within a folder. To do this:

  1. Select New Item.

  2. Specify a name for the client master within the CJOC cluster. The name can be different from the hostname of the server(s) on which the client master runs. The name is only used to identify the server to users.

  3. Select Client Master as the item type.

new job
Figure 15. Creating a new client master
  • You are directed to the configuration screen for the newly-created client master.

  • Various CJOC plugins provide additional functionality for client masters. The properties of these plugins can be changed at any time. Depending on the functionality, the changes may take a small period of time to propagate to the client master.

The core properties provided by the CJOC Server plugin are:

On-master Executors

When a job runs on one of the Jenkins Master’s own executors, that job is running as the user that is running Jenkins and within the same filesystem as the Jenkins Master itself. It is, therefore, possible for a malicious build to:

  • copy information about other "secret" builds

  • modify archived artifacts of existing builds

  • reconfigure the Jenkins master

  • capture secret credentials stored on the Jenkins master

If you are building any source code where the provenance of that code cannot be trusted, you are strongly recommended to ensure that such builds take place using an agent executor rather than an on-master executor.

Note
Considerable safety is achieved just by running the build as a different user in a chroot filesystem on the same machine as the Jenkins master. The important distinction is that the builds themselves are not running as the same user as the Jenkins master process itself.

In larger installations, such as those where CJOC is useful, it is likely not possible for the system administrator to validate whether the source code of each job that builds on the master can be trusted. For this reason, CJOC provides the ability to enforce the number of executors on each client master.

There is a global setting in Jenkins » Configure Global Security » Client master on-master executors that controls this.

client on master executors
Figure 16. Global settings

When enabled, the global setting will force all client masters to have the specified number of on-master executors; typically the sensible value here is 0. There is also the ability to allow per-master overrides. If enabled, this will result in client masters having an On-master executors property where it is possible to override the global enforcement for the specific client master.

client on master override
Figure 17. Per-master override

It can take up to 1 minute for changes to this setting to be propagated to the client master.

Master Owner

In a large Jenkins cluster, one or more people may be responsible for managing one or more client masters. The Master Owners property allows specifying the e-mail address(es) of the client master’s owner(s). When the client master is off-line for an extended period of time, the master’s owner(s) will be emailed with status updates.

master owners
Figure 18. Master owners

The advanced options allow specifying how long the client master must be off-line before notifying its owners.

  • Network partitions between the CJOC server and the client master could be responsible for the client master being considered off-line; owners should be cognisant of this when taking corrective action to resolve an off-line client master.

  • Any change to the client master’s configuration on CJOC resets the timer used to track changes in a client master’s state.

Licensing

Every client master attached to CJOC is required to have a valid CloudBees Jenkins Enterprise license. The CJOC license includes a certain number of client master sub-licenses that it can have issued at any one moment in time. The Licensing property of a client master controls the exact behavior.

There are six licensing options available:

  • No license - this option will not issue a sublicense to the client master. The client master needs to independently have its own CloudBees Jenkins Enterprise license.

  • License (no dedicated executors) - this option is the recommended option for most use cases. The client master will be issued a CloudBees Jenkins Enterprise license from the pool of licenses specified by CJOC license. The issued license will not allow any dedicated executors, it being assumed that the client master’s executors will all be supplied from the pool of shared agents and any shared clouds available via CJOC.

  • License (fixed dedicated executors) - this option is for when a client master needs some specific executors that are not to be shared with any of the other client masters. One use case would be where a specific client master is used to deploy applications into production and a specific build agent is provisioned within the DMZ to actually perform production deployments. When selecting this option it is necessary to specify the number of such dedicated executors that this master will be licensed to have on-line. The CJOC pool of active executors will be reduced by this amount.

  • License (floating dedicated executors) - this option is for when a client master needs some dedicated executors but the need is not continuous. For example if a client master uses a specific build agent provisioned within the DMZ to perform production deployments and the agent is not kept on-line continuously but is instead only taken on-line when a deployment is to be performed. In such cases it may not make sense to permanently reserve a specific number of executors from the pool of executors covered by the CJOC license. When this option is selected, the number of licensed executors will be scaled to match the number of dedicated executors on the client master. This option can also be helpful when a client master has a dedicated cloud.

  • License (floating dedicated executors with upper limit) - this option is identical to License (floating dedicated executors) with the addition of a fixed upper limit, which allows the CloudBees Jenkins Operations Center administrator to grant client masters flexibility of dedicated executor configuration while preventing abuse of such flexibility from starving the entire CJOC cluster of executor licenses.

  • Test instance license - this option is only available if the operations center license includes a provision for test instance client master sub-licenses. The client master will be issued a test instance license (which covers up to five dedicated executors). The client master will have a banner identifying the client master as licensed for testing purposes only.

    test license
    Figure 19. A client master issued with a test instance license

While there are remaining unused client master licenses in the CJOC license, the default value for this property is License (no dedicated executors). Once the pool has been exhausted, the default for new client masters is No license.

If you have exhausted all your client master licenses, evaluation CloudBees Jenkins Enterprise licenses can be used to expand your cluster. These evaluation licenses expire after 30 days, but the 30 day window should be sufficient for you to complete purchase of the required additional client master licenses from CloudBees.

Note

By far the best way to connect a client master with CloudBees Jenkins Operations Center is using the Push configuration option from CJOC. This is illustrated in the Tutorial.

The push configuration method sets the Jenkins location for the client master based on the Client Master URL you supply, which significantly reduces the possibility of a misconfigured client master.

In those cases where push configuration fails, the following procedures can be used to register and license client masters:

Using New (Unlicensed) Jenkins Instance

For new (unlicensed) Jenkins instances where CJOC will be managing the CloudBees Jenkins Enterprise license, the CJOC client plugin provides a registrar for the CloudBees License plugin that connects the client master to CloudBees Jenkins Operations Center and obtains the license from CJOC in one operation.

In order to use this registrar you must first copy the connection details for the client master from the CJOC server.

connection details
Figure 20. Connection Details

Then on the client master itself, select the I have connection details for a CloudBees Jenkins Operations Center server that will issue my license key registrar and paste in the connection details.

registrar
Figure 21. Operations Center Connector Registrar

Clicking Next will connect the client master to the CloudBees Jenkins Operations Center cluster, and assuming the CJOC server is configured to issue the client master with a sub-license, will then issue and install the sub-license, resulting in a configured instance.

Note
When using this registrar, it will configure the Jenkins Location of the client master based on the URL that you use to access the registrar.
Using Existing (Licensed) Jenkins Instance

Where a Jenkins instance already has a valid license (this can include an evaluation license during the evaluation period) you will need to configure the instance with the connection details.

First copy the connection details for the client master from the CloudBees Jenkins Operations Center.

connection details
Figure 22. Connection Details

Then on the client master itself, select Jenkins » Manage Jenkins » Configure » Operations Center Connector.

configuration
Figure 23. Operations Center Connector Configuration

Enable the connector and paste in the connection details.

Note
Verify that the Jenkins Location is correct for the client master. If you have not previously set this value it will default to a value derived from the URL that you use to access the configuration screen.

After saving the configuration, the client master will establish a connection to the CJOC cluster. If the Operations Center server is configured to issue the client master with a sub-license, it will then issue and install the sub-license, overwriting the existing license.

Troubleshooting the Installation

You may experience issues with connecting a client master if the infrastructure is not correctly configured.

If, after attempting to push the configuration to the client master, or after configuring the CJOC connection from within the client master, you are continually presented with the following screen, it is likely that there is a communication issue between the nodes.

unconnected client master
Figure 24. Unconnected client master

The following table provides a series of diagnostic checks to carry out to identify the cause.

Table 5. Client master registration diagnostic checks
Area Diagnostic Steps

Check domain name resolution

  • Can the browser resolve both the CJOC server and client master domain names?

  • Can each server resolve the other’s fully qualified domain name?

  • Check that these domain names are also the ones configured for $JENKINS_URL.

  • If you are in an HA environment, the domain names should resolve to the virtual IP (VIP) address of the cluster, not the individual nodes.

Check the HTTP Headers

Confirm the host and port the client master will use to establish the remoting connection. The client master determines the endpoint by making a GET request to the CJOC server and inspecting the response headers:

  • The value of X-Jenkins-CLI2-Port is used by the client master to determine the destination port. Is it the value expected? Verify by inspecting the response headers.

  • If X-Jenkins-CLI-Host is absent, then the client master will attempt to establish remoting communication to the host in the GET request URL.

  • If X-Jenkins-CLI-Host is present then the client will use the returned host name for remoting communication.

See HA Failover Through Proxy for further details.

Check the network paths

  • Check browser → servers for http(s)

  • Check client master server → CJOC server for http(s) and jnlp ports

  • If you have an HA environment and X-Jenkins-CLI-Host is absent, confirm that port forwarding has been set up for the jnlp port as well as proxying of http(s) requests

  • Consider configuring static jnlp ports (required for HA via a load balancer)
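The domain-name checks in the table above can be scripted. A minimal sketch using getent, to be run on both the CJOC server and each client master (the acme.com hostnames are placeholders to substitute):

```shell
# check_dns: verify a name resolves on this host;
# run it on the CJOC server and on each client master
check_dns() {
  if getent hosts "$1" >/dev/null; then
    echo "$1 resolves"
  else
    echo "$1 does NOT resolve"
  fi
}

check_dns localhost
# check_dns joc.acme.com      # your CJOC hostname
# check_dns master1.acme.com  # your client master hostname
```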

Applying Useful Tools

The tools listed here are for information purposes and may not work on all available platforms. CloudBees does not endorse specific tools or provide support on their use.

  • Browser built-in developer tools - see the support article on browser debugging

  • Charles Proxy is a useful tool for capturing and displaying HTTP requests and responses including headers

  • Wireshark is a useful tool for visualising low-level packet captures from tools like tcpdump. It can help visualise an HTTP session, but it can also show socket-level issues such as connection refused, and help you assert that traffic is using the correct network interfaces. To capture Wireshark-compatible dump files, use this syntax:

    tcpdump -i <interface> -s 65535 -w <some-file>

Using Common Tasks for Client Masters

This section details some common tasks specific to client masters.

Changing the Configuration of a Client Master

Navigate to the CJOC Server configuration page for the client master. There are three routes to this page.

  1. On the client master, select Operations Center » Configure

    accessing config screen from client
    Figure 25. Accessing the configuration page from the client master
  2. On the CJOC Server, select the drop down context menu to the right of the client master’s name and select the Configure option.

    accessing config screen from server
    Figure 26. Accessing the configuration page from the server
  3. On the CJOC Server, click on the health / weather icon for the client master. That will open CJOC’s status page for the client master which has the Manage and Configure options in the left hand menu.

You can modify the configuration of the client master and save or apply any changes.

Note
Depending on the nature of the changes, it may take up to a minute for the changes to be propagated to the client master, assuming that the client master is on-line. If the client master is off-line, the changes will be propagated after the connection between the CJOC server and the client master has been re-established, although again it may take up to a minute after the connection has been established for the changes to propagate.

Removing a Client Master

There are two primary steps required to remove a client master from CloudBees Jenkins Operations Center:

  1. The most important step is to delete the Client Master item from CJOC.

  2. If the client master will continue to operate as an independent master, you will need to disable the Operations Center Connector and potentially reconfigure security.

    • If you are close to the limit of licensed client masters provided by your CJOC license you may need to force a license reset of the client master first in order to ensure that the released license is available for assignment to a new client master (this is only of concern if you will be adding the new client master within 24 hours of removing the old client master).

    • Once you have disassociated the client master from the cluster, if you intend to continue using that Jenkins instance as an independent master, you will either need to obtain a CJE license for the master or remove the CJE components and convert it to a Jenkins OSS instance.

The recommended procedure to remove a client master is as follows:

  1. Navigate to the CJOC Server configuration page for the client master (see previous section) and set the licensing strategy to No license and apply the change.

  2. Navigate to the management page for the client master, i.e. the Manage link from the CJOC Server configuration page for the client master or the Operations Center » Manage link from the client master.

  3. Select the Delete action.

  4. If the client master is being decommissioned, the client master instance can be terminated at this point. Otherwise it will need to be either provisioned with a CloudBees Jenkins Enterprise license and the Operations Center connector disabled, or converted to a Jenkins OSS instance with all licensed components removed (which will have the additional side-effect of disabling the Operations Center connector). Disabling the Operations Center connector is achieved by selecting Jenkins » Manage Jenkins » Configure » Operations Center Connector, deselecting the Enable checkbox, and either saving or applying the configuration change.

    configuration
    Figure 27. Operations Center Connector Configuration

Moving a Client Master

A client master can be moved between folders by selecting the Move action.

Limiting provisioning of nodes on a Client Master

This feature requires the following (or newer) versions of plugins installed in CJOC to work:

  • Operations Center Context 1.8.5

Every Jenkins instance will have its own practical upper limit for either the number of executors or the number of nodes that it can handle. This limit will be determined by things such as the number of CPU cores available to the Jenkins JVM process, the amount of memory allocated to the JVM process, various JVM tuning parameters, etc. Because CJOC does not keep a connection open to either its Shared Agents or Shared Clouds [1] there can be significantly more resources available from CJOC than a specific Jenkins master could safely handle. If there is a spike in the build queue on a specific Jenkins instance, the cloud provisioning from the Operations Center Cloud will start allocating build resources from CJOC to the client master. The cloud provisioning API in Jenkins has no knowledge of the empirical provisioning limits for the Jenkins instance it is operating in, so a large enough build queue could trigger sufficient provisioning as to overload the Jenkins instance. Operations Center Context 1.8.5 introduces functionality to allow defining the provisioning limits to apply in any specific master.

provisioning limits
Figure 28. Provisioning Limits configuration in a Client Master’s Manage Jenkins » Configure System screen.

The provisioning limits are configured from the Client Master’s Manage Jenkins » Configure System screen. There are two types of limits available; both can be enabled, and they operate independently. The limits apply to all provisioning via the Jenkins Cloud API, not just provisioning from CJOC.

The Nodes limit applies to the total number of nodes defined in the Jenkins instance. If enabled, once the total number of nodes goes above this value, the Operations Center Context plugin will block any Node provisioning until the number of nodes drops below this limit.

Note

The default node limit is 100 nodes. This default is not to say that you can expect any Jenkins instance to handle up to 100 nodes. It reflects CloudBees' experience of the point above which any Jenkins master will require tuning. Jenkins masters running on resource-constrained hardware may not be able to achieve 100 nodes even with tuning.

You will need to empirically determine the actual limit for your Jenkins instance.

The Executors limit applies to the total number of executors defined in the Jenkins instance. If enabled, once the total number of executors goes above this value, the Operations Center Context plugin will block any Node provisioning until the number of executors drops below this limit.

Note

The default executor limit is 250 executors. This default is not to say that you can expect any Jenkins instance to handle up to 250 concurrent build jobs. It reflects CloudBees' experience of the point above which any Jenkins master will require tuning. Jenkins masters running on resource-constrained hardware may not be able to achieve 250 concurrent build jobs even with tuning.

You will need to empirically determine the actual limit for your Jenkins instance.
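
Taken together, the two limits act as a simple gate in front of Cloud API provisioning. The following sketch models that gating logic; the function and parameter names are illustrative, not the plugin's actual API.

```python
# Illustrative model of the provisioning gate applied by the Operations
# Center Context plugin (function and parameter names are hypothetical).

def may_provision(current_nodes, current_executors,
                  node_limit=100, executor_limit=250,
                  node_limit_enabled=True, executor_limit_enabled=True):
    """Return True if one more node may be provisioned via the Cloud API."""
    # Provisioning is blocked until the count drops back below the limit.
    if node_limit_enabled and current_nodes >= node_limit:
        return False
    if executor_limit_enabled and current_executors >= executor_limit:
        return False
    return True
```

For example, a master already tracking 100 nodes would have further node provisioning blocked until some of those nodes are removed.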

Shared Configuration

CloudBees Jenkins Operations Center (CJOC) provides a facility to replicate a subset of system configuration information (configuration snippet) to client masters.

This feature requires the following (or newer) versions of plugins installed in CJOC to work:

  • Operations Center Context 1.6.3

  • Operations Center Server 1.6.8

  • Operations Center RBAC 1.6.2

Additionally, each client master requires Operations Center Context 1.6.3 or newer to receive the configuration.

Creating a configuration snippet replicates that configuration to any client masters within the same folder hierarchy that have not opted out of receiving shared configuration. This feature is ideal when you have configuration, such as Alerting or E-mail configuration, that should be kept in sync on one or more client masters. It is possible to define multiple shared configurations within the same folder, as well as in any parent folder, and the configuration from all of these will be replicated to the client master.

Creating a Shared Configuration

To create a shared configuration, review the following:

First, work out which of your client masters should receive the shared configuration, so that you can identify the correct folder(s) in which to create it.

For example, suppose you have six client masters (A, B, C, D, E, and F), you want five of them (A, B, C, D, and E) to have the same e-mail configuration (for example: SMTP server or Jenkins instance e-mail), yet only three of them (C, D, and E) to have the alerting information. You will need to create one item containing the e-mail configuration in a folder that is a parent of all five client masters, and one item holding the Alerting information in a folder that contains, or is a parent of, only C, D, and E.

Root
 + Folder One
 |   \
 |    + Client Master A
 |    + Client Master B
 |    + Folder Two
 |      \
 |       + Client Master C
 |       + Client Master D
 |       + Client Master E
 |
 + Client Master F

In the above hierarchy, the Alerting configuration is created in Folder Two and the e-mail configuration is created in Folder One.
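
The containment rule in the hierarchy above can be modelled as a simple path-prefix check. This sketch is purely illustrative (the folder paths and helper function are hypothetical):

```python
# Hypothetical model of shared-configuration replication: a client master
# receives a snippet when the snippet's folder is an ancestor of the master.

def receives(master_path, snippet_folder):
    """True if the master at master_path is inside snippet_folder."""
    return master_path.startswith(snippet_folder + "/")

masters = ["Folder One/A", "Folder One/B",
           "Folder One/Folder Two/C", "Folder One/Folder Two/D",
           "Folder One/Folder Two/E", "F"]

# E-mail configuration lives in Folder One; Alerting in Folder One/Folder Two.
email_targets = [m for m in masters if receives(m, "Folder One")]
alert_targets = [m for m in masters if receives(m, "Folder One/Folder Two")]
```

With this model, the e-mail configuration reaches A through E but not F, and the Alerting configuration reaches only C, D, and E.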

Note
To obtain the correct hierarchy, you may have to move the client masters to different folders.
  1. Once you have identified the correct location for the Shared Configuration, navigate to that folder and choose New Item, provide a name for the configuration, select the type Miscellaneous Configuration Container and then OK.

    new screen

    The subsequent screen allows you to define the type of configuration snippet.

  2. By selecting Add Snippet and choosing the type of configuration you want to add, you can enter the configuration of the item to replicate. A list of supported snippets is available below.

add snippet screen

Each snippet is configured as it would be in the main Jenkins Configure System page.

As Jenkins consolidates the configuration in the current folder with the configuration in the parent folders, it is important to know which piece of information within each snippet acts as the key, since configuration can be overridden in a child folder. The key prevents the same configuration from being entered twice by accident. In all cases, when snippets with the same key are defined in multiple places, the one in the folder closest to the folder containing the client master takes precedence.

Table 6. Supported snippet types and their key attribute
Snippet Type Key Attribute Inheritance Behaviour

Alerts

Alert title

Snippets with different keys are combined into the configuration.

E-Mail Notification

N/A

As there can be only one email configuration the closest to the client master takes precedence, and others are ignored.

System message

N/A

As there can be only one system message, the closest to the master takes precedence.

Tool Installation

Tool Name

Snippets with different keys are combined into the configuration.

Caution
If multiple snippets are defined with the same key in the same Miscellaneous Configuration Container, or in different Miscellaneous Configuration Containers within the same folder, then the snippet used is non-deterministic; this situation should be avoided.

Understanding Snippet Types

Using Tool Installation

The Tool Installation mechanism allows you to specify tool setup procedures in global configurations and then use them in Jenkins jobs. Various Tool Installation types are provided by Jenkins core and plugins: JDK, Maven, Git, and Docker CLI. Large-scale distributed installations may include dozens of complex tool configurations, hence it is recommended to store them in CJOC using this snippet.

Configuring Tool Installation

Each Tool Installation snippet defines one installation of a tool. It is possible to set up multiple Tool Installations. To do this, do the following:

  1. Create a new Tool Installation snippet.

  2. Select a tool type for the snippet.

  3. Configure the tool according to the guidelines available in the built-in help or on plugin wiki pages.

    • Installation names should be unique within one tool type

    • It is possible to set up multiple tool installers for different node labels

snippets toolInstaller selectSnippet
Figure 29. Step 1. Selecting the snippet
snippets toolInstaller selectTool
Figure 30. Step 2. Selecting tool to be installed
snippets toolInstaller config
Figure 31. Step 3. Setting up a Tool (JDK from Jenkins Core)
Understanding Behavior Specifics

The following define function behaviors when using Tool Installer:

Feature Description

Missing Plugins

When a tool is configured in the Miscellaneous Configuration Container and any part of its configuration (for example: a Tool Installer) comes from a plugin not installed on a Client Master, the configuration for that specific tool is not propagated to that Client Master. If the missing plugin is later installed on the attached Client Master, the client master must be reconnected in order for the Tool Installation to be propagated.

Form Verification

Tool Installation or Tool Installer configuration forms may provide verification for the configuration entries or contain check buttons (for example: Test Connection). In the case of the Shared Configuration Snippet, all these verification handlers are launched on the CJOC instance instead of on the Client Masters. This means that the verification result may be a false positive or a false negative, because the CJOC and Client Master environments may differ.

Note
It is highly recommended to double-check tool configurations on CJOC and on Client Masters after making changes.
Reviewing Tested tools

The Tool Installation Snippet has been tested with the following tools:

A correct behavior for other Tool Installation types is not guaranteed. If you see any issues with the configuration propagation, please contact the CloudBees Support team.

Configuring Tool Installation for Plugin Developers

If your plugins provide their own Tool Installation types, these types automatically appear in the dropdown list. Due to the specifics of the Tool Installation extension point in the Jenkins core, it is not guaranteed that the configuration form will display properly. Missing or duplicated ToolInstallationProperty entries are among the known issues.

To make your Tool Installation compatible with the configuration snippet, follow examples in Jenkins JDK Tool. Tool Installation snippets for such implementations should work correctly out-of-the-box.

Opting a Client Master out of Shared Configuration

In some cases, you may want a specific client master to not receive any shared configuration. To do this, opt the client master out of shared configuration.

To do this, do the following:

Configure the client master and choose Disable under the Shared Configuration section.

client master optout
Note
If shared configuration is not disabled, then the same section provides information about the client master’s ability to support this feature.
client master support

Opting Operations Center into Shared Configuration

It may be useful to opt Operations Center into the Shared Configuration. For example, where you are customizing tool installer locations on shared agents, you will need those tool installers to be configured in Operations Center in order to be able to define the location overrides. Additional use cases include:

  • Configuring common alerts

  • Ensuring the email notification settings are consistently applied across the entire cluster

  • Applying a system message to the entire cluster

If you opt in Operations Center, it will pick up any Shared Configurations defined in the root of Operations Center.

To enable Operations Center to consume the shared configuration you must enable the Shared Configuration option from the Operations Center global configuration.

global config quick jump
Figure 32. Using the global configuration’s "Quick-Jump" navigation to find the Shared Configuration option
global config enabled
Figure 33. Operations Center enabled to consume Shared Configuration

Shared Agents

CloudBees Jenkins Operations Center (CJOC) enables the sharing of agent executors across client masters in the CJOC cluster. "Agents" were formerly called "slaves."

Currently there are a number of restrictions on how agents are shared:

Restriction Description

Non-standard Launcher

If a non-standard launcher is used, the plugin defining the launcher must be installed on all the client masters within scope for using the shared agent, and the plugin versions on the client master and the CJOC server must be compatible in terms of configuration data model. (An example of known non-compatibility would be that the ssh-slaves plugin before 0.23 uses a significantly different configuration data model from the model used post-1.0. This specific configuration data model difference is not of concern because the current versions of Jenkins all bundle versions of ssh-slaves newer than 1.0)

Shared Agent Usage

Shared agents can only be used by sibling client masters or by client masters in sub-folders of the container where the shared agent item is defined.

One Shot Build Mode

Shared agents operate in a “one-shot” build mode, unless the client master loses its connection with the CJOC server. When the connection has been interrupted, client masters use any matching agents on-lease to perform builds. If there are no agents on-lease to the client master when the connection is interrupted, the client master may be unable to perform any builds (unless it has dedicated executors available).

Shared Agent with More than One Executor

If an agent is configured with more than one executor, the other executors are available to start builds while at least one executor is in use and no more builds than the number of configured executors have been started on the agent. In other words, if an agent is configured with four executors, it accepts up to four builds on a client master; but after at least one build has completed, the agent is returned as soon as it becomes idle, even if fewer than four builds have been run during the lease interval.
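
This executor accounting can be sketched as a small state holder (names are illustrative, not part of any CloudBees API):

```python
# Hypothetical sketch of the multi-executor lease rule: at most `executors`
# builds may be started during one lease, and the agent is returned once it
# is idle after at least one completed build.

class MultiExecutorLease:
    def __init__(self, executors):
        self.executors = executors
        self.started = 0    # builds started during this lease
        self.running = 0    # builds currently executing
        self.completed = 0  # builds finished during this lease

    def can_accept_build(self):
        # no more builds than the configured executor count may be started
        return self.started < self.executors

    def start_build(self):
        assert self.can_accept_build()
        self.started += 1
        self.running += 1

    def finish_build(self):
        self.running -= 1
        self.completed += 1

    def should_return(self):
        # returned as soon as it is idle after at least one completed build,
        # even if fewer than `executors` builds ran during the lease
        return self.completed >= 1 and self.running == 0
```

So an agent with four executors that runs only two overlapping builds is returned as soon as both finish, without waiting for four builds to occur.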

The sharing model used for shared agents is the same as the credentials propagation and role-based access control plugin’s group inheritance model. Consider the following configuration:

sharing model
Figure 34. Sample configuration
  • There are three folders: F1, F2, and F1/F3

  • There are three shared agents: F1/S1, F2/S2 and S3

  • There are four client masters: F1/F3/M1, F1/M2, F2/M3 and M4

The following logic is used to locate a shared agent:

  • If there is an available shared agent (or shared cloud) at the current level and that agent has the labels required by the job, then that agent is leased.

  • If there is no matching shared agent (or shared cloud with available capacity) at the current level, then proceed to the parent level and repeat.

Thus:

  • F1/F3/M1 and F1/M2 are able to perform builds on F1/S1 and S3 but not on F2/S2. F1/S1 is preferred as it is “nearer”.

  • F2/M3 is able to perform builds on F2/S2 and S3 but not on F1/S1. F2/S2 is preferred.

  • M4 is only able to perform builds on S3.
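
The lookup walk described above can be sketched as follows (folder paths and the lookup helper are hypothetical; the real logic lives in the Operations Center plugins):

```python
# Hypothetical sketch of the nearest-ancestor search for a shared agent.

def find_shared_agent(master_folder, agents, required_labels):
    """agents maps a folder path ('' is the root) to the labels offered
    by a shared agent defined there. Walk from the master's folder up to
    the root and return the first level with a matching agent."""
    path = master_folder.split("/") if master_folder else []
    while True:
        level = "/".join(path)
        offered = agents.get(level)
        if offered is not None and required_labels <= offered:
            return level  # matching agent found at this level
        if not path:
            return None   # searched up to the root without a match
        path.pop()        # move up to the parent folder

# Shared agents from the sample configuration: F1/S1, F2/S2 and S3 (root).
agents = {"F1": {"linux"}, "F2": {"linux"}, "": {"linux"}}
```

With this model, a master in F1/F3 resolves to the F1 level (F1/S1), while a master at the root can only reach S3.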

Under normal operation, when an agent is leased to a client master, it is leased for one and only one job build. After the build is completed, the agent is returned from its lease. This is known as “one-shot” build mode. If, while the agent is on lease, the connection between the client master and the CJOC server is interrupted, the client master is unable to return the agent until the connection is re-established. While in this state, the agent is available for re-use by the client master.
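
The one-shot lifecycle and its disconnected exception can be summarized in a tiny state sketch (purely illustrative):

```python
# Hypothetical state model of a one-shot lease: the agent is returned after
# its single build, unless the CJOC connection is down, in which case the
# client master retains (and may reuse) the agent until reconnection.

class Lease:
    def __init__(self):
        self.state = "leased"   # agent handed to a client master for one build
        self.connected = True   # CJOC <-> client master connection status

    def build_finished(self):
        if self.connected:
            self.state = "returned"  # normal one-shot behaviour
        else:
            # connection lost: the agent cannot be returned, so the client
            # master may reuse it for further builds
            self.state = "retained"

    def reconnected(self):
        self.connected = True
        if self.state == "retained":
            self.state = "returned"
```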

Installing Shared Agents

You can install shared agents in one of two ways.

  1. To create a shared agent, you must first decide how you want it created:

    • If you want to create the shared agent item at the root of the CJOC server (for example, the agent is available to all client masters), navigate to the root and select the New Job action on the left menu.

    • If you want to create the shared agent item within a folder in the CJOC server (for example, the agent is available only to client masters within the folder or within the folder’s sub-folders), then navigate to that folder and select the New Item action on the left menu.

      In either case, the standard new item screen appears:

      new item screen
      Figure 35. New item screen
  2. Provide a name and select Shared Slave as the type.

    Once the agent has been created, you are redirected to the configuration screen. A newly created shared agent is in the off-line state.

    configuration screen
    Figure 36. Configuration screen

    The configuration screen options are analogous to the standard Jenkins agent configuration options.

  3. When the agent configuration is complete, save or apply the configuration. If you do not want the agent to be available for immediate use, deselect the Take on-line after save/apply checkbox before saving the configuration.

Injecting Node Properties into Shared Agents

You can inject node properties into the shared agent when it is leased to the client masters through the Operations Center Cloud plugin. The initial implementation only provides support for two specific types of node properties:

  • Environment Variables

  • Tool Locations

Note

If you have written your own custom plugins that provide custom implementation(s) of NodeProperty, you can write a custom plugin for CJOC (see Creating Custom Extensions) that provides implementation(s) of com.cloudbees.opscenter.server.properties.NodePropertyCustomizer with the appropriate injection logic for these custom node properties.

To inject node properties:

Select the Inject Node Properties option on the configuration screen and then add the required node property customizers:

  • Environment variables adds/updates the environment variables node property with the supplied values.

  • Tool Locations adds/updates the tool locations. The required tool installers must be defined with the same names both on CJOC and on the client masters to which slaves are leased.

Example 2. Injecting the tool location node properties of a shared agent (slave)
node property inject

Reviewing Common Shared Agent Tasks

Taking an Agent Offline

To take a shared agent offline (for example, for maintenance of the server that hosts the shared agent process or to make configuration changes to the shared agent):

Navigate to the shared agent screen and select Take offline.

Example 3. An online shared agent (slave)
online state

Taking an Agent Online

To take a shared agent online (for example, after maintenance of the server that hosts the shared agent process):

Navigate to the shared agent (slave) screen and select Take on-line.

Example 4. An offline shared agent (slave)
offline state

Configuring a Shared Agent

To configure a shared agent, you must first take the agent offline. If the shared agent is online, selecting Configure prompts you to take the agent offline first.

Example 5. Attempting to configure an online shared agent (slave)
configure online

Deleting a Shared Agent

To delete a shared agent, you must first take the agent offline. If the shared agent is online, selecting Delete prompts you to take the agent offline first.

Example 6. Attempting to delete an online shared agent (slave)
delete online

Moving a Shared Agent

A shared agent can be moved between folders by selecting Move on the shared agent (slave) screen.

Note

When moving shared agents, the JNLP slave (agent) launch commands include the path to the agent, so if you move a JNLP shared agent, you need to update the JNLP agents to connect to the new location. Any agents that are connected while the move is in progress are unaffected. If the CJOC master is restarted or fails over in a HA cluster, however, the JNLP agents are unable to reconnect until they are reconfigured with the new path.

Recovering “Lost” Agents

Occasionally, due to lost connections between client masters and the CJOC Server, a shared agent may become temporarily stuck in an on-lease state, whereby the CJOC Server believes the agent to be leased to a specific client master, but the client master has no knowledge of the shared agent.

Built-in safety mechanisms identify and recover any such “lost” agents. By their nature, these processes perform cross-checks to ensure that an agent in use is not incorrectly recovered.

To start the recovery process, the client master that the agent is leased to must be connected to the CJOC server. Once the connection is established, it can take up to 15 minutes for the recovery process to progress through its checks. Under normal circumstances, the checks are completed within two to three minutes.

If the automatic recovery process fails, there is a Force release link that can be used to force the lease into a returned state.

Caution
Forcing a lease into a released (returned) state bypasses all the safety checks that ensure that the agent is no longer in use.

Reviewing CLI Operations that Support Shared Agents

The following CLI operations are designed to support management of shared agents:

  • create-job can be used to create a shared agent

  • disable-slave-trader can be used to take a shared agent off-line

  • enable-slave-trader can be used to take a shared agent on-line

  • list-leases queries the active leases of a shared agent

  • shared-slave-force-release can be used to force release of a "stuck" lease record

  • shared-slave-delete can be used to delete a shared agent

Shared Clouds

CloudBees Jenkins Operations Center (CJOC) enables the sharing of cloud provisioned agent executors across the client masters in the CJOC cluster.

Currently there are a number of restrictions:

Restriction Description

Non-standard Launcher

If a non-standard launcher is used, the plugin defining the launcher must be installed on all the client masters within scope for using the shared agent, and the plugin versions on the client master and the CJOC server must be compatible in terms of configuration data model. (An example of known non-compatibility would be that the ssh-slaves plugin before 0.23 uses a significantly different configuration data model from the model used post-1.0. This specific configuration data model difference is not of concern because the current supported versions of Jenkins all bundle versions of ssh-slaves newer than 1.0.)

Shared Agent Usage

Shared agents can only be used by sibling client masters or by client masters in sub-folders of the container where the shared agent item is defined.

One Shot Build Mode

Shared agents operate in a “one-shot” build mode, unless the client master loses its connection with the CJOC server. When the connection has been interrupted, client masters use any matching agents on-lease to perform builds. If there are no agents on-lease to the client master when the connection is interrupted, the client master may be unable to perform any builds (unless it has dedicated executors available).

Shared Agent with More than One Executor

If an agent is configured with more than one executor, the other executors are available to start builds while at least one executor is in use and no more builds than the number of configured executors have been started on the agent. In other words, if an agent is configured with four executors, it accepts up to four builds on a client master; but after at least one build has completed, the agent is returned as soon as it becomes idle, even if fewer than four builds have been run during the lease.

Built-in Garbage Collection

If the backing cloud provider plugin performs built-in garbage collection and does not use the Node Iterator API to iterate the nodes that are in use, then the built-in garbage collection may result in the termination of in-progress builds when nodes are on-lease to client masters.

The sharing model used for shared agents is the same as the credentials propagation and role-based access control plugin’s group inheritance model. Consider the following configuration:

sharing model
Figure 37. Sample configuration
  • There are three folders: F1, F2 and F1/F3

  • There are three shared clouds: F1/C1, F2/C2 and C3

  • There are four client masters: F1/F3/M1, F1/M2, F2/M3 and M4

The following logic is used to locate a shared cloud:

  • If there is a shared cloud with available capacity (or an idle shared agent) at the current level and that cloud can provision the labels required by the job, then that cloud is asked to provision an agent for lease.

  • If there is no matching shared cloud with available capacity (or matching idle shared agent) at the current level, then proceed to the parent level and repeat.

Thus:

  • F1/F3/M1 and F1/M2 are able to perform builds on agents provisioned from F1/C1 and C3 but not from F2/C2. F1/C1 is preferred as it is “nearer”.

  • F2/M3 is able to perform builds on agents provisioned from F2/C2 and C3 but not from F1/C1. F2/C2 is preferred.

  • M4 is only able to perform builds on agents provisioned from C3.

Under normal operation, when an agent is leased to a client master, it is leased for one and only one job build. Once the build is completed, the agent is returned from its lease. This is known as “one-shot” build mode. If, while the agent is on lease, the connection between the client master and the CJOC server is interrupted, the client master is unable to return the agent until the connection is re-established. While in this state, the agent is available for re-use by the client master.

Installing a Shared Cloud

To install a shared cloud, you must first determine where you want to create it:

  • If you want to create the shared cloud item at the root of the CloudBees Jenkins Operations Center server (for example, the cloud is available to all client masters), navigate to the root and select the New Job action on the left menu.

  • If you want to create the shared cloud item within a folder in the CloudBees Jenkins Operations Center server (for example, the cloud is available only to client masters within the folder or within the folder’s sub-folders), then navigate to that folder and select the New Item action on the left menu.

In either case, the standard new item screen appears:

new item screen
Figure 38. New item screen
  1. Provide a name and select Shared Cloud as the type.

    Once the cloud is created, you are redirected to the configuration screen. A newly created shared cloud is in the offline state:

    configuration screen
    Figure 39. Configuration screen

    The configuration screen options are analogous to the standard Jenkins cloud configuration options.

  2. When the cloud configuration is complete, save or apply the configuration. If you do not want the cloud available for immediate use, deselect the Take on-line after save/apply checkbox before saving the configuration.

Adding Cloud-defined Node Properties

The Jenkins Cloud API allows a cloud to define the node properties of the agents it provisions. An implementation of the cloud extension point can use node properties to track and identify the agents it has provisioned. As such, you cannot simply define node properties for agents provisioned from the Jenkins Cloud API: defining the node properties would remove any tracking node properties injected by the cloud implementation.

You can inject/override additional node properties. The initial implementation provides support for two specific types of node property:

  • Environment Variables

  • Tool Locations

Note

While the configuration may look similar to that of directly attached agents it must be stressed that these properties are being injected and merged into the list of properties provided by the implementation of the Jenkins Cloud API itself.

The merging of NodeProperty implementations is not something that Jenkins provides an API for, and as such each NodeProperty implementation needs explicit merge logic to be provided.

Where customers have written their own custom plugins which provide custom implementation(s) of NodeProperty, those customers can write a custom plugin for CJOC (see Creating Custom Extensions) that provides implementation(s) of com.cloudbees.opscenter.server.properties.NodePropertyCustomizer with the appropriate injection/override logic for these custom node properties.

Injecting and/or Overriding Node Properties

To inject and/or override node properties, select the Inject/Override Node Properties option on the configuration screen and add the required node property customizers:

  • Environment variables adds/updates the environment variables node property with the supplied values.

  • Tool Locations adds/updates the tool locations. The required tool installers must be defined with the same names both on CJOC and on the client masters to which agents are leased.

node property override
Figure 40. Injecting/overriding the environment variable node properties of provisioned agents

Reviewing Common Shared Cloud Tasks

Taking a Cloud Offline

To take a shared cloud offline (for example, for maintenance of the server that hosts the shared cloud or to make configuration changes to the shared cloud):

Navigate to the shared cloud screen and select Take off-line.

online state
Figure 41. An online shared cloud

Taking a Cloud Online

To take a shared cloud online (for example, after maintenance of the server that hosts the shared cloud):

Navigate to the shared cloud screen and select Take on-line.

offline state
Figure 42. An offline shared cloud

Configuring a Shared Cloud

To configure a shared cloud, you must first take the cloud offline. If the shared cloud is online, selecting Configure prompts you to take the cloud offline first.

configure online
Figure 43. Attempting to configure an online shared cloud

Deleting a Shared Cloud

To delete a shared cloud, you must first take the cloud offline. If the shared cloud is online, selecting Delete prompts you to take the cloud offline first.

delete online
Figure 44. Attempting to delete an online shared cloud

Moving a Shared Cloud

A shared cloud can be moved between folders by selecting the Move action on the shared cloud screen.

Note

When moving shared clouds, remember that the JNLP agent launch commands include the path to the cloud. If you move a JNLP shared cloud, you need to update the JNLP agents to connect to the new location. Any agents that are connected while the move is in progress are unaffected. If the CJOC master is restarted or fails over in a HA cluster, however, the JNLP agents will be unable to reconnect until they are reconfigured with the new path.

Recovering “Lost” Agents

Occasionally, due to lost connections between client masters and the CJOC Server, a Shared Cloud’s agent node may become temporarily stuck in an on-lease state, whereby the CJOC Server believes the node to be leased to a specific client master, but the client master has no knowledge of the node.

Built-in safety mechanisms kick in to identify and recover such “lost” nodes. By their nature, these processes perform cross-checks to ensure that an in-use node is not incorrectly recovered.

To start the recovery process, the client master that the node was leased to must be connected to the CJOC server. Once the connection is established, it can take up to 15 minutes for the recovery process to progress through its checks. Under normal circumstances, the checks are completed in under two to three minutes.

If the automatic recovery process fails, use the Force release link on each lease record to force the lease into a returned state.

Caution
Forcing a lease into a released state bypasses all the safety checks that ensure that the agent is no longer in use.

Reviewing CLI Operations that Support Shared Clouds

The following CLI operations are designed to support management of shared clouds:

  • create-job can be used to create a shared cloud

  • disable-slave-trader can be used to take a shared cloud off-line

  • enable-slave-trader can be used to take a shared cloud on-line

  • list-leases queries the active leases of a shared cloud

  • shared-cloud-force-release can be used to force release of a "stuck" lease record

  • shared-cloud-delete can be used to delete a shared cloud

Java Web Start Agents in Operations Center

CloudBees Jenkins Operations Center (CJOC) provides a mechanism to share agents (formerly called "slaves") across client masters. This works across a wide variety of launcher types, including the "Launch slave agents via Java Web Start" launcher type (also known as the JNLP launcher).

Note

A critical difference between normal JNLP agents and JNLP agents in CJOC concerns how the JVM used to connect to the client master gets its JVM options.

  • In a stand-alone Jenkins environment the JVM options for the agent’s JVM depend on the launch method:

    • If launched using the “Launch” button, then the JVM Options specified within the launcher’s “Advanced” section of the “Configuration” screen for the agent will be injected into the JNLP file and will be used for launching the agent.

    • If launched using the command line, then the JVM Options will be those specified on the command line, and any JVM Options specified within the launcher’s “Advanced” section of the “Configuration” screen for the agent will be ignored.

  • In CJOC, a shared agent or shared cloud using the JNLP launcher behaves in the same way as a stand-alone Jenkins environment for the JVM used to connect to Operations Center; but when leasing the agent out to client masters, a second JVM is started using the JVM Options specified within the launcher’s “Advanced” section of the “Configuration” screen for the agent.

When you configure a shared agent with a JNLP launcher, the resulting shared agent will show you a Java Web Start launch button.

slave
Figure 45. Launch screen

Now go to a machine you plan to use as an agent, start a browser, open this same page, and click the launch button to start Java Web Start. You should see a small GUI window. When the GUI window says "Connected", the computer is under the control of CJOC.

The UI in CJOC changes to indicate that the shared agent is available to be leased to client masters:

slave ready
Figure 46. Agent ready to lease

On supported platforms, this GUI gives you a menu option to install the agent as a service, enabling it to start automatically when the machine boots. You’ll want to install it as a service if you plan to use the computer as an agent permanently.

Installing JNLP agent without GUI

If the computer you plan to use as an agent does not have a GUI, you can run the command indicated in the above UI. See the OSS Jenkins Wiki page for a discussion of the same topic.
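
A headless launch typically looks like the following sketch. The URL, agent name, and secret below are illustrative placeholders only — copy the exact command (including the real secret) from the agent's page in the CJOC UI.

```shell
# Placeholders -- the real command and secret are shown on the agent's page.
CJOC_URL="https://cjoc.example.com"
AGENT="shared-agent-1"
SECRET="0123abcd-placeholder"

# slave.jar is served by the CJOC instance itself (under /jnlpJars/slave.jar).
echo "java -jar slave.jar" \
     "-jnlpUrl ${CJOC_URL}/computer/${AGENT}/slave-agent.jnlp" \
     "-secret ${SECRET}"
```

As above, the script only prints the command line; run it on the agent machine once the placeholders are filled in.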

Bundling multiple JNLP agents as shared cloud

When you have multiple identical JNLP agents, it becomes tedious to configure shared agents individually. To simplify this, CJOC provides a "Java Web Start agents" cloud type in the shared cloud setting.

To configure a shared cloud in this manner, select "Java Web Start agents" as the cloud type and fill in the details of the JNLP agent:

cloud config
Figure 47. JNLP cloud config

All the JNLP agents connected to this cloud share the exact same settings.

When you finish the settings, you’ll see the same Java Web Start launch button as with the JNLP shared agent:

cloud ready
Figure 48. JNLP cloud ready

The difference here is that you can launch an agent from this button multiple times on different computers, and they are all grouped into one big pool. The "current pool size" section indicates the number of JNLP agents that are currently controlled by CJOC, including ones that are leased to client masters.

JNLP Agent Workflow

JNLP agents work a little differently in CJOC compared to stand-alone Jenkins. This section describes those differences.

In stand-alone Jenkins, a JNLP client connects to a master, establishes a channel, becomes an agent, and starts taking orders from the master to perform builds and other tasks.

In CloudBees Jenkins Operations Center, the same JNLP client connects to CJOC and establishes a channel in the exact same manner, but it does not become an agent. Instead, CJOC keeps this client idle until it decides to lease it to a client master. At that point, CJOC orders the JNLP client to launch a separate JNLP client as a child process, which connects to the master to which the agent was leased.

This new child process connects to a client master, establishes a channel, becomes an agent, and starts taking orders from this master, until the lease ends.

One of the implications of this design is that an agent computer must be reachable by all the client masters to which it might be leased.

Shared Cloud Configuration

CloudBees Jenkins Operations Center (CJOC) enables the sharing of cloud configuration across the client masters in the CJOC cluster.

The sharing model used for shared cloud configuration is similar to the shared configuration propagation.

Unlike shared clouds or shared agents, shared cloud configuration replicates the configuration of the cloud provider to the specified client masters such that each client master has separate control of the cloud rather than leasing agents from the CJOC server.

Supported cloud providers

Shared cloud configurations are currently available for the Docker, Amazon EC2, and Microsoft Azure cloud providers, each described below. Other clouds may be available in future software upgrades.

Note
The shared cloud configuration is licensed individually for each cloud type.

Limitations

  1. The plugin defining the configuration must be installed on the client masters within scope for using the shared cloud configuration.

  2. The plugin versions on the client master(s) and the CJOC server must be compatible in terms of configuration data model.

Warning
Violations of the requirements above may have a serious impact on cloud provisioners on client masters. Service administrators should ensure that these requirements are met before propagating a shared cloud configuration.

Shared configuration setup

Shared configurations are created as Jenkins items, like client masters, folders, and templates, using the New Item action in the left-hand menu in CloudBees Jenkins Operations Center. Items can be created at the root of the instance or within a folder.

  • If you create a shared cloud configuration item at the root of CJOC, the cloud will be available to all client masters.

  • If you create a shared cloud configuration item in a folder, the cloud will be available only to client masters within the folder or within the folder's sub-folders.

Step 1. Create a new configuration item

  1. Click the "New Item" action in the Jenkins root or a folder. In both cases you will be presented with the standard new item screen.

  2. Provide a configuration name.

  3. Select the Cloud Configuration type that you wish to create.

  4. Click the OK button. A new configuration window will appear.

new item screen
Figure 49. New item screen

Step 2. Enter the required configuration for the selected cloud.

Options are similar to the standard Jenkins cloud configuration options.

Note
If a cloud provider requires configuration of native tools or libraries, their location must be the same on all client masters that receive this configuration in order for it to operate correctly.

Step 3. Apply changes

When the shared cloud configuration is complete, save or apply the configuration.

The cloud configuration will be pushed to all online client masters that have not opted out of receiving shared configuration. If a client master is offline, the new configuration will be applied when this master connects to CJOC.

Supported configurations

Below you can find configuration examples for the supported clouds.

Note

As mentioned earlier, prior to configuring a shared cloud, ensure that the plugin defining the launcher is installed on all the client masters within scope for using the shared cloud.

Docker Shared Cloud Configuration

Please refer to the Docker plugin’s Wiki page to find setup guidelines and info about supported configurations. This section describes the CJOC specifics only.

  • Container caps are managed individually for each client master; there is no global limit on the number of containers.

  • The current Docker plugin version supports TLS connection options via environment variables only. These environment variables are not distributed to client masters by the Docker shared cloud configuration; they should be set up on each client master using other Jenkins features.

docker configuration screen
Figure 50. Docker Shared Cloud Configuration screen
Testing the configuration
  • If you click the Test Connection button in the configuration, the test will be performed on CJOC. Make sure that the remote Docker host is reachable from CJOC.

  • You can also test the connection from one of the Jenkins client masters once the configuration has been synchronized.

Amazon Elastic Compute Cloud (EC2)

This shared cloud configuration allows you to share settings for the Amazon EC2 cloud, which is part of the Amazon Web Services (AWS) platform.

Please refer to Amazon EC2 plugin’s Wiki page to find setup guidelines and info about supported configurations. Additional plugin configuration guidelines are also available on EC2 Executors page on DEV@Cloud Wiki. This section describes the CJOC specifics only.

  • Instance caps are managed individually for each client master; there is no global limitation for the entire CJOC cluster.

  • If you use an Access Key ID and Secret Access Key to define EC2 credentials, these credentials will be shared across all CJE instances. EC2 must allow multiple logins from different places if you have multiple client masters.

  • If all target CJE instances are running on EC2 instances, you can use an EC2 instance profile to obtain credentials from the instance metadata.

  • Currently it is not possible to set up different regions for individual CJE instances in the AWS EC2 shared cloud configuration. In that case, configure the EC2 cloud individually on each master.

ec2 top configuration
Figure 51. AWS EC2 Shared Cloud Configuration setup
ec2 ami configuration
Figure 52. Amazon Machine Image (AMI) setup in AWS EC2 Shared Cloud Configuration
Testing the configuration
  • If you click the Test Connection button in the configuration, the test will be launched on CJOC. Make sure that the AWS EC2 cloud is reachable from CJOC.

  • If the instance is configured to obtain credentials from an EC2 instance profile, CJOC must be running on a properly configured Amazon Machine Image.

  • You can also test the connection from one of the Jenkins client masters once the configuration has been synchronized. This is highly recommended if your AWS EC2 shared cloud configuration uses an EC2 instance profile to obtain credentials.
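
When relying on an EC2 instance profile, you can verify from the CJOC host that instance-metadata credentials are actually available before testing the connection from Jenkins. The sketch below uses the standard EC2 instance metadata endpoint; it is only reachable from inside an EC2 instance.

```shell
# The EC2 instance metadata service lives at a fixed link-local address and is
# only reachable from inside an EC2 instance; run the printed command on the
# CJOC host itself.
METADATA_URL="http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# Listing this path returns the IAM role name(s) attached to the instance
# profile; appending a role name returns its temporary credentials as JSON.
echo "curl -s ${METADATA_URL}"
```

An empty response (or a connection error) indicates that no instance profile is attached and that the shared cloud configuration will fail to obtain credentials on that host.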

Microsoft Azure Cloud

This Shared cloud configuration allows sharing of settings for Microsoft Azure cloud.

Please refer to the Azure Slave plugin’s Wiki page to find setup guidelines and info about supported configurations. This section describes the CJOC specifics only.

  • Instance caps are managed individually for each client master. There is no global limitation for the entire CJOC cluster.

  • The PublishSettings (Microsoft Azure Subscription Manifest) credentials will be shared across all CJE instances.

azure configuration
Figure 53. Microsoft Azure Shared Cloud Configuration setup
Testing the configuration
  • A test will be launched on CJOC when you click on the Verify Template button in the configuration

  • You can also test the connection from one of Jenkins client masters when the configuration gets synchronized.

Cluster Operations

Cluster Operations is a facility to perform maintenance operations on various items in CloudBees Jenkins Operations Center (CJOC), such as client masters and update centers. Different operations apply to different item types, such as performing backups or restarts on client masters, or upgrading and installing plugins in update centers.

These operations can be run either via a custom project type or via preset operations embedded at various locations in the Jenkins UI.

Cluster Operation Projects

You create a Cluster Operation Project in the same way as you would any other project in Jenkins, by selecting New Item in the view you want to create it in, giving it a name, and selecting Cluster Operation as the item type.

Concepts

A Cluster Operation Project can contain one or more Operations that are executed in sequence one after the other when the project runs.

An Operation has:

  • A type that it can operate on, for example, Client Master or Update Center.

  • A set of target items that it will operate on that is obtained from a selected Source and reduced by a set of configured Filters. The target items will be operated on in parallel, and the max number of parallel items can be configured in the Advanced Operation Options section.

  • A list of Steps to perform in sequence on each target item.

The available sources, filters and steps depend on the target type that the Operation supports.

Tutorial

On the root level or within a folder of CloudBees Jenkins Operations Center,

Select New Item.

Specify a name for the cluster operation, for example "Quiet down all masters".

Select Cluster Operation as the item type.

NewItem
Figure 54. Creating a new cluster operation

You will then be directed to the configuration screen for the newly created project.

configure
Figure 55. Creating a new cluster operation

Click on Add Operation and select Masters to add a new operation with the Client Master target type.

add operation
Figure 56. Creating a new cluster operation

Select From Operations Center Root as the source and add the filter called Is Online. This will select all client masters in CloudBees Jenkins Operations Center that are online when the operation is run.

We have now specified what to run the operation on; next we will specify what to run by adding two steps.

Click on Add Step, select Execute Groovy Script on Master and enter the code:

System.out.println("==QUIET-DOWN@" + new Date());

This will print the text and the current date and time to the log on the Jenkins master, which can be handy for auditing later on.

Add a new step called Prepare master for shutdown.

This step performs functions similar to what you would get if you clicked on Prepare for Shutdown on the Manage Jenkins page on each master.

Your configuration should look something like the following when you’re done:

operation configured
Figure 57. Creating a new cluster operation

Save and then Run like any normal project. When the operation starts, it will run on each client master in parallel. Afterwards you can see the standard notice Jenkins is going to shut down on each client master.

Note

The user that runs the operation will need the RUN_SCRIPT permission on the client master in order for the Groovy step to work, as well as ADMINISTER for the prepare for shutdown step. Otherwise the operation run will fail.

Controlling how to fail

Sometimes it’s desirable to modify how a failure affects the rest of the operation flow. On the configuration screen for the cluster operation project, in each Operation section, there is an Advanced button. Selecting it reveals advanced controls such as the maximum number of masters to run in parallel, as well as Failure Mode and Fail As.

  • Fail As is a way to set the Jenkins build result that the run will get: Failure, Abort, Unstable, and so on.

  • Failure Mode controls what happens to the rest of the run if an operation step on an item fails.

    • Fail Immediately: abort anything in progress and fail immediately.

    • Fail Tidy: wait for anything currently running to finish, then fail. (All operations in the queue are cancelled.)

    • Fail At The End: let everything run to the end, and then fail.

Ad-Hoc Manual Cluster Operations

CloudBees Jenkins Operations Center comes with a couple of preset cluster operations that can be run on selected client masters directly from the side panel of a list view or a client master page. The list of preset cluster operations is stored under Manage Jenkins/Cluster Operations.

Running from a List View

Cluster Operations provides a new list view column type called ClusterOp Item Selector, which appears by default as the rightmost column on new list views and the All view.

AllView
Figure 58. Ad-hoc cluster operation

For list views that existed before Cluster Operations was installed, you need to add the column by editing the view. As with all list views (except the All view), you can change the order of the columns to have them appear in an order more to your liking.

Mark the client masters that you want to run an operation on by ticking the appropriate checkbox in the Op column; the selection on each view will be remembered throughout your session.

In the left panel, there is a menu task named Cluster Operations. Hovering the mouse cursor over that task will reveal a clickable context menu marker. Click on it to open the context menu that contains the available operations for the view.

select run adhoc
Figure 59. Ad-hoc cluster operation

Clicking the Cluster Operations link in the side panel will take you to a separate page with the same operations list.

run adhoc listpage
Figure 60. Ad-hoc cluster operation

On the separate list page you can get to the project page of the preset operation (if you are an administrator) by clicking the gear icon next to the operation name.

Clicking the operation’s name, either via the context menu or the separate list page, will take you to the run page, where you are asked to confirm running the operation; if the operation requires parameters, they will be presented there. The run page also contains the list of selected client masters. Those not applicable for this run are shown with a strike-out through their names, along with a simple explanation of why: either a given master is the wrong type for the operation, or a configured filter removed it from the resource pool. Some operations, for example, are only designed to run on online masters, so any offline masters will be filtered out.

ad hoc run now
Figure 61. Ad-hoc cluster operation
Note
The list of masters and whether operations will run on them is a preliminary display; the list will be recalculated once the operation actually runs. Client masters might come online or go offline in the interval between the display and when you choose to actually run the operation.

Running from a Client Master manage page

The same procedure as in the previous section can be performed on a single client master. The steps are the same, except that no selection on a list view is involved because you are operating on only one client master.

ad hoc from client master
Figure 62. Ad-hoc cluster operation

Operation run results and logs

Each run of a cluster operation project is accessible from the project page in the left panel like any normal Jenkins project. On the run page you can see the operations that were executed, the items (client masters or update centers) that they ran on, and the result in the form of a colored ball (success/failure) as well as a link to the log files for each run.

run page
Figure 63. Cluster operation run page

Console Output in the left panel shows the overall console log for all operations. To see the individual console output of each operation on a client master or update center you can go via the (log) link next to each item on the run page or via a link for each in the overall console output.

CloudBees Jenkins Analytics

CloudBees Jenkins Analytics (CJA) provides insight into your usage of Jenkins and the health of your Jenkins cluster. Each Client Master reports events and metrics to the CloudBees Jenkins Operations Center. These data are viewable through the built-in Build Analytics and Performance Analytics views in the CJOC dashboard. You can also create custom views to share with your team.

CloudBees Jenkins Analytics can help answer such questions as:

  • Why is Jenkins running slowly?

  • What is the job failure rate for a label or master?

  • How long are jobs waiting in the queue?

  • How is the number of jobs and builds growing over time?

  • When should I assign more resources to Jenkins?

build analytics example
Figure 64. Build Analytics Example (build results)

The data generated by the Analytics module is stored in an Elasticsearch instance. You can configure the CJOC server to use either an Embedded Elasticsearch, or you can connect to a remote Elasticsearch cluster for improved scalability and reliability.

Elasticsearch is an open-source search engine which runs on the JVM. Time-series events stored in Elasticsearch can be visualized in the embedded Kibana analytics views on the CJOC dashboard.

Getting Started

To enable the Analytics feature you will have to install the following plugins on CJOC:

  • Operations Center Analytics Kibana Dashboards

  • Operations Center Analytics Feeder

  • Operations Center Analytics Configuration

  • Operations Center Elasticsearch Provider

  • Operations Center Monitoring Plugin

  • Operations Center Analytics

On each CJE master that you want to report information to CJOC Analytics, you will have to install the following plugins:

  • Operations Center Analytics Configuration

  • Operations Center Analytics Reporter

Out of the box, the Analytics module requires some configuration. To do so, click on the Configure Analytics link on the Manage Jenkins page.

Elasticsearch connection options

There are two Elasticsearch configurations available:

  • Remote Elasticsearch - connection to a standalone Elasticsearch cluster, which may be located on a local or remote host.

  • Embedded Elasticsearch - an Elasticsearch instance which is launched and managed by CloudBees Jenkins Operations Center.

Warning
Embedded Elasticsearch is designed for evaluation purposes. It can be used in order to evaluate the functionality or to simplify the setup of development/test CJOC instances. For production instances CloudBees supports Remote Elasticsearch only.

Setting up Embedded Elasticsearch for evaluation

The fastest way to get up and running is to choose the Embedded Elasticsearch option in the Elasticsearch Configuration field. Please note that this option is being provided for evaluation purposes only.

configure embedded elasticsearch
Figure 65. Configuring Embedded Elasticsearch

Once you have done so, your connected Client Masters will begin reporting their performance and build analytics. For more information on configuring the Embedded Elasticsearch, see the section on Remote Elasticsearch Configuration.

Setting up Kibana for visualization

In order to get the dashboards working, you need to set up a default index pattern for Kibana Dashboards. There are two ways to accomplish this:

  1. Restart the CJOC instance. The default indexes will be applied after the restart

  2. Manually configure a default index

    • Open Analytics Dashboard Creator from the left panel on the Jenkins main page

    • Go to the Settings tab of the Kibana 4 interface

    • Specify any default index, e.g. *

    • Save the configuration

After that CJOC will start displaying dashboards in the web interface.

Global Reporting Configuration

On the Configure Analytics page described above, reporting can be enabled by default across all masters. This default applies to new masters. If Allow per-master overrides is unchecked, each master will begin reporting analytics immediately, without regard to individual master configuration.

Individual Master Reporting Configuration

Once Elasticsearch is configured, Client Masters will begin to report their analytics to CJOC. This can be enabled on a per-master basis on the Client Master Configuration screen by checking the Enable checkbox in the Analytics Reporting section.

client master configuration
Figure 66. Configuring a Client Master

Compatibility

Backward-incompatible changes in CJA

This section lists compatibility-breaking changes in CloudBees Jenkins Analytics.

Version 1.8.0 - Migration to Kibana 4

In 1.8.0 CloudBees Jenkins Analytics has been migrated to Kibana 4. This version is a new major release, which is not compatible with the previous Kibana 3 version.

Impact:

  • Custom dashboards created in CJOC 1.6 and 1.7 are not supported in CJOC 1.8 and higher.

  • There is no automatic migration flow available for the legacy dashboards

See Upgrading Kibana Dashboards for upgrade guidelines.

Version 1.8.100 - Database schema change

CloudBees Jenkins Analytics 1.8.100 introduced serious changes in the Elasticsearch database schema. This change was required in order to support referencing multiple nodes from a single build, which was required for Jenkins Pipeline compatibility.

Changes summary:

  • New index set: nodes-*

    • This index stores information about node configuration history (labels, settings, etc.)

    • All secrets are filtered out during data submission.

  • Modification of the builds-* index schema.

    • Now builds contain an array of timestamped node references.

    • The original builtOn and effectiveLabelAtoms fields have been deprecated. They won’t be created for new build records.

  • Modification of all built-in dashboards.

    • There are no visualization changes.

    • Built-in dashboards now support only the new Elasticsearch data format.

Impact on users:

  • CloudBees Jenkins Analytics does not perform migration of original build entries in the builds-* index.

  • Once the CloudBees Jenkins Analytics Viewer plugin is upgraded, Analytics dashboards will stop displaying node and label usage stats for the old data.

  • It is possible to access the historical data in built-in dashboards.

    • For each modified built-in dashboard and visualization, CJA provides a legacy item

    • The ID of such a legacy item is Legacy_${OLD_ID}

    • It is possible to add these dashboards back to CJOC web interfaces by adding a Custom Dashboard View.

  • Custom dashboards require modification if they use builtOn and effectiveLabelAtoms fields in queries.

    • There is no automatic dashboard migration flow available.

The current CloudBees Jenkins Analytics Elasticsearch database schema does not provide all the features required for Pipeline support, so it is subject to future backward-incompatible changes.

Remote Elasticsearch

Supported Elasticsearch versions

CloudBees Jenkins Operations Center versions prior to 15.11 support the 1.3.x line of Elasticsearch. CJOC 15.11 and higher support the Elasticsearch 1.7.x baseline; compatibility with other versions is not guaranteed.

CloudBees Jenkins Analytics is known to be incompatible with the Elasticsearch 2.x baseline. That version introduces breaking changes; hence neither the CloudBees Jenkins Analytics Reporter nor the CloudBees Jenkins Analytics Viewer visualizations support it.

Supported Elasticsearch Authentication methods

The following authentication methods are supported by CloudBees Jenkins Analytics:

  • No authentication

  • BASIC authentication with user and password (since version 1.7.3 of CloudBees Jenkins Analytics Feeder)

  • DIGEST authentication with user and password (since version 1.7.3 of CloudBees Jenkins Analytics Feeder)

The authentication engine may be implemented by the elasticsearch-http-basic plugin for Elasticsearch or by a standalone web server like nginx.
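
Before pointing CJOC at a secured cluster, it can help to confirm manually that the endpoint and credentials work. A minimal sketch follows; the host, user, and password are hypothetical placeholders.

```shell
# Hypothetical endpoint and credentials -- substitute the values CJOC will use.
ES_URL="http://elasticsearch.example.com:9200"
ES_USER="cjoc"
ES_PASS="changeme"

# _cluster/health is a standard Elasticsearch endpoint; an authenticated GET
# should return JSON containing a "status" field (green/yellow/red).
echo "curl -u ${ES_USER}:${ES_PASS} ${ES_URL}/_cluster/health"
```

A 401 response from the printed request indicates the credentials are wrong; a connection error indicates the endpoint is not reachable from the CJOC host.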

For more information see the section on Remote Elasticsearch Configuration.

Embedded Elasticsearch specifics

Compatibility with CJOC features

The Embedded Elasticsearch option is designed for evaluation purposes only. It is not integrated with all CloudBees Jenkins Operations Center features. Below you can find a list of known limitations:

  • Embedded Elasticsearch may operate incorrectly together with High Availability on CJOC. There is a risk of Elasticsearch data corruption in corner cases.

  • The CJOC instance shares all system resources with Embedded Elasticsearch. It may cause system instability in the case of high resource utilization by the Elasticsearch component.

It is therefore recommended that the Remote Elasticsearch option be used for anything but a simple test CJOC setup.

Supported Java versions

The Embedded Elasticsearch option requires CJOC to be running under a Java 7 update 55 (or later) or Java 8 update 20 (or later) JVM. Running on older versions of Java can cause index corruption and data loss causing the Embedded Elasticsearch to fail to start. Running on newer Java versions (Java 9 or above) is not supported.

Compatibility with Elasticsearch and Kibana baselines

Both Kibana and Elasticsearch are standalone products, so the level of backward compatibility between components is determined by those products themselves.

The level of support provided in CloudBees Jenkins Analytics:

  • CloudBees maintains the integration with Elasticsearch versions according to its CloudBees Jenkins Platform support policies

  • Once a new major CJA version is released, the integration with the target Elasticsearch and Kibana version will be maintained during the CJP backporting period defined for the customer service level

  • CloudBees does not guarantee the smooth migration to new Elasticsearch or Kibana baselines

    • Migration between baselines may require manual steps from CJA administrators

    • CloudBees does not guarantee the compatibility of data structures in Elasticsearch, but it will provide data migration tools or guidelines

    • Where possible, CloudBees provides functional replacements of the built-in dashboards in CJA; their look-and-feel may differ across Kibana versions

    • Custom dashboards in Kibana may require reconfiguration

    • Compatibility of any standalone tool is not guaranteed

Jenkins Pipeline compatibility

Jenkins Pipeline is the new engine in Jenkins, which allows describing Jenkins builds independently of particular nodes or SCM instances. The Pipeline internal architecture is different from the classic one used by other job types (e.g. support of multiple nodes), so its support in CloudBees Jenkins Analytics required backward-incompatible changes in CJA 1.8.12.

Below you can find info about particular Pipeline integration cases.

Build Analytics

Currently CloudBees Jenkins Analytics provides limited support for Pipeline in Build Analytics. The limitations mentioned below are subject to future improvement.

  • Node usage reporting - FULLY SUPPORTED

    • CloudBees Jenkins Analytics Reporter captures any usage of Jenkins nodes by node() steps in Pipeline.

    • If a Pipeline run uses a node multiple times, such usages will be recorded in CJA multiple times as well.

    • If a node’s configuration changes during Pipeline job execution between multiple runs on that node, the node references will point to different configuration entries in the nodes-* index.

  • Label usage reporting - PARTIALLY SUPPORTED

    • CloudBees Jenkins Analytics Reporter submits only the node name as a label.

    • Other information cannot be extracted from the Pipeline plugin API.

  • SCM usage reporting - NOT SUPPORTED

    • CloudBees Jenkins Analytics Reporter does not record SCM usage statistics for Pipelines.

    • Current CJA database architecture does not support multiple SCMs, which is required for Pipelines.

Pipeline Analytics

Currently there is no specific Pipeline Analytics feature available in CloudBees Jenkins Analytics out of the box. The CJA Reporter submits build actions and node configuration changes to the database, so it is possible to implement some Pipeline analytics functionality via Custom Dashboards.

Extensive Pipeline analytics is a subject for future CloudBees Jenkins Analytics improvements.

Built-in Dashboards

CJA ships with several preconfigured dashboards. Each analytics dashboard contains several controls which filter the data and allow navigation of the results. The following navigational elements are called out in the screenshot below.

  1. The currently selected Analytics dashboard is indicated in the left sidebar.

  2. Navigation between related reports of the current dashboard is available on the left.

  3. Masters can be filtered by clicking on their names in chart legends

  4. Many panels are clickable and will drill-down (filter) results to only include the given value.

  5. Once a drill-down filter is enabled, dashboards will display a filters tab allowing you to set complex filtering conditions.

  6. You can zoom in on a histogram by clicking and dragging to limit the data displayed.

analytics view controls
Figure 67. Analytics Dashboard Controls
Note

Only users with the "View Dashboard" permission are permitted to see the Analytics dashboards.

Working with Custom Dashboards

Creating a dashboard in Kibana 4

Users with the Create Dashboard permission can click on the Analytics Dashboard Creator link on the left side of the CJOC main page. A common Kibana 4 web interface will be displayed where users will be able to create and edit their own dashboards.

This guide provides only basic info about dashboard creation in Kibana 4. More information can be found in external Kibana tutorials.

In order to create a new dashboard the following steps should be performed:

  1. Ensure that Kibana 4 accepts indexes you want to display.

    • Go to the Settings tab in the top menu and check the Index name or pattern field.

  2. Develop Kibana 4 visualizations using the Visualize tab in the top menu.

  3. Aggregate visualizations into dashboards using the Dashboard tab in the top menu.

You can also browse the built-in dashboards for insight on how to do various things.

Copying existing Kibana items

Copying visualizations:

  1. Go to the Visualize tab of Kibana.

  2. Scroll down to the Or, open a saved visualization section.

  3. Find the visualization you want to copy and click on it to open its editor.

  4. Click the Save button on the upper control bar.

  5. Specify the new name in the Title field.

  6. Click the Save button below the Title field.

copy visualization
Figure 68. Copying the visualization

Copying dashboards:

  1. Go to the Dashboard tab of Kibana.

  2. Scroll down to the Or, open a saved dashboard section and open the dashboard you want to copy.

  3. Click the Save button on the upper control bar.

  4. Specify the new name in the Title field.

  5. Click the Save button below the Title field.

Note
For both dashboards and visualizations, unique IDs of items will be generated according to the specified title.
Creating visualizations

Visualizations can be created using the Visualization tab in the Kibana web interface.

Tip

Be sure to save your changes before leaving the page! Kibana 4 caches changes, but the data may be lost in some cases.

Creating dashboards

After entering the Dashboard Creator, new dashboards can be created using the Dashboard tab in the Kibana web interface.

Dashboards contain a set of visualizations, augmented with position and size information. This means that any new visualizations need to be created before the dashboard if you want to assemble the dashboard in a single pass.

Editing the built-in dashboards provided by CJOC

Editing the built-in dashboards directly is not supported: any change made to them will be overwritten when the CJOC server is restarted.

To customize a built-in dashboard, copy its items instead. For example, to adjust a visualization (such as a chart), perform the following steps:

  1. Open the visualization you want to edit.

  2. Copy it to a new visualization (see Copying existing Kibana items).

  3. Make changes in the visualization.

  4. Copy the original dashboard.

  5. Replace the old visualization with the newly created one.

Adding a Custom Analytics View in CJOC

CloudBees Jenkins Analytics enables creating custom built-in dashboards in the CJOC web interface. The functionality is implemented as a common View type in Jenkins, so these views can be managed individually for Jenkins root and folders. It is also possible to manage View visibility using standard Jenkins permissions.

To create a new Custom Analytics View:

  1. Return to the main CJOC dashboard. Click the [+] button to add a new view.

  2. Choose Custom Analytics View for the type, provide a name, and press OK.

  3. In the Dashboards field, you can add multiple custom analytics dashboards by choosing the custom dashboard from the drop-down list and giving it a corresponding display name.

  4. Once done, press the OK button.

custom view edit
Figure 69. Custom Analytics View configuration page

After creating the view it is possible to navigate dashboards using the side panel. It is also possible to edit and delete views if the user has the required permissions.

custom view index
Figure 70. Custom Analytics View main page
Tip

When editing a dashboard, it’s often useful to see the raw JSON data via an events table. To add one, create a new row and add a table panel with a span of 12. This shows the raw data and gives you an idea of the available fields.

Reindexing for CloudBees Jenkins Analytics

When you initially configure CJA, you may have historical builds which you would like to report on. The Reindex for Analytics Cluster Operation scans all builds on connected Jenkins masters and (re)submits them for analysis.

How to Reindex for Analytics
  1. Create a new Cluster Operations job in CJOC.

  2. In the Operations section add a Masters operation.

  3. In the Target Masters / Source section, choose From Operations Center Root.

  4. In the Steps section, add a Reindex for Analytics step.

  5. Click Save.

reindex
Figure 71. Completed Reindex Cluster Operations Job
Caution

Reindexing involves scanning all builds in a Jenkins instance, which can be an expensive operation. Consider scheduling the Reindex job for off-hours so as not to disrupt other usage of Jenkins.

Configuring Elasticsearch

CloudBees Jenkins Analytics depends upon Elasticsearch 1.7 (docs) and embeds Kibana 4 (docs). There are two primary modes for using Elasticsearch in CJA: Embedded and Remote.

Embedded Elasticsearch allows you to get up and running quickly for short-term (less than two weeks) evaluation purposes only. Remote Elasticsearch instances are needed for high availability and for scaling to hundreds of nodes with thousands of gigabytes of data.

Note
It is recommended to use the Remote Elasticsearch mode for anything other than trivial installations and/or prototyping.

Security

Elasticsearch by itself does not perform authentication or authorization, leaving that as an exercise for the administrator. The securing of Elasticsearch is outside the scope of this document, but the typical solutions include:

  • Restricting the IP addresses to which Elasticsearch binds the Elasticsearch ports (by default 9200 and 9300).

  • Restricting the IP addresses from which connections to the Elasticsearch ports will be accepted.

  • Applying an authentication layer, either through an HTTP proxy or using an Elasticsearch plugin.

Note
CJOC provides a proxy for Elasticsearch that is integrated with the CJOC authentication and authorization mechanisms in order to allow the Kibana dashboards to work when embedded in CJOC.
Note
By default the Embedded Elasticsearch instance will only bind to localhost and does not provide any request authentication. This should be considered minimal security. For production use we recommend a Remote Elasticsearch instance.
Embedded Elasticsearch

The security options of the Embedded Elasticsearch instance are limited to controlling the bind address of Elasticsearch, preventing remote access. In cases where you can trust a specific subnet that CJOC is connected to, this behavior can be changed by altering the Bind Host configuration entry.

Remote Elasticsearch

Prior to CJA 1.7.3, there was no support for connection authentication. In CJA 1.7.3 support was added for name/password authentication using either HTTP Digest or HTTP Basic authentication. The authentication mode is controlled by the Auth Scheme:

NONE

No authentication will be performed. The selected Username / Password credentials will be ignored.

BASIC

The selected Username / Password credentials will be used to provide preemptive HTTP Basic authentication of all requests to the remote Elasticsearch.

DIGEST

The selected Username / Password credentials will be used to provide preemptive HTTP Digest authentication of all requests to the remote Elasticsearch.

Warning

The Elasticsearch ports (default of 9200/9300) are neither authenticated nor encrypted and should not be exposed to either untrusted users or hosts. Use network security filtering to limit access to these ports only to the CJOC host or other Elasticsearch/trusted hosts.

Neither users nor other Jenkins instances require direct access to the Elasticsearch ports or protocol. All user access (for Analytics/Kibana dashboards) is done via the special /elasticsearch endpoint on the CJOC, which provides an authenticated proxy which filters access according to the provided security configuration.

Embedded Elasticsearch Configuration

To configure the Embedded Elasticsearch, click on the Configure Analytics link on the Manage Jenkins page. Then, choose the Embedded Elasticsearch option in the Elasticsearch Configuration field. There are several fields available for configuration and they are extensively covered in the inline documentation, which you can see by clicking on the help icon. Generally, the default settings are sufficient.

Note

The Embedded Elasticsearch requires a Java 7 JVM and at least 100 megabytes of permanent generation (PermGen) memory. You can increase the PermGen by passing the -XX:MaxPermSize=128M option to the JVM.

configure embedded elasticsearch
Figure 72. Configuring Embedded Elasticsearch

The Embedded Elasticsearch instance stores its data in the $JENKINS_HOME/elasticsearch directory by default. This can be changed to an absolute path outside of the Jenkins home if desired.

Testing Embedded Elasticsearch Configuration

The Embedded Elasticsearch configuration interface provides a Test connection button, which allows you to check the settings before saving them. When the button is pressed, a test request is sent to the Elasticsearch instance. If everything is OK, CJOC displays the version of the server instance; otherwise an error summary message is shown in the web interface. Full diagnostic information for Test connection operations is recorded in the Jenkins logs.

Warning
Test connection can only succeed if the Elasticsearch instance is started. When you select this provider for the first time and have not yet saved the configuration, the embedded Elasticsearch instance will not have been started; you will need to save the configuration first. The instance is lazily started, and as such may not be started until after either a client master has attempted to submit metrics or the Test connection button has been clicked. In short, you may have to Apply the configuration and click the Test connection button twice (with a 30 second wait in between) before you see a successful connection using the Embedded Elasticsearch option.

Remote Elasticsearch Configuration

While the Embedded Elasticsearch is adequate for smaller deployments, scalability and high availability requirements will drive the need for remote Elasticsearch instances running on one or more remote hosts. Using a Remote Elasticsearch, CJA can take advantage of effectively unlimited compute and disk resources.

The following steps describe how to manually configure Elasticsearch using the RPM package. The DEB and zip distributions require similar steps.

Configuring a Remote Elasticsearch Instance
  1. On the remote host, run sudo yum install https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.7.4.noarch.rpm

  2. As directed run: sudo /bin/systemctl daemon-reload; sudo /bin/systemctl enable elasticsearch.service

  3. Edit /etc/elasticsearch/elasticsearch.yml

    1. Uncomment and set discovery.zen.ping.multicast.enabled to false

    2. Uncomment and set the discovery.zen.ping.unicast.hosts to point to any other Elasticsearch nodes

    3. Add the path.repo setting as directed in the Snapshot And Restore section of the Elasticsearch Reference documentation. This must be the same as the Path to Backup Elasticsearch Snapshots to field on the CJA configuration page, as indicated in the Automatic Backups section of this document.

    4. See the Elasticsearch documentation for more information on how to configure Elasticsearch

  4. Start Elasticsearch by running sudo /bin/systemctl start elasticsearch.service

  5. Optional: Set up security for the newly created Elasticsearch instance.

  6. Repeat the above steps for additional Elasticsearch nodes

  7. In CJOC, go to Manage Jenkins > Configure Analytics and choose the Remote Elasticsearch option.

  8. Enter the URLs for the Elasticsearch hosts in the Elasticsearch URLs field as a space-delimited list. Note that these URLs should include the port, which is 9200 by default.

  9. Optional: Select the authentication scheme and select or create credentials to access secured Elasticsearch instances

  10. Press Save
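Taken together, the elasticsearch.yml edits from step 3 might look like the following sketch; the host names and the repository path are placeholders for your environment:

```yaml
# /etc/elasticsearch/elasticsearch.yml (fragment)

# Step 3.1: disable multicast discovery
discovery.zen.ping.multicast.enabled: false

# Step 3.2: point at the other Elasticsearch nodes (placeholder host names)
discovery.zen.ping.unicast.hosts: ["es-node1.example.com", "es-node2.example.com"]

# Step 3.3: must match "Path to Backup Elasticsearch Snapshots to" in the
# CJA configuration (placeholder path; must be a shared filesystem)
path.repo: ["/mnt/es-backups"]
```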

configure remote elasticsearch
Figure 73. Configuring Remote Elasticsearch Instances
Testing the Remote Elasticsearch Configuration

The Remote Elasticsearch configuration interface provides a Test connection button that allows an administrator to validate the configuration prior to saving it. When the Test connection button is pressed, all specified URLs are checked sequentially with the selected authentication scheme and the specified credentials.

For each URL the checker tries to retrieve the Elasticsearch summary info. If the operation succeeds, CJOC prints the summary info for the URL; if the validation fails, CJOC prints an error/warning summary to the validation report. More detail is available in the CJOC system logs.

Note
The image below shows an example where one URL is accessible but the others return communication errors.
configure remote elasticsearch test
Figure 74. Testing Remote Elasticsearch Instances
Automatic backups of Elasticsearch

Backing up Elasticsearch can be enabled via a single checkbox. This performs a regular backup to the filesystem that Elasticsearch is running on. The path you specify must be on a shared filesystem accessible by all nodes in the Elasticsearch cluster. The precise details of how this works are described in the Snapshot and Restore Elasticsearch documentation.

The configuration options are:

  • Enable Elasticsearch Backups - Check this to enable backups of Elasticsearch.

  • Backup Interval in Minutes - How frequently to perform snapshots of the Elasticsearch cluster.

  • Path to Backup Elasticsearch Snapshots to - The path on the filesystem where the snapshots will be created.

  • Number of Backup Snapshots to Keep for Elasticsearch - The number of historic snapshots to keep. Older snapshots will be deleted regularly to conserve storage.

  • Name Within Elasticsearch to Use for Backups - The name of the backup configuration to use within Elasticsearch.

Restoring must be done manually from a backup. The API to use is documented in the Elasticsearch Restore documentation.
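As an illustration, a restore call against the snapshot API can be sketched as follows. The endpoint, repository name, and snapshot name are hypothetical placeholders; the authoritative parameters are in the Elasticsearch Restore documentation.

```python
import json

# Assumed endpoint and hypothetical repository/snapshot names -- substitute
# the values from your own backup configuration.
ES = "http://localhost:9200"
repository = "cja-backups"        # "Name Within Elasticsearch to Use for Backups"
snapshot = "snapshot-2016-01-01"  # an existing snapshot in that repository

# A restore is a POST to this URL:
url = "%s/_snapshot/%s/%s/_restore" % (ES, repository, snapshot)

# Optional body restricting which indices to restore:
body = json.dumps({"indices": "builds-*,items"})
```

The request can then be issued with any HTTP client (for example curl) against the running cluster.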

configure elasticsearch backups
Figure 75. Configuring Backups For Elasticsearch

Data Schema and Retention

Analytics data are stored in Elasticsearch using date-stamped index names. These are retained at various intervals, after which the index for that day is deleted automatically by CJA. Future versions of CJA will allow you to configure the retention interval of each index type. The indexes and their retention period are listed below.

Table 7. Index Purpose and Retention Period

metrics-*
    Retention period: 3 days
    Contents: Metric reports from Jenkins masters, which are submitted every 10 seconds.
    Disk usage expectation: 450 MB (stable) per connected master

metrics-hourly-*
    Retention period: 3 years
    Contents: Hourly summary of the metrics reports for each master, based on the metrics-* index.
    Disk usage expectation: 210 MB per connected master, per year

builds-*
    Retention period: 3 years
    Contents: Builds reported for indexing by Client Masters. Contains the minimal data fields needed for reporting. Does not include console logs or artifacts. Starting from CloudBees Jenkins Analytics 1.8.100 it does not include detailed node info; this data has been moved to the nodes-* index.
    Disk usage expectation: 2 KB per build

nodes-*
    Retention period: 3 years
    Contents: Node configuration changes reported for indexing by Client Masters. Event examples: node config save, creation/deletion of nodes. Such data is recorded for all node types, including ephemeral and cloud-provisioned nodes.
    Disk usage expectation: 1 KB per configuration change

items
    Retention period: Forever
    Contents: Minimal information about each job and folder.
    Disk usage expectation: 1 KB per job or folder

kibana-4-cloudbees
    Retention period: Forever
    Contents: Dashboard and panel definitions.
    Disk usage expectation: 30 KB per dashboard

Note

Deleting a build in Jenkins does not result in the build being deleted from Analytics, because this would diminish the accuracy and value of the Build Analytics feature. Instead, builds will be 'forgotten' 3 years after they were last reported, when the date-stamped builds-* index for that day is deleted.

Note that "Reindexing" resets the lifespan of a build: if Jenkins still has a build from 5 years ago and you reindex that Jenkins, the build will be retained in the builds-* index for another three years, regardless of its lifespan in Jenkins itself.
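For rough capacity planning, the per-index figures in Table 7 can be combined into an estimate. The sketch below uses the documented figures; the master count, build rate, and job count are hypothetical inputs:

```python
# Rough disk estimate derived from the Table 7 figures; the master count,
# build rate, and job count below are hypothetical inputs.
MB = 1024 * 1024
KB = 1024

def estimate_disk_mb(masters, builds_per_year, years=3, jobs=500):
    metrics = masters * 450 * MB                 # metrics-*: ~450 MB stable per master
    metrics_hourly = masters * 210 * MB * years  # metrics-hourly-*: ~210 MB per master per year
    builds = builds_per_year * years * 2 * KB    # builds-*: ~2 KB per build, kept 3 years
    items = jobs * 1 * KB                        # items: ~1 KB per job or folder
    return (metrics + metrics_hourly + builds + items) / MB

# Example: 5 connected masters, 100,000 builds per year, 500 jobs
print("%.0f MB" % estimate_disk_mb(5, 100000))
```

The nodes-* and kibana-4-cloudbees indexes are omitted as they are usually negligible; treat the result as an order-of-magnitude estimate only.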

Upgrading CloudBees Jenkins Analytics

Upgrading Elasticsearch from 1.3.x to 1.7.x

Starting from 1.8.0, CloudBees Jenkins Analytics has a new recommended version of Elasticsearch (1.7.3). There are known compatibility issues between Elasticsearch versions, so the upgrade should be performed with caution.

Known migration issues:

  • Old versions of Elasticsearch store the state data in different file formats. If you upgrade Elasticsearch without cleaning up these state files, the Elasticsearch instance may fail to start. The issue is registered as elasticsearch #10565.

Remote Elasticsearch

Remote Elasticsearch instances are not part of the CloudBees Jenkins Platform, so they are managed and supported independently. Upgrade guidelines are available in the Rolling Upgrades section of the official Elasticsearch documentation.

Additional steps, which are recommended for Elasticsearch 1.7.x:

  1. Set the threadpool.search.queue_size setting for all Elasticsearch nodes.

    • The default value is too low for the correct behavior of CJA dashboards based on Kibana 4.

    • A low value may lead to the kibana #3221 issue.

    • The recommended queue size is 2000 or larger.
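On a remote node this value can be set in elasticsearch.yml, for example (a sketch; the file location depends on your packaging):

```yaml
# /etc/elasticsearch/elasticsearch.yml (fragment)
# Raise the search queue so the Kibana 4 based CJA dashboards do not
# hit rejected search executions:
threadpool.search.queue_size: 2000
```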

Embedded Elasticsearch

The Embedded Elasticsearch instance in CloudBees Jenkins Analytics is implemented by the Operations Center Embedded Elasticsearch plugin. The new version of Elasticsearch will be deployed during the upgrade. The migration issues mentioned above also apply to the embedded Elasticsearch and require some actions.

Migration steps are listed below. Request patterns for steps 2, 3, 6, and 7 are available in the Rolling Upgrades guide.

  1. Perform a backup of the Elasticsearch data before the plugin setup. Note that plugins like the CloudBees Backup Plugin ignore the Elasticsearch directory.

  2. Disable shard allocation using the REST API.

  3. Stop non-essential indexing and perform a synced flush.

  4. Install the new version of the Operations Center Embedded Elasticsearch Plugin.

  5. Restart the CloudBees Jenkins Operations Center instance and wait until it starts up.

  6. Re-enable shard allocation.

  7. Wait for the instance to recover.
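The REST requests behind steps 2 and 6 follow the Rolling Upgrades guide. A sketch of the request bodies is shown below; the endpoint is an assumed default local instance:

```python
import json

# Endpoint for a default local Elasticsearch instance (an assumption;
# adjust host/port for your deployment).
ES = "http://localhost:9200"

# Step 2: disable shard allocation -- PUT {ES}/_cluster/settings
disable_allocation = json.dumps(
    {"transient": {"cluster.routing.allocation.enable": "none"}})

# Step 3: synced flush -- POST {ES}/_flush/synced (empty body)

# Step 6: re-enable shard allocation -- PUT {ES}/_cluster/settings
enable_allocation = json.dumps(
    {"transient": {"cluster.routing.allocation.enable": "all"}})
```

Each body is sent with an HTTP client such as curl; see the Rolling Upgrades guide for the exact sequence.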

Environment variables:

  • The default value of threadpool.search.queue_size is 2000. The value can be changed in the Advanced section of the Embedded Elasticsearch settings on the CloudBees Jenkins Analytics global configuration page in the CJOC web interface.

Upgrading Kibana Dashboards

Starting from version 1.8.0, CJA provides dashboards based on an embedded Kibana 4 instance in CloudBees Jenkins Operations Center. Previous CJOC versions used Kibana 3, which is not compatible with Kibana 4 due to significant architectural changes. There is no automatic migration support, so the update requires manual actions on the CJOC instance.

Potential impact:

  • CloudBees provides an automatic migration for all default dashboards available in CJOC.

  • Standard Build Analytics and Performance Analytics views will be automatically migrated and enhanced during the upgrade.

  • All user-created Kibana 3 dashboards for CJOC require a manual migration to Kibana 4.

  • Custom Analytics Dashboard views in CJOC may require reconfiguration.

Migration approach:

  1. Kibana 4 will be automatically deployed during the upgrade of CJA Dashboards Plugin.

    • You may need to set up the default index pattern after the upgrade (see Getting Started)

  2. Kibana 3 data will be retained within the kibana-int index. It is possible to retrieve this data using the Elasticsearch API or utilities like Elasticdump.

  3. Since the data format of the other indices has not changed, Kibana 3 can be launched as a standalone application in order to view the original dashboards while implementing their replacements in Kibana 4.

  4. After the environment setup, it is possible to implement new dashboards with the same functionality.

Metrics available on Analytics

The CloudBees Analytics plugin collects the metrics exposed by other plugins.

The main plugin which exposes these metrics is the open-source Jenkins Metrics Plugin, which defines an API for integrating the Dropwizard Metrics API within Jenkins. It defines a number of standard metrics and provides some basic health checks:

  • System and Java Virtual Machine metrics

  • Web UI metrics

  • Jenkins specific metrics

  • Standard health checks

More information about the specific metrics exposed by the Jenkins Metrics Plugin can be found at Monitoring Plugin.

Other plugins can also expose additional metrics, like the Metrics Disk Usage plugin. If more metrics are needed, you can implement them in a Jenkins plugin or look for a plugin that already exposes what you are looking for.

Known limitations (Errata)

This section contains information about CloudBees Jenkins Analytics limitations in the current version. There is no confirmed ETA for fixes of the issues referenced in this section.

Build Analytics - Node Label usage stats

Jenkins supports the definition of complex labels for nodes, including expressions with logical operators and conjunctions. This level of complexity is not currently supported by CloudBees Jenkins Analytics, and labels with complex expressions do not report correctly. As a workaround it is possible to simplify node label expressions by defining multiple labels for the same node.

Build Analytics - Limited Pipeline support

CloudBees Jenkins Analytics does not fully support the Pipeline job type. If you combine classic (Freestyle, Matrix) and Pipeline jobs within a single Client Master, it may impact the accuracy of Build Analytics for node labels and SCMs, because CloudBees Jenkins Analytics Reporter does not submit all the information.

See Jenkins Pipeline compatibility for more info.

Security

Introduction

CloudBees Jenkins Operations Center (CJOC) uses the standard Jenkins security model. In other words, there are two axes to security:

  • The security realm - responsible for identifying the user and reporting any groups defined in the security realm that the user belongs to.

  • The authorization strategy - responsible for determining the set of permissions that an identified user has on any specific object within the Jenkins object tree.

There are three modes you can select between with CloudBees Jenkins Operations Center:

  1. All Jenkins client masters are independent and can choose their own security realms and authorization strategies.

  2. All Jenkins client masters will be forced to delegate their security realm to CJOC but can choose their own authorization strategies.

  3. All Jenkins client masters will be forced to delegate their security realm to CJOC and will be forced to use the same authorization strategy configuration as CJOC.

Finally, authorization strategies that are CJOC-aware (at the time of writing, the only such authorization strategy is the CloudBees Role Based Access Control Plugin) can contextualize the authorization strategy configuration of individual client masters based on the context within CJOC in which the client master is defined.

CloudBees recommends the following security configuration as the ideal. Customers can use other configurations, but the following recommendations provide the greatest functionality and flexibility:

  • Enable security

  • Select any security realm

  • Select the CloudBees Role Based Access Control Plugin as the authorization strategy

  • Select a markup formatter

  • Enable prevention of cross site request forgery exploits

  • Enforce 0 on-master executors to prevent jobs running on the master from modifying the master’s configuration.

  • Select Single Sign-On (security realm and authorization strategy) as the security settings enforcement policy.

  • If you are integrating existing client masters into CJOC, it may be beneficial to allow client masters to opt-out of the security settings enforcement policy while you decide how to transition their existing configuration to a CJOC managed configuration.

  • Select the appropriate default authentication mapping strategy. If you have different classes of masters you will want to enable per-master configuration of authentication mapping. Where all masters are managed by the Operations Center administrators then Trusted master with equivalent security realm is likely appropriate. Selecting Restricted master with equivalent security realm is appropriate for low risk masters where the team(s) using the master have root access to the master. Select Untrusted master with equivalent security realm if you have higher risk masters.

    Note

    The choice of authentication mapping strategy may affect the availability of some functionality.

    For example, a master with the Untrusted master with equivalent security realm mapping will only be able to see other client masters that are visible to unauthenticated users, and the remote job trigger functionality from that master will only be able to trigger jobs that can be triggered by unauthenticated users.

  • Enable Slave → Master Access Control (you will also want to enable this on all masters).

recommended config
Figure 76. Recommended configuration

Single Sign-On Fallback Behavior

When using either of the Single Sign-On security modes, Operations Center supports a fallback mechanism to increase resiliency across the platform. If Operations Center goes offline, the Client Masters connected to that Operations Center will detect the inability to connect, then fall back to using the same Security Realm as defined in Operations Center, but locally from the master. For example, if you use the Active Directory plugin from Operations Center and enable Single Sign-On, the same Active Directory configuration will be pushed to Client Masters in the case of an Operations Center outage. This fallback behavior allows Client Masters to continue to authenticate until Operations Center connectivity is restored.

Note

Given this fallback behavior, you must ensure any custom plugins used for authentication (i.e. a custom security realm) in combination with Operations Center’s Single Sign-On behavior are installed on Operations Center and all connected Client Masters participating in Single Sign-On.

CloudBees Jenkins Operations Center specific permissions

CloudBees Jenkins Operations Center defines a number of new permissions that are specific to CJOC:

Client Master / Configure

This permission allows access to the configuration and management pages of client masters.

Shared Cloud / Configure

This permission allows access to the configuration page of shared clouds.

Shared Cloud / Connect

This permission allows connecting a JNLP agent to a shared cloud.

Shared Cloud / ForceRelease

This permission allows forcing a lease of a shared cloud into the released state.

Shared Agent / Configure

This permission allows access to the configuration page of shared agents.

Shared Agent / Connect

This permission allows connecting a JNLP agent to a shared agent.

Shared Agent / ForceRelease

This permission allows forcing a lease of a shared agent into the released state.

Authentication mapping

CJOC acts as a gateway between each of the client masters. Any request from one client master to another client master will be routed through CJOC. Each request will be tagged with the authentication that originated the request. Because different client masters can have different levels of trust or indeed could use completely different authentication or authorization strategies it is necessary for the CJOC administrator to define the default authentication mapping strategy and optionally define specific mapping strategies for individual client masters.

In general, requests from a client master have one of two origins:

  • User requests driven via the UI. These requests will be tagged with the authentication of the user making the request. An example of one of these requests would be using the path browser to select a job to trigger.

  • Build driven requests. For a default installation of Jenkins or CJE these requests will be tagged with the SYSTEM authentication. The authentication of a build can be modified. For example the Authorize Project plugin allows jobs to be configured to run as: ANONYMOUS, a specific user, or the user who triggered the build.

Requests from a client master to another client master will be mapped twice. The first mapping will be to convert the source client master’s authentication into the authentication for CJOC. The second mapping will be to convert CJOC’s authentication into the authentication for the target master.

Authentication mapping also takes place for requests originating from CJOC to client masters; such requests will be mapped once.

Note
Cluster Operations and Authentication Mapping

When running a cluster operation that has been configured as a job on CJOC, on a default installation of CJOC the defined cluster operation will run using the SYSTEM authentication. It is expected that the CJOC administrator will either:

  • restrict the BUILD permission on the cluster operation to those users that are permitted to perform the operation; or

  • install the Authorize Project plugin on CJOC and configure the job to run with the appropriate authentication.

Because ad-hoc cluster operations do not have a ‘job’ against which to either configure the project authorization or control the permissions, CJOC provides a built-in authorization for ad-hoc cluster operations. When running an ad-hoc cluster operation from CJOC, the operation will always be tagged with the authentication of the user triggering it.

Note
Authorize Project plugin and Credentials

When using the Authorize Project plugin the credentials available to the job may be different from when the job runs as SYSTEM.

By default, if the authentication that the build is running as has the Job/Build permission then the credentials should include all those credentials within the scope of the build job and also include any credentials defined in that user’s personal credential store.

There are two system properties that enable additional permissions to control the credentials available:

  • -Dcom.cloudbees.plugins.credentials.UseOwnPermission=true enables the configuration of the Credentials/UseOwn permission. A user must have this permission on a job (or Jenkins/Administer which implies this permission) in order for their personal credential store to be available to the job when running as that user.

    Caution
    Credentials/UseOwn permission is normally implied by the Job/Build permission. When you enable configuration of this permission using the -Dcom.cloudbees.plugins.credentials.UseOwnPermission=true system property, the permission changes to being implied by the Jenkins/Administer permission instead. This makes the configuration effectively useful, enabling you to let a user trigger a build but not use their own credentials.
  • -Dcom.cloudbees.plugins.credentials.UseItemPermission=true enables the configuration of the Credentials/UseItem permission. A user must have this permission on a job (or Job/Configure which implies this permission) in order for the credential stores within scope of the job to be available when the job is running as that user.

Tip
If you enable either of these system properties on a client master and you have configured CJOC to push RBAC to all client masters, it is recommended that you enable the system properties on CJOC and all client masters so that the role definitions will be consistent.

Enabling the Credentials/UseOwn and Credentials/UseItem permissions should (assuming that the plugins using the credentials are using the credentials API correctly) allow fine-grained control over what credentials a job has access to when using the Authorize Project plugin (or any other plugin that provides authorization for jobs).
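Both flags are standard JVM system properties passed at Jenkins startup. A sketch for an RPM-style installation follows; the file path and variable name are assumptions that vary by packaging:

```shell
# /etc/sysconfig/jenkins (fragment) -- path and variable are packaging-dependent
JENKINS_JAVA_OPTIONS="$JENKINS_JAVA_OPTIONS \
  -Dcom.cloudbees.plugins.credentials.UseOwnPermission=true \
  -Dcom.cloudbees.plugins.credentials.UseItemPermission=true"
```

Restart Jenkins after adding the options; as noted in the tip above, apply them consistently on CJOC and all client masters.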

The available authentication mapping strategies depend on how you have configured the Client master security on CJOC:

  • With Do not enforce security settings on masters there is no guarantee that the client master is even using the same security realm. As such, a username john may refer to different users. In some cases it may be possible to map, for example, a user ID to/from an email address. Alternatively there may be a few specific users for whom the CJOC administrator is prepared to define static mappings.

  • With either Single Sign-On (security realm only) or Single Sign-On (security realm and authorization strategy) we have a guarantee that the client master is using the same security realm and thus the usernames can be mapped.

The strategies are then broken down in terms of the level of trust the CJOC administrator is prepared to forward to the client master.

  • The highest level of trust assumes that the SYSTEM authentication of the client master can act as SYSTEM on CJOC.

  • An intermediate level of trust would be where the administration of a client master has been delegated to some of that client master’s users. Those users would thus have Jenkins/Administer on the client master but not on CJOC. As such they could use their super-user permissions as a route to gain additional permissions on CJOC. By selecting an authentication mapping that maps SYSTEM when coming from the client master into ANONYMOUS the CJOC administrator can effectively remove the risk. Typically with this mapping the user authentication can be trusted and thus an idempotent mapping of usernames (if the security realms are equivalent) would be appropriate.

  • The lowest level of trust is to map all authentication coming from the client master to the ANONYMOUS authentication. This level of trust is appropriate when the client master integrity is unknown to the CJOC administrator. For example, where a skunkworks project team wants their client master connected to CJOC and the CJOC administrator does not know what plugins that team will be installing.
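
The three trust levels above can be summarized as a simple mapping function. The following is an illustrative sketch only; the names are hypothetical and the real mapping is configured through the CJOC UI, not written as code.

```python
# Hypothetical sketch of the three trust levels described above.
SYSTEM, ANONYMOUS = "SYSTEM", "ANONYMOUS"

def map_authentication(auth, trust_level):
    """Map an authentication arriving from a client master onto CJOC."""
    if trust_level == "highest":
        # SYSTEM on the client master acts as SYSTEM on CJOC.
        return auth
    if trust_level == "intermediate":
        # Client-master administrators cannot escalate to CJOC's SYSTEM,
        # but ordinary user authentications are trusted as-is
        # (an idempotent mapping, if the security realms are equivalent).
        return ANONYMOUS if auth == SYSTEM else auth
    # Lowest trust: every authentication from the client master is anonymous.
    return ANONYMOUS
```
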

Note

When selecting any strategy other than the highest level of trust, it is almost guaranteed that you will need to install the Authorize Project plugin or an equivalent plugin on all client masters as well as on CJOC if any inter-client master operations will be required.

This is because the build jobs will be running as SYSTEM, and when their authentication is mapped through CJOC it will be converted to ANONYMOUS, which more than likely does not have the required permissions on the target client master.

Caution
Changing the authentication mapping strategy

The authentication mapping is installed into the remoting channel as the very first step of establishing the remoting connection between the client and CJOC.

For security reasons the authentication mapping of a connection cannot be updated while the connection is active.

If you reconfigure the authentication mapping, the new authentication mapping will only be picked up once the client master has been disconnected from and reconnected to CJOC.

Example Configurations

This section details some solutions to common configuration requirements. There are other ways to solve these configuration requirements, and there is no one “correct” solution for any of these, so it is better to think of these as examples of what can be done.

CJOC instance shared among multiple teams with read/administration access in CJOC

Background.

This example considers the case where multiple teams of users share access to CJOC. Only the administrator team has permission to perform any task in both the CJOC instance and the client masters. The rest of the teams will only use the CJOC instance as an intermediate element to log in.

In this example we have declared two client masters:

  • Master-1

  • Master-2

And four different teams:

  • Developers team A - will only have read access to the CJOC instance and will only be able to access Master-1. We will refer to this group as developer-team-A-group.

  • Developers team B - will only have read access to the CJOC instance and will only be able to access Master-2. We will refer to this group as developer-team-B-group.

  • Tester team - will only have read access to the CJOC instance and will be able to access Master-1 and Master-2. We will refer to this group as tester-team-group.

  • Admin team - will be able to administer the CJOC and client master systems. We will refer to this group as admin-team-group.

Sample solution.

  • 1. At CJOC ROOT level create the groups internal-oc-admin-group and internal-oc-read-group and assign the roles admin_role and read_role respectively.

  • 2. At Master-1 level create the internal-master-1-group and assign both the developer_role and the read_role.

  • 3. At Master-2 level create the internal-master-2-group and assign both the developer_role and read_role.

Table 8. Sample roles for a CJOC instance shared by different teams
Role Permissions

read_role

  • Overall | Read

  • Job | Read

admin_role

  • Overall | Administer

developer_role

  • Whatever permission you want to use

Table 9. Sample groups for an instance shared by different teams
Context Group Role(s) Member(s)

ROOT

internal-oc-admin-group

  • admin_role (current level/propagation)

All the administrator(s) of the CJOC instance

ROOT

internal-oc-read-group

  • read_role (current level/no propagation)

  • developers-team-A-group

  • developers-team-B-group

  • tester-team-group

Master-1

  • internal-master-1-group

  • developer_role (current level/propagation)

  • read_role (current level/no propagation)

  • developers-team-A-group

  • tester-team-group

Master-2

  • internal-master-2-group

  • developer_role (current level/propagation)

  • read_role (current level/no propagation)

  • developers-team-B-group

  • tester-team-group

Step by step set-up.

  • 1. At CJOC ROOT level create the groups internal-oc-admin-group and internal-oc-read-group and assign the roles admin_role and read_role respectively.

Create the following roles in the CJOC ROOT context:

  • read_role enabling the Overall\Read and Job\Read permission.

  • admin_role with the administration permission.

  • developer_role with whatever permissions you want to use.

Notice that the authenticated group still has the Overall/Administer permission, so that we do not lock ourselves out of the instance. Once the admin_role is assigned to a Jenkins internal group, we can restrict the permissions of the authenticated group as required.

From CJOC dashboard click on Roles→Manage in the left side panel. Then, create the following roles.

oc rbac img 001
Figure 77. Manage Roles

Create the two internal groups.

  • internal-oc-read-group which contains all the groups that should have read access to CJOC only in order to access the client masters.

oc rbac img 002

Assign the read_role granted at current level without propagation.

oc rbac img 003

Add the members of this group.

oc rbac img 004
  • internal-oc-admin-group which contains all the CJOC/client masters administrators.

oc rbac img 005

Assign the admin_role granted at current level with propagation.

oc rbac img 006

Add the members of this group.

oc rbac img 007

At this point, admin_role is fully set up. However, users with read_role will only be able to read the CJOC dashboard, not to see any client master. We still need the next step to sort this out.

  • 2. At Master-1 level create the internal-master-1-group and assign the read_role and the developer_role.

From CJOC dashboard, click on the Groups option in the dropdown menu of the Master-1.

oc rbac img 008

Add a new internal group defined in Master-1 called internal-master-1-group.

oc rbac img 009

Assign the role read_role granted at current level without any propagation and the developer_role with propagation this time.

oc rbac img 010

Add the members of this group:

oc rbac img 011
  • 3. At Master-2 level create the internal-master-2-group and assign the developer_role and read_role.

The same approach as in the previous step can be applied to Master-2, with the only difference that we now need to map developers-team-B-group and tester-team-group instead.

Do not forget to remove permissions from the Authenticated role for the configuration above to work properly.
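
The grant scopes used in the tables above (current level with propagation versus without propagation) can be modeled as a lookup along the context path. This is an illustrative sketch, not CloudBees RBAC plugin code; all names are hypothetical and the grants mirror Table 9.

```python
# Illustrative model of RBAC role propagation, not actual plugin code.
# A grant made "with propagation" applies to the context it is made in and
# every context nested below it; "without propagation" applies only to the
# context itself.
grants = [
    # (context, role, propagates)
    ("ROOT", "admin_role", True),
    ("ROOT", "read_role", False),
    ("ROOT/Master-1", "developer_role", True),
    ("ROOT/Master-1", "read_role", False),
]

def roles_in(context):
    """Return the roles effective in a given context path."""
    effective = set()
    for ctx, role, propagates in grants:
        if ctx == context or (propagates and context.startswith(ctx + "/")):
            effective.add(role)
    return effective
```

For example, a job folder nested under Master-1 inherits the propagated admin_role and developer_role, but not the read_role granted without propagation.
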

CJOC instance shared among multiple teams with read/administration access in CJOC, isolating teams into folders at client master level.

Background.

This example considers the case where multiple teams of users share access to CJOC. Only the administrator team has permission to perform any task in both the CJOC instance and the client masters. The rest of the teams will only use the CJOC instance as an element to log in. One team will only be able to see the folder they belong to.

In this example we have declared two client masters:

  • Master-1

    • developers-team-A-folder (Working folder for developers-team-A-group)

    • another-folder (accessible by everyone)

  • Master-2

And four different teams:

  • Developers team A - will only have read access to the CJOC instance and will only be able to access Master-1. In Master-1 these users will only see the folder developers-team-A-folder. We will refer to this group as internal-developers-team-A-group.

  • Developers team B - will only have read access to the CJOC instance and will only be able to access Master-2. We will refer to this group as internal-developers-team-B-group.

  • Tester team - will only have read access to the CJOC instance and will be able to access Master-1 and Master-2. We will refer to this group as internal-tester-team-group.

  • Admin team - will be able to administer the CJOC and client master systems. We will refer to this group as internal-admin-team-group.

Sample solution.

  • 1. At CJOC ROOT level create the groups internal-oc-admin-group and internal-oc-read-group and assign respectively the roles admin_role and read_role. Create the developer_role as well since it will be used later.

  • 2. At Master-1 create the internal group internal-developers-team-A-group and assign the read_role.

  • 3. At Master-1 create the internal group internal-master-1-group and assign the developer_role and the read_role.

  • 4. At Master-1/developers-team-A-folder level create the internal-developers-team-A-group and assign the developer_role.

  • 5. At Master-2 level create internal-master-2-group and assign the developer_role.

Table 10. Sample roles for a CJOC instance shared by different teams
Role Permissions

read_role

  • Overall | Read

  • Job | Read

admin_role

  • Overall | Administer

developer_role

  • Whatever permission you want to use

Table 11. Sample groups for an instance shared by different teams
Context Group Role(s) Member(s)

ROOT

internal-oc-admin-group

  • admin_role (current level/propagation)

All the administrator(s) of the CJOC instance

ROOT

internal-oc-read-group

  • read_role

  • developers-team-A-group

  • developers-team-B-group

  • tester-team-group

Master-1

  • internal-developers-team-A-group

  • read_role (current level/no propagation)

  • developers-team-A-group

Master-1

  • internal-master-1-group

  • developer_role (current level/propagation)

  • read_role (current level/no propagation)

  • tester-team-group

Master-1/developers-team-A-folder

  • internal-developers-team-A-group

  • developer_role (current level/propagation)

  • developers-team-A-group

  • tester-team-group

Master-2

  • internal-master-2-group

  • developer_role (current level/propagation)

  • developers-team-B-group

  • tester-team-group

Step by step set-up.

  • 1. At CJOC ROOT level create the groups internal-oc-admin-group and internal-oc-read-group and assign respectively the roles admin_role and read_role. Create the developer_role as well since it will be used later.

Create the following roles in the CJOC ROOT context:

  • read_role enabling the Overall\Read and Job\Read permission.

  • admin_role with the administration permission.

  • developer_role with whatever permissions you want to use.

Notice that the authenticated group still has the Overall/Administer permission, so that we do not lock ourselves out of the instance. Once we have assigned the admin_role to a Jenkins internal group, we can restrict the permissions of the authenticated group as required.

From CJOC dashboard click on Roles→Manage in the left side panel. Then, create the following roles.

oc rbac img 001
Figure 78. Manage Roles

Create the two internal groups:

  • internal-oc-read-group which contains all the groups that should have read access to CJOC only in order to access the client masters.

oc rbac img 002

Assign the read_role granted at current level without propagation.

oc rbac img 003

Add the members of this group.

oc rbac img 004
  • internal-oc-admin-group which contains all the CJOC/client masters administrators.

oc rbac img 005

Assign the admin_role granted at current level with propagation.

oc rbac img 006

Add the members of this group.

oc rbac img 007

At this point, admin_role is fully set up. However, users with read_role will only be able to read the CJOC dashboard, not to see any client master. We still need the next steps to sort this out.

  • 2. At Master-1 create the internal group internal-developers-team-A-group and assign the read_role.

From CJOC dashboard, click on the Groups option in the dropdown menu of the Master-1.

oc rbac img 008

Add a new internal group defined in Master-1 called internal-developers-team-A-group.

Assign the role read_role granted at current level without any propagation.

oc rbac img 012

Add the members of this group:

oc rbac img 013
  • 3. At Master-1 create the internal group internal-master-1-group and assign the developer_role and the read_role.

Perform exactly the same operations as in the previous step, but this time create a group called internal-master-1-group that includes tester-team-group, and assign it the read_role and the developer_role.

oc rbac img 010
  • 4. At Master-1/developers-team-A-folder level create the internal-developers-team-A-group and assign the developer_role.

Create the group internal-developers-team-A-group.

oc rbac img 014

Assign the developer_role granted at current level with propagation.

oc rbac img 015

Add the members of this group.

oc rbac img 016

Now, when any member of the internal-developers-team-A-group logs in to CJOC, they will only be able to access Master-1/developers-team-A-folder and none of the other elements in the instance.

  • 5. At Master-2 level create internal-master-2-group and assign the developer_role.

Following what was done in the previous steps, we should now be able to create the internal-master-2-group at Master-2 level and assign it the developer_role.

Do not forget to remove permissions from the Authenticated role for the configuration above to work properly.

CJOC instance shared among multiple teams with read/admin access in CJOC, delegating the administration of groups at client master level to non-admin teams.

Background.

This example considers the case where multiple teams of users share access to CJOC. Only the administrator team has permission to perform any task in both the CJOC instance and the client masters. The rest of the teams will only use the CJOC instance as an intermediate element to log in. Client master group admins will be able to administer role assignment inside the client master they belong to.

In this example we have declared two client masters:

  • Master-1

  • Master-2

And four different teams:

  • Developers team A - will only have read access to the CJOC instance and will only be able to access Master-1. We will refer to this group as developer-team-A-group.

  • Developers team B - will only have read access to the CJOC instance and will only be able to access Master-2. We will refer to this group as developer-team-B-group.

  • Groups administration team - will only have read access to the CJOC instance and will only be able to access Master-1, to configure groups inside the client master. We will refer to this group as m1-groups-admin-team-group.

  • Admin team - will be able to administer the CJOC and client master systems. We will refer to this group as admin-team-group.

Sample solution.

  • 1. At CJOC ROOT level create the groups internal-oc-admin-group and internal-oc-read-group and assign the roles admin_role and read_role respectively.

  • 2. At Master-1 level create the internal-master-1-group and assign both the developer_role and the read_role.

  • 3. At Master-1 level create the internal-m1-groups-admin-team-group and assign both the group_role and the read_role.

  • 4. At Master-2 level create the internal-master-2-group and assign both the developer_role and read_role.

Table 12. Sample roles for a CJOC instance shared by different teams
Role Permissions

read_role

  • Overall | Read

  • Job | Read

admin_role

  • Overall | Administer

developer_role

  • Whatever permission you want to use

group_role

  • Group | View

  • Group | Manage

  • Group | Configure

Table 13. Sample groups for an instance shared by different teams
Context Group Role(s) Member(s)

ROOT

internal-oc-admin-group

  • admin_role (current level/propagation)

All the administrator(s) of the CJOC instance

ROOT

internal-oc-read-group

  • read_role (current level/no propagation)

  • developers-team-A-group

  • developers-team-B-group

  • m1-groups-admin-team-group

Master-1

  • internal-master-1-group

  • developer_role (current level/propagation)

  • read_role (current level/no propagation)

  • developers-team-A-group

Master-1

  • internal-m1-groups-admin-team-group

  • group_role (current level/propagation)

  • read_role (current level/propagation)

  • m1-groups-admin-team-group

Master-2

  • internal-master-2-group

  • developer_role (current level/propagation)

  • read_role (current level/no propagation)

  • developers-team-B-group

Step by step set-up.

  • 1. At CJOC ROOT level create the groups internal-oc-admin-group and internal-oc-read-group and assign the roles admin_role and read_role respectively.

Create the following roles in the CJOC ROOT context:

  • read_role enabling the Overall\Read and Job\Read permission.

  • admin_role with the administration permission.

  • group_role enabling the Group\View, Group\Manage and Group\Configure permissions.

  • developer_role with whatever permissions you want to use.

Notice that the authenticated group still has the Overall/Administer permission, so that we do not lock ourselves out of the instance. Once the admin_role is assigned to a Jenkins internal group, we can restrict the permissions of the authenticated group as required.

From CJOC dashboard click on Roles→Manage in the left side panel. Then, create the following roles.

oc rbac img 017
Figure 79. Manage Roles

Create the two internal groups.

  • internal-oc-read-group which contains all the groups that should have read access to CJOC only in order to access the client masters.

oc rbac img 002

Assign the read_role granted at current level without propagation.

oc rbac img 003

Add the members of this group.

oc rbac img 019
  • internal-oc-admin-group which contains all the CJOC/client masters administrators.

oc rbac img 005

Assign the admin_role granted at current level with propagation.

oc rbac img 020

Add the members of this group.

oc rbac img 007

At this point, admin_role is fully set up. However, users with read_role will only be able to read the CJOC dashboard, not to see any client master. We still need the next step to sort this out.

  • 2. At Master-1 level create the internal-master-1-group and assign the read_role and the developer_role.

From CJOC dashboard, click on the Groups option in the dropdown menu of the Master-1.

oc rbac img 008

Add a new internal group defined in Master-1 called internal-master-1-group.

oc rbac img 009

Assign the role read_role granted at current level without any propagation and the developer_role with propagation this time.

oc rbac img 021

Add the members of this group:

oc rbac img 022
  • 3. At Master-1 level create the internal-m1-groups-admin-team-group and assign both the group_role and the read_role.

From CJOC dashboard, click on the Groups option in the dropdown menu of the Master-1.

oc rbac img 008

Add a new internal group defined in Master-1 called internal-m1-groups-admin-team-group.

oc rbac img 023

Assign the role read_role and the group_role both with propagation.

oc rbac img 024

Add the members of this group:

oc rbac img 025
  • 4. At Master-2 level create the internal-master-2-group and assign both the developer_role and read_role.

Following the previous steps, you should now be able to complete this step on Master-2. The main difference is that you will now need to map developers-team-B-group.

Do not forget to remove permissions from the Authenticated role for the configuration above to work properly.

Cluster-wide job triggers

Depending on the scheme used to organize an operations center cluster, it may be necessary to trigger jobs that are on a remote client master. For example, the team responsible for the production servers may want to trigger the QA team’s sanity tests against the staging environment servers before deploying to production. Given that the security concerns of the production servers can differ from those of the QA team, it may be that the QA team uses a different client master from the production team. In order to solve these types of problems the Operations Center Context plugin provides a build step and a post build action that can trigger jobs either on the root CloudBees Jenkins Operations Center or on any client master Jenkins instance in the Operations Center cluster.

Note

The ability to trigger jobs across masters within an operations center cluster has been added to Operations Center as an update to the 1.6 release. This functionality requires the following minimum plugin versions:

  • Client Masters must be running

    • operations-center-context 1.6.1 or newer

    • operations-center-client 1.6.1 or newer

  • Operations Center server must be running

    • operations-center-context 1.6.1 or newer

    • operations-center-server 1.6.4 or newer

    • operations-center-sso 1.6.1 or newer

If the operations center server has not been upgraded, then

  • client masters will be unable to trigger jobs on remote masters

If individual client masters have not been upgraded then

  • those client masters will be unable to trigger jobs on remote masters; and

  • other remote masters will be able to trigger jobs on the client masters with operations-center-context 1.6 but the blocking behaviour and parameterized trigger functionality will be silently unavailable for jobs running on the client masters with operations-center-context 1.6.
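
The minimum-version requirements above amount to a component-wise comparison of dotted version numbers. A sketch, with plugin IDs taken from the list above and version parsing simplified to plain dotted integers:

```python
# Minimum plugin versions for cross-master triggering (from the note above).
CLIENT_MASTER_MINIMUMS = {
    "operations-center-context": "1.6.1",
    "operations-center-client": "1.6.1",
}
SERVER_MINIMUMS = {
    "operations-center-context": "1.6.1",
    "operations-center-server": "1.6.4",
    "operations-center-sso": "1.6.1",
}

def parse(version):
    # Simplification: assumes plain dotted-integer versions such as "1.6.4".
    return tuple(int(part) for part in version.split("."))

def meets_minimums(installed, minimums):
    """True if every required plugin is installed at or above its minimum."""
    return all(
        plugin in installed and parse(installed[plugin]) >= parse(required)
        for plugin, required in minimums.items()
    )
```
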

The Trigger builds on remote projects build step

This build step allows you to intersperse job triggers with other build steps. This can be useful if the job requires a flow of operations. For example, you may have a job that resets test database instances. Using this build step you could trigger that job, specify the target database as a build parameter, wait for the reset job to complete, and then proceed with the build steps that actually run the tests against the test database.

To add the trigger just select the Trigger builds on remote projects option from the Add build step drop down.

adding builder
Figure 80. Adding a build step to trigger jobs across the operations center cluster

The initial configuration does not include any projects to trigger. You can trigger multiple downstream jobs from the same build step. All jobs in a trigger will be triggered in parallel. If you want to force sequential job triggers you will need to use multiple build steps.

builder added
Figure 81. An empty Trigger builds on remote projects build step

Clicking the Add button will allow you to select the job to trigger and configure the options on the trigger. The Add button can be used repeatedly if you have multiple jobs to trigger.

Note

All jobs in the same Trigger builds on remote projects build step are processed in parallel. Suppose a build step has two jobs, where the first is configured to wait until finished and mark the build as a failure if the downstream job fails, and the second is configured to trigger only when successful. Both jobs will be triggered even if the first downstream job fails, because the triggering occurs before the results of execution are known. However, if the first job is deleted, the On job missing configuration may mark the build as FAILED and prevent the second job from being triggered.

Configuring the downstream job

There are four things that must be specified for each downstream job:

  • Name - this uses a special path browser UI control to allow you to select any job visible to you throughout the Operations Center cluster.

    builder with job1
    Figure 82. Selecting a job to trigger
    builder with job2
    Figure 83. Selecting a job to trigger (continued)
    builder with job3
    Figure 84. Job to trigger selected
    Tip

    The path browser UI provides visual indicators of its state.

    Until an item has been selected, the control will be blue with no confirmation icon in the right hand side of the input box.

    path incomplete
    Figure 85. A job selection control where a selection has not been completed

    You can either navigate with the keyboard or mouse. Typing the name of the item can be used to select it or refine the list of items displayed. Additionally the entire / separated path can be pasted into the control.

    Once a valid selection has been made the control should turn green and a check mark icon should appear in the right hand side of the input box.

    path complete
    Figure 86. A job selection control where a selection has been completed

    When viewing/modifying the configuration of an existing job there may be occasions when either the communications path to the remote Jenkins is off-line or where the user viewing/modifying does not have permission to view the already selected downstream job. In such cases the control will display a partial path and a warning which provides a Retry link.

    In the event that the warning was shown because of an interrupted communications channel, the Retry link will restore the control’s functionality.

    As it is not possible to reveal whether the downstream job does not exist or whether the current user simply does not have permission to view it, the control will remain in a read-only state while displaying a warning.

    path segments offline
    Figure 87. A job selection control where some of the nodes to the selected item are currently off-line
  • Trigger When - this allows you to choose, based on the current build state, when the trigger should run:

    configure trigger when
    Figure 88. The available Trigger When options
    • Always will trigger the selected job irrespective of the current build result

    • Either unstable or successful will trigger the selected job if the current build has been marked as UNSTABLE or retains the initial build result of SUCCESS but will not trigger the selected job if a previous build step has marked the build as FAILED.

    • Only when successful will trigger the selected job only if the current build retains the initial build result of SUCCESS. If a previous build step marked the build as either UNSTABLE or FAILED then the job will not be triggered.

  • On job missing - this allows you to choose what should happen to the current build if the specified job cannot be located (i.e. if it has been moved or deleted).

  • Mode - this allows specifying any blocking behaviour with regards to the triggered job.

    configure blocking mode
    Figure 89. The available Mode options
    • Fire and forget does not wait and assumes that the job will be triggered. The build request will be sent with a 24h time-to-live, so as long as the Jenkins instance sending the trigger connects to the Operations Center server (in order for the request to be forwarded to the Operations Center server messaging transport) and the target Jenkins instance subsequently connects to the Operations Center server within that 24h window the job will be triggered.

      blocking mode fire and forget
      Figure 90. Fire and forget mode options
    • Wait until scheduled will wait, for a user configurable time period, for confirmation of the job being enqueued on the target Jenkins build queue. The build request will be sent with a time-to-live of the specified timeout. That is, if you specify a 5 minute timeout the build request will expire after 5 minutes, if you specify a 20 day timeout then the build request will expire after 20 days, etc.

      If the confirmation is not received within the timeout, then the On timeout action will be applied to the build.

      Tip
      If there are multiple requests to build the downstream job using this mode, they can be coalesced by the downstream Jenkins.
      blocking mode wait scheduled
      Figure 91. Wait until scheduled mode options
    • Wait until started will wait, for a user configurable time period, for confirmation that the target job has started building. The build request will be sent with a time-to-live of the specified timeout. That is, if you specify a 5 minute timeout the build request will expire after 5 minutes, if you specify a 20 day timeout then the build request will expire after 20 days, etc.

      If the confirmation is not received within the timeout, then the On timeout action will be applied to the build.

      Once confirmation is received the downstream build number will be recorded on the build log.

      Tip
      If there are multiple requests to build the downstream job using this mode, they can be coalesced by the downstream Jenkins.
      blocking mode wait started
      Figure 92. Wait until started mode options
    • Wait until finished will wait, for a user configurable time period, for confirmation that the target job has completed building. The build request will be sent with a time-to-live of 24h.

      If the build result is not received within the timeout, then the On timeout action will be applied to the build.

      When the build result is received, the downstream build number and the build result will be recorded on the build log.

      If the build result is UNSTABLE then the On unstable action will be applied to the build.

      If the build result is FAILED then the On failure action will be applied to the build.

      Tip
      If there are multiple requests to build the downstream job using this mode they will not be coalesced by the downstream Jenkins as the requirement to report back the tracking information ensures separate builds for each request.
      blocking mode wait finished
      Figure 93. Wait until finished mode options
    • Track progress and wait until finished allows complete control over monitoring the progression of the build. The Scheduled timeout controls how long to wait for the confirmation that the build is enqueued. This timeout also controls the time-to-live of the build request. If left blank then a 24h time-to-live will be used. In the event of the scheduled timeout expiring, the On scheduled timeout action will be applied to the build.

      The Started timeout controls how long to wait for confirmation of the downstream job starting. If confirmation is not received within the specified timeout then the On started timeout action will be applied to the build. If confirmation is received then the downstream build number will be recorded on the build log.

      Finally the Finished timeout controls how long to wait for confirmation that the target job has completed building.

      If the build result is not received within the timeout, then the On finished timeout action will be applied to the build.

      When the build result is received, the downstream build number and the build result will be recorded on the build log.

      If the build result is UNSTABLE then the On unstable action will be applied to the build.

      If the build result is FAILED then the On failure action will be applied to the build.

      Tip
      If there are multiple requests to build the downstream job using this mode, they will not be coalesced by the downstream Jenkins, as the requirement to report back the tracking information ensures separate builds for each request.
      blocking mode track progress
      Figure 94. Track progress and wait until finished mode options
Note

Each of the On …​ actions has four options:

  • Ignore and continue - which will leave the build result as is and continue.

  • Mark build as failure and continue - which will set the build result to FAILURE and continue.

  • Mark build as failure and stop - which will set the build result to FAILURE and try to stop the build without running any further build steps.

  • Mark build as unstable and continue - which will set the build result to UNSTABLE - unless it is already marked as FAILURE - and continue.

configure failure mode
Figure 95. Configuring failure modes

Triggering parameterized jobs

In addition to the four mandatory configuration options for each downstream job you also have the option of specifying the build parameters that the job will be supplied with when building.

Note

Parameters on a trigger for a non-parameterized downstream job will be stripped from the build request on receipt by the downstream Jenkins.

Similarly, any parameters that are not defined in a parameterized downstream job will be stripped from the build request.

This is to ensure that a trigger cannot maliciously manipulate the downstream build job’s environment as most build parameters get exposed as environment variables in the build run.

The Add parameters button can be used to add parameter value factories to the build request(s) for the downstream job. The parameter value factories are an extension point.

job parameters 1
Figure 96. Configuring downstream job parameters
job parameters 2
Figure 97. A configured downstream job parameter

The Operations Center Context plugin provides the following parameter value factories:

  • Boolean parameter produces a single parameter value which can be either true or false;

    job param boolean
    Figure 98. The configuration options for a Boolean parameter
  • Current build parameters produces a single set of parameters that are a subset of the triggering job’s build parameters. This allows you to propagate the triggering job’s build parameters to the triggered job.

    By default parameters which are defined as having sensitive values - such as password parameters - will be excluded but the Parameters with sensitive values option allows this to be configured.

    If you need to exclude some of the build parameters from the trigger use the Excluded parameters option. This takes a multi-line list of parameter names (one per line) which will be excluded from the build request. Wildcard matching is supported by using the * character.

    job param current
    Figure 99. The configuration options for Current build parameters
  • String parameter produces a single parameter value specified as a verbatim string constant.

    job param string
    Figure 100. The configuration options for a String parameter
  • Fan-out string parameter produces multiple sets of parameter values. This will result in multiple build requests of the downstream job. Each value is specified on a separate line in the Values option.

    job param fan out
    Figure 101. The configuration options for a Fan-out string parameter
Tip

A parameter value factory can produce multiple sets of parameter values. If there are multiple sets of parameter values then the downstream job will receive multiple build requests. Where there are multiple parameter value factories producing multiple sets of parameter values, a build request for every combination will be submitted.

By way of example, if there are the following build parameters defined:

  • A boolean parameter called ACTIVE with value true

  • A fan-out string parameter called X with values 1, 2 and 3

  • A string parameter called HOST with value staging

  • A fan-out string parameter called Y with values A, B and C

Then a total of 9 separate build requests would be made of the downstream job:

  • ACTIVE = true, X = 1, HOST = staging, Y = A

  • ACTIVE = true, X = 1, HOST = staging, Y = B

  • ACTIVE = true, X = 1, HOST = staging, Y = C

  • ACTIVE = true, X = 2, HOST = staging, Y = A

  • ACTIVE = true, X = 2, HOST = staging, Y = B

  • ACTIVE = true, X = 2, HOST = staging, Y = C

  • ACTIVE = true, X = 3, HOST = staging, Y = A

  • ACTIVE = true, X = 3, HOST = staging, Y = B

  • ACTIVE = true, X = 3, HOST = staging, Y = C

The Build other remote projects post-build action

This post-build action allows you to trigger jobs after a build has completed even in those cases where a build step marked the build as a failure and requested that the build stop immediately.

To add the trigger, just select the Build other remote projects option from the Add post-build action drop down.

adding publisher
Figure 102. Adding a post-build action to trigger jobs across the operations center cluster at the end of the build

The initial configuration does not include any projects to trigger. The available configuration options are identical to those of the Trigger builds on remote/local projects build step. You can trigger multiple downstream jobs from the post-build action. All jobs in the post-build action will be triggered in parallel.

builder added
Figure 103. A Build other remote projects post-build action added

The Trigger Remote Job Pipeline Step

As of operations-center-context version 1.8.0, it is possible to trigger remote jobs from a Pipeline job using the Remote Trigger Job step. The RemoteTriggerJob step will appear in the list of available Pipeline steps if the operations-center-context plugin (version 1.8.0 or newer) is installed on the Jenkins instance.

remoteTriggerJob workflowStep

All the parameters available in the remote trigger functionality explained above will be available in the Pipeline step, except for the when parameter which is not applicable for a Pipeline script.

remoteTriggerJob workflowStep parameters
Note

As with the configuration described in the previous section, each of the On…​ actions has four options, which in Pipeline scripts correspond to:

  • ContinueAsIs: which will leave the build result as is and continue.

  • ContinueAsFailure: which will set the build result to FAILURE and continue.

  • StopAsFailure: which will set the build result to FAILURE and try to stop the build without running any further build steps.

  • ContinueAsUnstable: which will set the build result to UNSTABLE - unless it is already marked as FAILURE - and continue.

The remote trigger Pipeline step takes the following parameters:

remotePathUrl

The URL of the remote job to trigger. This can have two formats:

  • jenkins://jenkins_instance_id/path_to_job: if the job to trigger is in the same instance where the workflow job is configured, it is possible to use . to indicate the current instance.

  • cjp://path_from_root_of_cjoc: the walking path to the job from the root of CJOC.

remotePathMissing

The behaviour to apply when the remotePathUrl parameter is missing. This can be any of the behaviours described in the Note above.

mode

The kind of blocking behaviour to use when waiting on the downstream job.

FireAndForget

does not wait and assumes that the job will be triggered. This is the default value and can be omitted.

remoteTriggerJob fireAndForget
ConfirmScheduled

will wait, for a user configurable time period, for confirmation of the job being enqueued on the target Jenkins build queue.

remoteTriggerJob confirmScheduled
timeout

how long to wait for confirmation that the downstream job has been enqueued on the target Jenkins build queue.

whenTimeout

what to do when the timeout expires; select the desired behaviour from the options described in the Note above.

ConfirmStarted

will wait, for a user configurable time period, for confirmation that the target job has started building.

remoteTriggerJob confirmStarted
timeout

how long to wait for confirmation that the downstream job has started building.

whenTimeout

what to do when the timeout expires; select the desired behaviour from the options described in the Note above.

AwaitResult

will wait, for a user configurable time period, for confirmation that the target job has completed building.

remoteTriggerJob awaitResult
timeout

how long to wait for the build result of the downstream job.

whenFailure

what to do when the downstream job fails; select the desired behaviour from the options described in the Note above.

whenTimeout

what to do in case of timeout; select the desired behaviour from the options described in the Note above.

whenUnstable

what to do when the downstream job is unstable; select the desired behaviour from the options described in the Note above.

TrackProgressAwaitResult

allows complete control over monitoring the progression of the build.

remoteTriggerJob trackResult
scheduledTimeout

How long to wait for confirmation that the build request of the downstream job has been accepted in the build queue.

startedTimeout

How long to wait for confirmation that the downstream job has started building.

timeout

How long to wait for the build result of the downstream job.

whenFailure

what to do when the downstream job fails; select the desired behaviour from the options described in the Note above.

whenScheduledTimeout

what to do when the scheduledTimeout expires; select the desired behaviour from the options described in the Note above.

whenStartedTimeout

what to do when the startedTimeout expires; select the desired behaviour from the options described in the Note above.

whenTimeout

what to do when the timeout expires; select the desired behaviour from the options described in the Note above.

whenUnstable

what to do when the downstream result is unstable; select the desired behaviour from the options described in the Note above.
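The modes and parameters above can be combined in a Pipeline script. The following sketch is illustrative only: the job paths are placeholders, and the exact syntax for nesting the mode options may vary by plugin version, so the authoritative call should be generated with the Pipeline snippet generator.

```groovy
node {
  // Fire and forget: trigger the downstream job and continue immediately.
  // FireAndForget is the default mode, so no mode needs to be specified.
  remoteTriggerJob remotePathUrl: 'jenkins://./path/to/downstream-job'

  // Track the downstream build through scheduling, starting and completion,
  // applying one of the behaviours from the Note above at each decision point.
  // (The parameter nesting shown here is an assumption; verify it with the
  // snippet generator before relying on it.)
  remoteTriggerJob remotePathUrl: 'cjp:///path/from/root/of/cjoc',
      mode: [$class: 'TrackProgressAwaitResult',
             scheduledTimeout: '5m',
             startedTimeout: '10m',
             timeout: '1h',
             whenScheduledTimeout: 'StopAsFailure',
             whenStartedTimeout: 'ContinueAsFailure',
             whenTimeout: 'ContinueAsFailure',
             whenFailure: 'StopAsFailure',
             whenUnstable: 'ContinueAsUnstable']
}
```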

Triggering jobs from metrics based alerts

The CloudBees Monitoring plugin provides the ability to monitor various metrics of Jenkins and raise alerts when those metrics deviate from user defined ranges. The default alerter that ships with the CloudBees Monitoring plugin is an email recipient that will send an email containing basic details of the alert. The Operations Center Context plugin also provides an additional recipient that can be used to trigger a remote project on alert state transitions.

Note

This functionality requires the following minimum plugin versions:

  • Jenkins instance being monitored must be running

    • operations-center-context 1.8 or newer

    • cloudbees-monitoring 2.0 or newer

  • Jenkins instance with the target job being triggered must be running

    • operations-center-context 1.7.0 or newer

If the job being triggered is on a different Jenkins instance from the Jenkins instance that is being monitored, then both Jenkins instances must be part of the same CloudBees Jenkins Operations Center cluster.

The notifier can be used as a global recipient as well as a metric specific recipient.

Some use-cases for this notifier could include:

  • Triggering a project that cleans up the workspaces on a master when the free disk space falls below a user defined threshold

  • Triggering a cluster operation to schedule a safe restart of a client master if the free memory has been below 10% for more than 1 hour

  • etc

alert notifiers
Figure 104. Adding a Trigger a build of a remote project global recipient to alerting

By default:

  • The recipient will only trigger projects when the alert condition becomes active.

  • If the project being triggered is parameterized the following parameter values will be provided by default:

    Note
    If the project being triggered does not have a parameter with the corresponding name then the parameter value will be ignored when the job is triggered.
    • SOURCE_JENKINS_URL which will contain a string value corresponding to the URL of the root of the Jenkins master from which the alert originates.

    • CONDITION_NAME which will contain a string value corresponding to the name of the condition that has triggered.

    • CONDITION_ACTIVE which will contain a boolean value where true indicates that the condition has just transitioned to active.

  • The job request will be submitted with a time-to-live of 1 hour. i.e. if the communications path between the source master and the destination master is temporarily off-line, the build request will be retained in the messaging queue for at most 1 hour.
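A job on the receiving end can declare parameters with these names and react to them. The following is a minimal sketch of such a parameterized Pipeline job; the remediation logic is a placeholder:

```groovy
// Pipeline for a job that defines SOURCE_JENKINS_URL (string),
// CONDITION_NAME (string) and CONDITION_ACTIVE (boolean) parameters.
node {
  if (params.CONDITION_ACTIVE) {
    echo "Alert '${params.CONDITION_NAME}' became active on ${params.SOURCE_JENKINS_URL}"
    // ... placeholder: perform the clean-up or remediation here ...
  } else {
    echo "Alert '${params.CONDITION_NAME}' cleared on ${params.SOURCE_JENKINS_URL}"
  }
}
```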

The default options can be customized using the advanced options.

alert advanced
Figure 105. The advanced options for Trigger a build of a remote project
Table 14. The configuration options available for Trigger a build of a remote project

Option

Description

Job to trigger

The job to trigger

Parameter name for source Jenkins URL

If specified then the job will be triggered with this named parameter having the value of this Jenkins master’s URL.

Tip
This parameter can be particularly helpful when triggering a job on a remote master.

Parameter name for condition name

If specified then the job will be triggered with this named parameter having this alert’s title.

Tip
This parameter can be particularly helpful when a single job receives multiple different alerts.

Parameter name for condition active

If specified then the job will be triggered with this named parameter indicating whether the alert is currently active or inactive.

Trigger the job when condition active

If selected then the job will be triggered when the alert transitions from inactive to active.

Trigger the job when condition inactive

If selected then the job will be triggered when the alert transitions from active to inactive.

Timeout for triggering the job

How long the build request for the job will remain valid.

If the request cannot be delivered to the downstream master within this timeout then the build request may be discarded.

If no units of time are specified then the value is assumed to be in seconds. You can specify the timeout in multiple units and the semantics of those units will be preserved, e.g. 1h 65s will not be transformed into 3665s. The valid units are:

d, day or days

days

h, hr, hrs, hour or hours

hours

m, min, mins, minute or minutes

minutes

s, sec, secs, second or seconds

seconds

ms, milli, millis, millisecond or milliseconds

milliseconds

The following are all valid timeouts (though they will be simplified on save)

  • 300 will be simplified to 300s i.e. 300 seconds

  • 5 minutes will be simplified to 5m i.e. 300 seconds

  • 4 minutes and 60 seconds will be simplified to 4m60s i.e. 300 seconds

  • 3 minutes and 120 seconds will be simplified to 3m120s i.e. 300 seconds

  • 45secs 3 minutes 75s will be simplified to 3m120s i.e. 300 seconds

Note
Form validation will display the simplified interpretation of the timeout

Cluster-wide copy artifacts

Depending on the scheme used to organize an operations center cluster, it may be necessary to copy artifacts from jobs that are on a remote client master. For example, the team responsible for QA may want to pull the artifacts to be tested from the development team’s server. To solve this class of problem, the Operations Center Context plugin provides a build step and a pipeline step that can copy artifacts from another job, either on the root CloudBees Jenkins Operations Center or on any client master Jenkins instance in the CJOC cluster.

Note

The ability to copy artifacts across masters within an operations center cluster has been added to Operations Center as an update to the 2.7 release. This functionality requires the following minimum plugin versions:

  • The source and target masters must be running

    • operations-center-context 2.7 or newer

If individual masters have not been upgraded then

  • those client masters will be unable to copy artifacts from remote masters; and

  • other remote masters will be unable to copy artifacts from jobs on those client masters.

The Copy archived artifacts from remote/local jobs build step

This build step allows you to copy archived artifacts from a remote (or local) job.

To add the build step, just select the Copy archived artifacts from remote/local jobs option from the Add build step drop down.

adding builder
Figure 106. Adding a build step to copy artifacts across the operations center cluster
builder added
Figure 107. A Copy archived artifacts from remote/local jobs build step added

The main configuration options are:

Source job

The job to copy. This uses the path browser UI component to allow you to navigate to the job you want to copy from.

Timeout (timeout)

How long to wait for the artifacts to be copied. If no units of time are specified then the value is assumed to be in seconds. You can specify the timeout in multiple units and the semantics of those units will be preserved, e.g. 1h 65s will not be transformed into 3665s. The valid units are:

d, day or days

days

h, hr, hrs, hour or hours

hours

m, min, mins, minute or minutes

minutes

s, sec, secs, second or seconds

seconds

ms, milli, millis, millisecond or milliseconds

milliseconds

The following are all valid timeouts (though they will be simplified on save)

  • 300 i.e. 300 seconds

  • 5 minutes will be simplified to 5m i.e. 300 seconds

  • 4 minutes and 60 seconds will be simplified to 4m60s i.e. 300 seconds

  • 3 minutes 120 seconds will be simplified to 3m120s i.e. 300 seconds

  • 45secs 3 minutes 75s i.e. 300 seconds

Form validation will display the simplified interpretation of the timeout

Build Selector

How to select the build to copy artifacts from, such as latest successful or stable build, or latest "keep forever" build. Other plugins may provide additional selections. See the section on Build selectors for details of the specific implementations available.

Includes

Relative paths to artifact(s) to copy or leave blank to copy all artifacts. This works just as a filter and doesn’t test whether all specified artifacts really exist. Check the /artifact/ browser of a build to see the relative paths to use here as the build page typically hides intermediate directories. You can use wildcards like target/checkout/**/target/*.zip and use comma to separate multiple entries. See the @includes of Apache Ant’s fileset for the exact format.

Note
May also contain references to build parameters like $PARAM or ${PARAM}.

Additional behaviours of the build step can be controlled from the Advanced button.

builder advanced
Figure 108. The Advanced options of a Copy archived artifacts from remote/local jobs build step
Target

The directory to copy the artifacts into. Only paths within the workspace are supported.

Name Mapping

Strategy that controls how artifact names are mapped when being copied. See the section on Artifact Name mappers for details of the specific implementations available.

Fingerprint artifacts

Fingerprint the artifacts and associate them with the build to enable the traceability features of Jenkins.

Ignore errors

There are a number of possible error conditions:

  • The Jenkins master containing the source job from which the artifacts will be copied may be off-line from Operations Center for longer than the timeout.

  • The Jenkins master containing the target job into which the artifacts will be copied (i.e. this job) may be off-line from Operations Center for longer than the timeout.

  • The copying of the artifacts themselves may take longer than the timeout.

  • There are no matching files to be found in the source job.

  • There may be no builds available that match the configured build selector.

  • The authentication that the target job is running as, after being mapped to an authentication on the source master, may not have permission to see the source job or may not have permission to copy the artifacts from the source job.

By default, if any of these issues are encountered, the target build result will be marked as a failure and the build stopped. Enable this option to ignore any such errors and continue the build.

There are two extension points that control the behaviour of the build step:

Build selector

The build selector is a strategy for determining the build from which the artifacts should be copied.

There are ten strategies provided by the Operations Center Context plugin. The strategy is an extension point which enables other plugins to provide their own implementations.

Tip

Where possible use the strategies provided by the Operations Center Context plugin. If you are implementing a custom build selector strategy in a custom plugin, the strategy instance is serialized and sent to the remote master over a remoting channel that does not load classes from the sender. This means that if you want to use a custom strategy not provided by the Operations Center Context plugin you must ensure that the plugin providing the custom strategy is installed on all masters that jobs will be copied between.

The built-in strategies are:

Last successful build

This strategy will find the most recent completed build with a status of either Stable or Unstable.

Last stable build

This strategy will find the most recent completed build with a status of Stable.

Upstream triggering build

This strategy will look at the causes of the job to determine if it has been triggered by the source job.

The search will only include the graph of jobs on the master.

The search will match job triggering using either the Cluster-wide job triggers functionality provided by Operations Center Context or the regular Upstream causes built in to Jenkins itself.

There is a configuration option to fall back to the last successful build if no upstream cause can be identified.

It is possible that multiple builds of the same upstream job may have been coalesced into a single run of the job using this build step. By default, the highest triggering build number will be used, but this can be configured to pick the lowest triggering build number instead.

Build identified by number

This strategy will pick the exact specified build number only.

Tip
May also contain references to build parameters like $PARAM or ${PARAM}.
Last build

This strategy will pick the most recent build of the source job.

Warning
This can include builds in progress, which may or may not have finished archiving artifacts.
Last completed build

This strategy will pick the most recent completed build of the source job.

Warning
This can include failed and aborted builds, which may or may not have finished archiving artifacts.
Last failed build

This strategy will pick the most recent failed build of the source job.

Warning
This can include builds which failed before archiving artifacts.
Last unstable build

This strategy will find the most recent completed build with a status of Unstable.

Last unsuccessful build

This strategy will pick the most recent non-stable build of the source job.

Warning
This can include builds which were aborted or failed before archiving artifacts.
Build identified by a permalink

This strategy will pick the most recent build returned by one of the source job’s permalinks.

Tip
This step is most useful for permalinks contributed by plugins, such as the Promoted Builds plugin.
Note
For the standard permalinks, it is preferable to use the corresponding build selector rather than this permalink strategy.

Artifact name mapper

The artifact name mapper is a strategy for deciding the names of the artifacts when they are copied.

There are three strategies provided by the Operations Center Context plugin. The strategy is an extension point which enables other plugins to provide their own implementations.

Tip

Where possible use the strategies provided by the Operations Center Context plugin. If you are implementing a custom artifact name mapper strategy in a custom plugin, the strategy instance is serialized and sent to the remote master over a remoting channel that does not load classes from the sender. This means that if you want to use a custom strategy not provided by the Operations Center Context plugin you must ensure that the plugin providing the custom strategy is installed on all masters that jobs will be copied between.

The built-in strategies are:

No mapping

This strategy will use the path name that the artifacts were archived with.

Flatten all directories from name

This strategy will strip the path component from the path name that the artifacts were archived with and just use the artifact base name.

Remove leading directories from name

This strategy takes a parameter which is the number of directory segments to remove from the path name. If the archived artifact has fewer directory segments than the number to remove then the artifact’s base name will be used.

The Copy archived artifacts from remote/local jobs pipeline step

The Operations Center Context plugin adds a pipeline step for copying artifacts from remote / local jobs: copyRemoteArtifacts.

The step returns a list of strings corresponding to the names of the files that were copied.

In most cases the default options will be sufficient and the step can be invoked as:

Basic usage of copyRemoteArtifacts step
node {
  copyRemoteArtifacts 'jenkins://cafebabedeadfeefcafebabedeadfeef/path/to/job'
}

To control where the artifacts are copied, just wrap the copyRemoteArtifacts step in a dir step, e.g.

Copying the artifacts into a specific directory
node {
  dir ("subdir") {
    copyRemoteArtifacts "jenkins://cafebabedeadfeefcafebabedeadfeef/path/to/job"
  }
}

Similarly, errors can be ignored by wrapping using the standard idioms for pipeline scripts.

For more advanced control of options, it is recommended to generate the configuration using the snippet generator.

snippet generator
Figure 109. The snippet generator for the copyRemoteArtifacts step.

The configuration options are:

Source job (from)

The job to copy from.

This is a string value which can be of type: jenkins://instanceID/path

  • jenkins:// the prefix of the remote path URI schema.

  • instanceID the Jenkins Instance ID of the remote instance where to trigger the job or . to indicate the current instance.

  • path the path to the item

or of type cjp:///path/from/root/of/cjoc

  • cjp:// the prefix of the remote path URI schema.

  • path/from/root/of/cjoc the walking path from the root of CJOC

The CJP protocol can only be used against a Trusted client master. See Authentication mapping for more information.

Timeout (timeout)

How long to wait for the artifacts to be copied. If no units of time are specified then the value is assumed to be in seconds. You can specify the timeout in multiple units and the semantics of those units will be preserved, e.g. 1h 65s will not be transformed into 3665s. The valid units are:

d, day or days

days

h, hr, hrs, hour or hours

hours

m, min, mins, minute or minutes

minutes

s, sec, secs, second or seconds

seconds

ms, milli, millis, millisecond or milliseconds

milliseconds

The following are all valid timeouts (though they will be simplified on save)

  • 300 i.e. 300 seconds

  • 5 minutes will be simplified to 5m i.e. 300 seconds

  • 4 minutes and 60 seconds will be simplified to 4m60s i.e. 300 seconds

  • 3 minutes 120 seconds will be simplified to 3m120s i.e. 300 seconds

  • 45secs 3 minutes 75s i.e. 300 seconds

Form validation in the snippet generator will display the simplified interpretation of the timeout

Build Selector (selector)

How to select the build to copy artifacts from, such as latest successful or stable build, or latest "keep forever" build. Other plugins may provide additional selections. See the section on Build selectors for details of the specific implementations available.

Includes (includes)

Relative paths to artifact(s) to copy or leave blank to copy all artifacts. This works just as a filter and doesn’t test whether all specified artifacts really exist. Check the /artifact/ browser of a build to see the relative paths to use here as the build page typically hides intermediate directories. You can use wildcards like target/checkout/**/target/*.zip and use comma to separate multiple entries. See the @includes of Apache Ant’s fileset for the exact format.

Name Mapping (mapper)

Strategy that controls how artifact names are mapped when being copied. See the section on Artifact Name mappers for details of the specific implementations available.

Fingerprint artifacts (fingerprint)

Fingerprint the artifacts and associate them with the build to enable the traceability features of Jenkins.
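Combining the options above, a more fully specified invocation might look like the following sketch. The instance ID and paths are placeholders, and the concrete syntax for the selector and mapper strategies should be generated with the snippet generator rather than written by hand.

```groovy
node {
  // Copy matching artifacts and capture the returned list of copied file names.
  def copied = copyRemoteArtifacts from: 'jenkins://cafebabedeadfeefcafebabedeadfeef/path/to/job',
      timeout: '5m',
      includes: 'target/**/*.zip',
      fingerprint: true
  echo "Copied ${copied.size()} file(s): ${copied}"
}
```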

Tips and tricks

This section details some specific use cases and the recommended solutions for implementing them using the cluster-wide copy artifacts functionality.

Matrix to Matrix copying

I have two matrix jobs. Both matrix jobs have the same axes configurations. I want to copy the artifacts produced by the first job into the second job. I only want to copy the artifacts produced by the matching axes.

We will suppose there are two axes: the operating system and the CPU architecture.

The Operating System is the first axis with name OS. The CPU architecture is the second axis with name ARCH.

We want to configure the test job to copy artifacts from the build job. We want to copy:

  • the OS=linux,ARCH=i386 artifacts to OS=linux,ARCH=i386

  • the OS=linux,ARCH=x86_64 artifacts to OS=linux,ARCH=x86_64

  • the OS=win,ARCH=i386 artifacts to OS=win,ARCH=i386

To do this, we need to:

  1. Select the matrix aggregator for the build as the source

  2. Specify the include pattern with the prefix OS=${OS}/ARCH=${ARCH}/. For example, instead of *.zip we would use OS=${OS}/ARCH=${ARCH}/*.zip.

  3. Specify the Remove leading directories from name artifact name mapping strategy with 2 directories to be stripped - because we have two axes.

matrix to matrix
Figure 110. Matrix to matrix copying of artifacts

Copying Maven Job type artifacts

I want to copy the automatically archived artifacts from a multi-module Maven job type build

The automatically archived artifacts from a multi-module Maven job type are exposed using the following path scheme: groupId/artifactId/ where groupId is the groupId of the module and artifactId is the artifactId of the module. The file names are the file names used by the Maven job type when archiving the files in each individual module.
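Given that path scheme, the artifacts of a single module can be selected with an includes filter. In the sketch below, com.example and my-module are placeholder Maven coordinates and the job path is illustrative:

```groovy
node {
  // Copy only the artifacts archived by the com.example:my-module module
  // of a remote multi-module Maven job.
  copyRemoteArtifacts from: 'jenkins://./path/to/maven-job',
      includes: 'com.example/my-module/*'
}
```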

Move/Copy/Promote

Introduction

Starting with the Operations Center Context plugin version 1.7.1, CloudBees Jenkins Operations Center (CJOC) and CloudBees Jenkins Enterprise (CJE) provide enhanced Move and Copy operations for Jobs, Folders and other items. This functionality enhances and complements the facility, added in CJOC 1.7, to "promote" job configurations from one location to another, either between client masters or within the same client master.

Note

The promote functionality is currently limited to Freestyle job types only.

The open source version of Jenkins does not have the Folders plugin installed by default. Without any folders to move jobs between, it does not make sense to have a Move concept. When you install the Folders plugin in Jenkins, it adds a Move operation to all items that can be moved.

legacy action
Figure 111. The move operation provided by the Folders plugin.

Because the open source version of Jenkins does not have the ability to communicate with other Jenkins masters, you cannot Move items between masters.

The open source version of Jenkins has limited capabilities for copying jobs. Again, in the absence of folders and when limited to a single master, the major use case for copying a job is to create a new job with the same configuration as an existing job.

That use case is provided for by open source Jenkins using New Item ▸ Copy an existing Item. When you install the Folders plugin in Jenkins, there are some use cases for copying a job with its entire build history. Typically those use cases are solved by the Jenkins administrator manually copying the job within the JENKINS_HOME directory structure and restarting Jenkins.

With the introduction of CJOC, however, there now are a lot more use cases for Move and Copy operations:

  • Moving jobs between masters in order to distribute the load between multiple masters

  • Copying a job from one master to another to verify that the job and its associated history is transferred correctly before removing the original

  • Copying an example folder with jobs to seed a project initiation

  • Copying a job from a production master to a test master in order to assess the impact of a plugin upgrade.

Within the above context there is also an additional set of use cases for customers who have stricter change control processes. Such customers will typically allow users to configure a job in one environment and then move the changes to that job to production. In essence they are configuring the job and then promoting the configuration of that job towards production (typically through a test environment).

CJOC 1.7 introduced the Promote Configuration operation to support this last use case. Due to the nature of a "promotion" operation, support has to be specifically developed for each job type. At the time of writing, configuration promotion support has only been implemented for Freestyle jobs.

With Operations Center Context version 1.7.1, the Promote Configuration operation has been subsumed by the new Move/Copy/Promote operation. This new operation also subsumes the Move operation that is provided by the Folders plugin.

Note
When a move is within the current master, the actual move itself is performed by the Folders plugin but the User Interface is provided by the Operations Center Context plugin.
new action
Figure 112. The Move/Copy/Promote operation provided by the Operations Center Context plugin, replacing the Move operation provided by the Folders plugin.

Bear in mind the following before you proceed:

  • Upgrade all client masters to at least Operations Center Context 1.7.1 before using the move/copy/promote functionality between client masters.

  • If you attempt to move/copy/promote to a Jenkins instance that does not have at least version 1.7.0 of the Operations Center Context plugin, the destination will not respond to the validation messages and the originating master may wait for up to one hour before timing out and reporting a validation failure.

  • To be able to move an item from one master to another, you need to update the client-master security section on CJOC, setting the authentication mapping to one of the trusted with compatible security realm options (both restricted master with compatible security realm and trusted master with compatible security realm will work). You can choose the one you need on the JENKINS_URL/configureSecurity page.

  • If you attempt to move/copy anything other than a single free-style job to a Jenkins instance that does not have at least version 1.7.1 of the Operations Center Context plugin, the operation will fail on the receiving end.

Moving, Copying, or Promoting Items Using the Web UI

To move/copy/promote a job, folder, or other item, browse to the item and select the Move/Copy/Promote action from the menu. You will see a screen similar to the following:

initial screen
Figure 113. The initial screen. The default destination reflects the current location of the item to be moved (in this case a folder called "widgets")

Note the following about this screen:

  • The path browser defaults to the current location of the item.

  • The path browser always corresponds to the location that will contain the item. If there is already an item with the same name at the selected destination, then the move operation fails.

  • The three buttons at the bottom of the screen reflect your capabilities for the current item. For example, if you do not have the permissions to delete the item, then the Move button is disabled (as a move is conceptually a copy followed by a delete). Similarly, if support for promotion of the specific type of item has not been written, then the Promote button will be disabled.

Using the path browser you can navigate to the location you want to move/copy/promote the item to.

Note
This functionality is available even on a standalone Jenkins master, although in such cases the operation is limited to that master.

Once the correct destination has been selected, click on the desired operation:

  • The Move button will move the current item with all its build history to the new location. The original item will be removed on confirmation of the item having been successfully received at the destination.

  • The Copy button will copy the current item with all its build history to the new location. The original item with all its build history will also be retained at its original location.

  • The Promote button will copy the configuration without any build history of the item to the destination. If an item of the same name already exists at the destination, that item’s configuration will be replaced by the source item’s configuration. The build history of the source item and any pre-existing destination item will remain unchanged by the promotion.

The first thing that will happen is that Jenkins will attempt to validate the operation. A series of checks is performed; which checks run depends on the type of operation, the type of item, and whether the destination is on a different Jenkins instance. The checks are sent to the destination master for validation.

validation in progress
Figure 114. Initial validation checks in progress
  • If the destination master reports all checks were successful then the operation will be submitted to the Jenkins build queue.

    operation in progress 1
    Figure 115. A move/copy/promote request that has completed validation successfully and been accepted into the build queue automatically.
  • If the destination master reports warnings for some checks, then a confirmation screen will be shown:

    validation warnings
    Figure 116. A move/copy/promote request where the validation identified some problems (in this case there are some missing plugins and the destination master is running an older version of Jenkins)

    The user can then decide whether to continue the operation or abort.

    Note
    If the user elects to continue the operation, all warnings will be ignored.
  • If the destination master reports errors for some checks, then a confirmation screen will be shown:

    validation errors
    Figure 117. A move/copy/promote request where the validation indicates that the request will fail (the error handling defaults to the safe option of "Stop on warnings or errors" to prevent accidental selection of the Continue button)

    This screen differs from the case of warnings as the user must explicitly select the error handling behaviour. The default error handling on this screen is to stop for warnings or errors.

    Caution

    If validation errors are reported then the operation will almost certainly not succeed.

    In certain cases it may be possible to copy or promote by ignoring warnings or ignoring both errors and warnings.

    Do not attempt a move operation when ignoring errors and warnings unless you have a current backup of the item being moved, because when errors are ignored the move operation will always delete the source item.

operation in progress 2
Figure 118. A move/copy/promote operation running post-validation checks.

The move/copy/promote operations are injected into the Jenkins build queue as they can be potentially long running operations:

  • Move operations need to ensure that the job is not currently building and also that the job does not try to start building during the move operation.

  • Move operations acting on a folder need to ensure that all jobs contained in the folder are not currently building.

  • Copy and move operations may involve sending significant quantities of data between masters; this can take quite some time if there are a lot of large archived artifacts.

    operation in progress 3
    Figure 119. A move/copy/promote operation waiting for confirmation that the files have been received by the destination.
  • Any of the move/copy/promote operations, when interacting with a remote master, need to allow for the possibility that the destination master may experience a temporary network outage during the operation.

Note
Once the operation has been submitted to the Jenkins build queue, the progress dialog can be closed if the user wants to do other things.

When the operation completes, the progress dialog will report the results.

operation success
Figure 120. A successfully completed move/copy/promote request. Clicking Close will redirect the browser to the destination item.
operation failure
Figure 121. A failed move/copy/promote request. If the destination item exists then clicking Close will redirect the browser to the destination item, otherwise you will be returned to the source item.

If the destination item exists then the user will be taken to the destination item when they close the progress dialog.

Moving, Copying, or Promoting Items Using the Jenkins CLI

The Operations Center Context plugin also adds three commands to the Jenkins CLI:

  • cjp-move-item

  • cjp-copy-item

  • cjp-promote-item

All three of these commands use the same syntax (where __ stands for move, copy, or promote):

java -jar jenkins-cli.jar -s JENKINS_URL cjp-__-item /path/to/source/item uri/of/destination/item

They also support the same command line options:

-q

Submit the request and return immediately.

-w

Wait until the request has started.

-v

Display verbose output of progress.

-e IGNORE_ALL | IGNORE_WARNINGS | IGNORE_NONE

Select the error handling mode.
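For example, the options can be combined to queue a copy, ignore any validation warnings, and return immediately (the URLs and item paths here are purely illustrative):

```
java -jar jenkins-cli.jar -s https://jenkins.dev.example.com/ cjp-copy-item /widgets/blue jenkins://./widgets-archive -q -e IGNORE_WARNINGS
```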

The source item is specified by providing the path to the item on the source master. This is a normal path for selecting items using the Jenkins CLI.

The destination folder is specified by providing a URI for the destination. At present there are two supported URI schemes:

  • jenkins://instanceid/path/to/folder.

    Tip
    If you are referencing a path on the same master as the source you can use jenkins://./path/to/folder
  • cjp:///path/from/root/of/cjoc

In general, all of these paths can be constructed by joining the names of each item. Most Jenkins items support using a display name that differs from the actual name by which the item is known, which can complicate determining the actual path or URI.

The Operations Center Context plugin adds a hidden action to all items: /platform-uri/, which returns the URI of the item.

For example, if you use a web browser and navigate to the destination folder, the browser location will be something like:

https://jenkins.dev.example.com/job/widget/job/blue

By changing the location to:

https://jenkins.dev.example.com/job/widget/job/blue/platform-uri/

We will get back the jenkins:// URI for the "Jenkins » Widgets » Blue" folder.

If we change the location to

https://jenkins.dev.example.com/job/widget/job/blue/platform-uri/?scheme=cjp

We will get back the cjp:/// URI for that folder. On a standalone Jenkins master that would be cjp:///widget/blue but if the master is joined to CJOC then the path will be relative to the root of CJOC, for example: cjp://masters/dev/widget/blue.
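The mapping from an item path to its /platform-uri/ browser URL can be sketched as a small helper function. Note that platform_uri_url is a hypothetical helper written for illustration, not part of the plugin, and the URLs are the example ones used above:

```shell
# Hypothetical helper: build the /platform-uri/ URL for an item from the
# master's base URL and the item's path segments. Each path segment becomes
# a /job/<segment> component in the browser URL.
platform_uri_url() {
  local base="$1" path="$2" scheme="$3"
  local url="$base"
  local IFS='/'
  local seg
  for seg in $path; do
    url="$url/job/$seg"
  done
  url="$url/platform-uri/"
  if [ -n "$scheme" ]; then
    url="$url?scheme=$scheme"
  fi
  printf '%s\n' "$url"
}

# Example: the "widget/blue" folder on the development master.
platform_uri_url "https://jenkins.dev.example.com" "widget/blue" "cjp"
# prints: https://jenkins.dev.example.com/job/widget/job/blue/platform-uri/?scheme=cjp
```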

Note

cjp:// URIs are much easier to work with, but can be broken by moving the client masters within CJOC.

jenkins:// URIs can be cumbersome to work with but have the advantage that they are not affected by moving the master within CJOC as the master is looked up using its instance id.

Example: Promoting a Job from one Master to Another

First we need to determine the URI of the destination. In this case we use the curl command:

$ curl http://jenkins.prod.example.com/job/widgets/job/blue/platform-uri/?s\
cheme=cjp
cjp:///masters/production/widgets/blue

$

Now that we have the URI of the destination we can use the CLI command to promote the job configuration:

$ java -jar jenkins-cli.jar -s http://jenkins.dev.example.com/ cjp-promote-\
item /widgets/blue/deploy cjp:///masters/production/widgets/blue
Started masters » dev » widgets » blue » deploy ↑ masters » production » widget
s » blue » deploy
Completed masters » dev » widgets » blue » deploy ↑ masters » production » widg
ets » blue » deploy : SUCCESS

$

We can roll the two steps into one using $(curl …):

$ java -jar jenkins-cli.jar -s http://jenkins.dev.example.com/ cjp-promote-\
item /widgets/blue/deploy $(curl -L http://jenkins.prod.example.com/job/wid\
gets/job/blue/platform-uri/?scheme=cjp)
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                               Dload  Upload   Total   Spent    Left  Speed
0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
0     0    0    39    0     0   3296      0 --:--:-- --:--:-- --:--:--  3296
Started masters » dev » widgets » blue » deploy ↑ masters » production » widget
s » blue » deploy
Completed masters » dev » widgets » blue » deploy ↑ masters » production » widg
ets » blue » deploy : SUCCESS

$

We can ask for detailed progress using -v:

$ java -jar jenkins-cli.jar -s http://jenkins.dev.example.com/ cjp-promote-\
item /widgets/blue/deploy cjp:///masters/production/widgets/blue -v
Started masters » dev » widgets » blue » deploy ↑ masters » production » widget
s » blue » deploy
Promotion Starting at: 14/08/15 09:12 triggered by: anonymous
Source: masters » dev » widgets » blue » deploy
Destination: masters » production » widgets » blue » deploy
Error Handling: IGNORE_NONE
        Running pre-flight checks...
        Pre-flight checks: PASS
        Sending files...
        Tray created: DumbWaiterTray{shelf='c033b4a3-1462-4b54-a2f2-10be3c57e30
e', urls=[]}
        Tray is prepared: DumbWaiterTray{shelf='c033b4a3-1462-4b54-a2f2-10be3c5
7e30e', urls=[]}
        Requesting post-validation...
        Post-flight Validation completed.
        Performing Backup
        Backup completed
        Running promotion...
        Promotion completed
        Tidying up...
        Disposing of tray: DumbWaiterTray{shelf='c033b4a3-1462-4b54-a2f2-10be3c
57e30e', urls=[]}

Promotion finished with success at 14/08/15 09:12.
Completed masters » dev » widgets » blue » deploy ↑ masters » production » widg
ets » blue » deploy : SUCCESS

$

We could request that the command returns as soon as the operation starts, rather than the default of waiting for the operation to complete, using the -w option:

$ java -jar jenkins-cli.jar -s http://jenkins.dev.example.com/ cjp-promote-\
item /widgets/blue/deploy cjp:///masters/production/widgets/blue -w
Started masters » dev » widgets » blue » deploy ↑ masters » production » widget
s » blue » deploy

$

We could ask for the operation to be queued only using the -q option:

$ java -jar jenkins-cli.jar -s http://jenkins.dev.example.com/ cjp-promote-\
item /widgets/blue/deploy cjp:///masters/production/widgets/blue -q

$

We can also control the error handling behaviour. By default the operations will abort if there are any errors or warnings. When transitioning between masters it is not uncommon to find plugin version differences between the masters. Plugin version skew will typically manifest as a WARNING:

$ java -jar jenkins-cli.jar -s http://jenkins.dev.example.com/ cjp-promote-\
item /widgets/blue/deploy cjp:///masters/production/widgets/blue -v
Started masters » dev » widgets » blue » deploy ↑ masters » production » widget
s » blue » deploy
Promotion Starting at: 14/08/15 09:29 triggered by: anonymous
Source: masters » dev » widgets » blue » deploy
Destination: masters » production » widgets » blue » deploy
Error Handling: IGNORE_NONE
        Running pre-flight checks...
        Pre-flight checks: WARNING
Aborting
        Tidying up...

Promotion finished with problems at 14/08/15 09:29
Completed masters » production » widgets » blue » deploy ↑ masters » dev » widg
ets » blue » deploy : FAILURE

$

The Jenkins Administrator can inspect the detailed logs of the request using Jenkins ▸ Manage Jenkins ▸ Move/Copy/Promote History on the source Jenkins instance.

detailed history
Figure 122. The detailed logs of a failed promote operation accessed via Jenkins ▸ Manage Jenkins ▸ Move/Copy/Promote History

In this case there are some missing plugins.

Tip

There are some plugins that store side information about a project in a job property.

Typically, these can be identified by examining the job configuration or the config.xml of the job itself.

Note

A job that is deploying things may actually be using the Deployer framework plugin to do the deployment.

While, in general, this specific plugin may inject a mostly unused property into jobs, you need confirmation that this deploy job is not actually using the Deployer framework plugin.

The easiest way to get that confirmation would be to consult with the creator / maintainer of the job. If the creator / maintainer is unavailable or unsure of the answer we can determine the answer by inspecting the configuration of the job.

We take a look at the deploy job’s configuration:

example job config
Figure 123. Example deploy job’s configuration

In this case, the job is using the Copy artifacts from another project and Execute shell build steps. It doesn’t seem likely that this job is actually using the Deployer framework plugin.

We take a look at the deploy job’s raw config.xml just to get final confirmation:

example job raw config
Figure 124. Example deploy job’s config.xml

Looking at the config.xml we can see that the main functionality is actually provided by three steps:

  • The SSH Agent plugin provides a build wrapper that makes the deploy-key credentials available. This uses a credential with an explicitly specified ID, so that development can provide a credential for their server while production can, using the same job configuration, provide the production credentials on the production server.

  • The Copy Artifacts plugin copies the widget from the upstream build job into the workspace.

  • The Execute shell build step uses scp to copy the built widget to the ${TARGET_HOST} - which is presumably an environment variable set on a parent folder so that development can test against a non-production server.
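The Execute shell step described above might contain something like the following (the artifact name and destination path are assumptions for illustration; only the use of scp and ${TARGET_HOST} comes from the job itself):

```
# Copy the widget artifact (fetched by the Copy Artifacts step) to the target server.
# TARGET_HOST is expected to be defined as an environment variable on a parent folder.
scp widget.war deploy@"${TARGET_HOST}":/opt/widgets/
```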

There is a property in the configuration file from the Deployer framework plugin, and that is what is causing the warning. The property itself, though, does not seem to contain any significant configuration:

  • The single click deployment functionality has been recorded as disabled.

  • There are no default targets for the Deploy Now action.

After analysis, we conclude that this job is safe to promote into production even with the reported warnings.

$ java -jar jenkins-cli.jar -s http://jenkins.dev.example.com/ cjp-promote-\
item /widgets/blue/deploy cjp:///masters/production/widgets/blue -v -e IGNO\
RE_WARNINGS
Started masters » dev » widgets » blue » deploy ↑ masters » production » widget
s » blue » deploy
Promotion Starting at: 14/08/15 10:03 triggered by: anonymous
Source: masters » dev » widgets » blue » deploy
Destination: masters » production » widgets » blue » deploy
Error Handling: IGNORE_WARNINGS
        Running pre-flight checks...
        Pre-flight checks: WARNING
        Sending files...
        Tray created: DumbWaiterTray{shelf='ba906833-1e1b-4e85-8f9c-5477f50865d
5', urls=[]}
        Tray is prepared: DumbWaiterTray{shelf='ba906833-1e1b-4e85-8f9c-5477f50
865d5', urls=[]}
        Requesting post-validation...
        Post-flight Validation completed.
        Performing Backup
        Backup completed
        Running promotion...
        Promotion completed
        Tidying up...
        Disposing of tray: DumbWaiterTray{shelf='ba906833-1e1b-4e85-8f9c-5477f5
0865d5', urls=[]}

Promotion finished with success at 14/08/15 10:03.
Completed masters » dev » widgets » blue » deploy ↑ masters » production » widg
ets » blue » deploy : SUCCESS

$

Resolving Common Issues

The following issues may be encountered:

Moving templates

If you move a template from one master to another master, jobs on the source master that were based on the template become disconnected from the template, just as if you had deleted the template on a standalone Jenkins master.

Moving items based on templates

If you move a job based on a template from one master to another, the job on the destination master will become disconnected from the template, just as if you had deleted the template on a standalone Jenkins master.

Plugin version skew

The most common warning is from plugin version skew, i.e. where the destination master has an older version of a plugin used on the source master.

Typically such skew will not be a major issue, as the general principle under which plugins are developed is that you should be able to roll back a plugin upgrade without losing configuration.

If a plugin maintainer has to break that contract, they are expected to set a flag in the plugin’s manifest that triggers a warning in the update center, e.g.

update center warning
Figure 125. Warning about a plugin upgrade where the plugin’s configuration may not be retained if the upgrade is rolled back.

The Deployer framework plugin is not installed on the receiving Jenkins

The Deployer framework plugin can often show up as a missing plugin.

For most jobs, the Deployer framework plugin stores a cache of whether the job has any archived artifacts that can be auto-detected as deployable with any of the installed deployment engines. The cache is required to ensure that the views render quickly, as the Deploy Now button should only be rendered for jobs that have at least one artifact that can actually be deployed.

deploy now view
Figure 126. A view with two jobs. The most recent build of the build job has archived artifacts that can be deployed now, and as a result the Deploy Now button is visible beside the Build Now button. The deploy job does not have any archived artifacts, and therefore there is nothing for the Deployer framework plugin to deploy from that job.
Note

The best way to know if a job is actually using the Deployer framework plugin is to ask the people using the job.

If none of them know, then it is probably safe to conclude that they are not using it.

If you have any remaining doubt, to confirm whether a job is actually using the Deployer framework you need to check three things:

  1. There are no Deploy applications build steps

    deploy apps build step
    Figure 127. A Deploy applications build step.
  2. There is no Deploy applications step in the Post-build Actions

    deploy apps publisher
    Figure 128. A Deploy applications post-build action.
  3. The Deploy Now defaults do not contain any significant configuration. The most reliable indicator of significant configuration is if the One-Click Deployment option has been enabled, as that is normally only selected when the defaults have been configured with a valid configuration.

    If the One-Click Deployment option has not been selected, check whether the Advanced button indicates that there is edited configuration within (i.e. the notepad icon to the left of the button).

    deploy now edited
    Figure 129. An Advanced button where the values differ from the default configuration.

    Different deployer engines will inject a sample starting configuration, so it will often be necessary to expand the Advanced button.

    deploy now expanded
    Tip
    If none of the Host service entries contain an Application then the Deploy Now functionality has not been configured.

    The deployer engines may auto-detect applications; for example, the AWS deployer engine knows that .war files are supported for deployment onto Elastic Beanstalk. In order to simplify the use of the Deploy Now action, it will configure an empty application if there is a .war archived artifact, e.g.

    deploy now aws autodetect
    Figure 130. An auto-detected application entry

    In such cases, the lack of customization of the application configuration (for example, no Application Name or Version specified, the default hint of BucketName/ in the S3 Bucket, etc.) points to this configuration either not being significant or being trivial to reconstruct.

Pipelines

Because a large part of the definition of a Pipeline can be stored in source control, it may not be possible for Jenkins to determine the full list of plugins that are used by the actual Pipeline definition.

This is typically not a serious issue: any explicit Jenkins configuration in the Pipeline job (i.e. the parts of the configuration that would be discarded if saved on a Jenkins instance missing a required plugin) is covered by the validation checks, so in the worst case the Pipeline job can simply be moved to a master that has the plugins required by the Pipeline script.

Update Center Plugin

Introduction

The Update Center plugin gives a Jenkins administrator the ability to host their own Jenkins Update Center for the Jenkins instance(s) that they administer. This update center will provide the Jenkins Administrator with the ability to both restrict the plugins available and host their own custom plugins.

The Update Center plugin was introduced in CloudBees Jenkins Enterprise 12.04.

Creating an Update Center

Update centers can be created just like regular jobs. Go to New Job, enter a name for the update center and choose Update Center as the job type (Creating a new Update Center).

uc create
Figure 131. Creating a new Update Center

There are a number of options that you can configure for the update center (Configuring an Update Center):

  • Plugin Versioning Strategy - this strategy is applied whenever a new plugin (i.e. one where there are no existing versions installed) is first installed in the update center. There are two strategies. The first will not publish any version of the plugin until the user explicitly selects a version to publish. The second will always publish the newest version of the plugin that is installed in the update center. See Version number rules for a description of how version numbers are compared by Jenkins.

  • Signature provider - Jenkins versions from 1.424 onwards expect that the update center metadata is signed by a known certificate. Select the certification signature provider to use for signing the update center metadata.

  • Upstream sources - by selecting upstream sources to pull updates from, you can track other update centers. This also makes installing plugin versions easier, as the plugins do not have to be manually downloaded onto your computer and then uploaded into the update center.

  • Maintenance tasks - allows control of periodic tasks such as downloading new versions of plugins installed in the update center, or installing new plugins from the upstream sources into the update center.

uc config
Figure 132. Configuring an Update Center

Using the update center

The main update center screen consists of a number of tabs.

Info tab

uc info
Figure 133. Main update center information screen

This screen displays some basic information about the update center:

  • How much disk space is being used by the update center

  • What the update center’s URL is

Warning

There are only two occasions when it is appropriate to use the installer plugin:

  • You need to configure Operations Center itself to use a self-hosted update center.

  • You need to make plugins available to a Jenkins OSS instance that cannot access any external update centers in order to install the Operations Center plugins required to join it into Operations Center using the CloudBees Jenkins Enterprise converter plugin.

    This requires a full understanding of the plugin requirements, and it is usually a better option to use a CloudBees Jenkins Enterprise distribution of Jenkins instead.

Most users will be running a CloudBees Jenkins Enterprise distribution of Jenkins which already has the required plugins bundled.

Once a Jenkins instance is joined to the Operations Center cluster you should use the Specify update center for Master option to push the update center configuration to the master.

uc connected master property
Figure 134. The recommended way to configure a master connected to Operations Center to use a custom update center.
Important

The plugin that the link downloads will access the update center via the URL that the link corresponds to. It is therefore best to access the link from the machine that the plugin will be installed on, to ensure that the URL used for accessing the update center is correct from that machine.

Note

The plugin that the link downloads will ensure that the matching certificate checks are performed, i.e. if the Update Center is using a Self-Signed Certificate to sign the update-center.json file, the plugin will include the corresponding certificate chain in order to ensure the update-center.json’s authenticity.

Similarly, if the Update Center is not using any Certificates to sign the update-center.json then the plugin will instruct Jenkins not to require a signed update-center.json for that specific update center, irrespective of the Jenkins instance’s global certificate requirements.

There are two ways to install the plugin in a Jenkins instance:

  • Using the Web Interface

    1. From the Update Center, download the Update center installer plugin. (Ideally downloading it via a browser running on the same machine as the Jenkins instance that will be using the Update Center as its update center)

    2. Go to the Jenkins instance’s root page.

    3. If the Jenkins instance has security enabled, log in as a user that has the Overall/Administer permission.

    4. Select the Manage Jenkins link on the left-hand side of the screen.

    5. Select the Manage Plugins link.

    6. On the Advanced tab use the Upload Plugin option to upload the plugin.

      uc install plugin
      Figure 135. Uploading the Update Center Installer Plugin into a Jenkins instance
    7. Check the Installed tab. If the Jenkins instance says that it requires a restart, then restart the Jenkins instance.

      Note
      Jenkins versions up to at least 1.455 require a restart to activate the plugin.
  • Using the Command Line Interface

    1. From the Update Center, download the Update center installer plugin. (Ideally you should download it via a browser running on the same machine as the Jenkins instance that will be using the Update Center as its update center.)

    2. Stop the Jenkins instance.

    3. Move (or copy) the downloaded plugin into the $JENKINS_HOME/plugins/ directory

    4. Start the Jenkins instance.
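On a typical Linux installation, these steps might look like the following (the service name, plugin file name, and JENKINS_HOME path are assumptions; adjust them for your installation):

```
# Stop Jenkins, drop the downloaded plugin into the plugins directory, and start Jenkins again.
sudo systemctl stop jenkins
sudo cp update-center-installer.hpi /var/lib/jenkins/plugins/
sudo systemctl start jenkins
```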

Core tab

uc core install
Figure 136. The Core tab of the Update Center screen

This screen provides details of Jenkins distributions.

The Promoted Version section lists all the versions of Jenkins that are installed in the update center and allows for one specific version to be advertised via the Update Center’s update-center.json.

The Upstream Versions section lists any versions of Jenkins, coming from the Update Center’s upstream sources, that have not been installed in the Update Center.

Note

Only versions of Jenkins which have been installed in the update center can be advertised via the Update Center’s update-center.json. This is to ensure that the Update Center can continue to serve even if the upstream source is off-line.

To install a version of Jenkins from the upstream sources, just click on the corresponding Store button and the download will be added to the download queue.

uc core downloading
Figure 137. A core distribution being downloaded into the Update Center

Plugins tab

uc plugins
Figure 138. The Plugins tab of the Update Center screen

This screen provides details of all the Plugins known to this Update Center, i.e. all the plugins advertised by the update center’s upstream sources as well as all the plugins installed in the update center.

Clicking on a Plugin’s name will display the Plugin information screen

uc plugin install
Figure 139. The plugins details screen when no version of a plugin is installed in the update center
uc plugin config
Figure 140. The plugins details screen with a version of the plugin installed in the update center

The Promoted Version section lists all the versions of the plugin that are installed in the update center and allows for one specific version to be advertised via the Update Center’s update-center.json.

The Upstream Versions section lists any versions of the plugin, coming from the Update Center’s upstream sources, that have not been installed in the Update Center.

Note

Only versions of plugins which have been installed in the update center can be advertised via the Update Center’s update-center.json. This is to ensure that the Update Center can continue to serve even if the upstream source is off-line.

To install a version of a plugin from the upstream sources, just click on the corresponding Store button and the download will be added to the download queue.

Tools tab

uc tools
Figure 141. The Tools tab of the Update Center screen

This screen provides details of all the Tool installers known to this Update Center, i.e. all the tool installers advertised by the update center’s upstream sources.

Clicking on a Tool installer’s name will display the list of versions of the Tool installer that are available.

The Update Center plugin does not currently support hosting private versions of Tool installers.

Upload Core tab

uc upload core
Figure 142. The Upload Core tab of the Update Center screen

This screen provides for manual uploading of custom distributions of Jenkins into the update center.

Upload Plugin tab

uc upload plugin
Figure 143. The Upload Plugin tab of the Update Center screen

This screen provides for manual uploading of custom plugins into the update center.

Maintenance

There are two maintenance tasks available:

  • Pull new versions

  • Pull everything

Pull new versions

This maintenance task is used when you just want to follow a small set of plugins. When this task runs it performs the following steps:

  1. Checks all the upstream sources for any versions of the core Jenkins distribution that are not installed in the update center. If any versions are found, they will be queued for download.

    If Core is set to track the Latest version and one of the downloaded versions is newer than the current latest version, then this will result in the newer version being immediately published once the download has been completed. (See Version number rules for a description of how version numbers are compared by Jenkins.)

    If Core is set to either None or a specific version, then no change to the published version will take place.

  2. Checks all the upstream sources for any versions of each plugin with at least one version currently installed in the update center. If any versions are found, they will be queued for download.

    If any of the plugins are set to track the Latest version and one of the downloaded versions is newer than the current latest version, then this will result in the newer version being immediately published once the download has been completed. (See Version number rules for a description of how version numbers are compared by Jenkins.)

    If a plugin is set to either None or a specific version, then no change to the published version will take place.

Pull everything

This maintenance task is used when you want to track everything in the update center’s upstream sources. When this task runs it performs the following steps:

  1. Checks all the upstream sources for any versions of the core Jenkins distribution that are not installed in the update center. If any versions are found, they will be queued for download.

    If Core is set to track the Latest version and one of the downloaded versions is newer than the current latest version, then this will result in the newer version being immediately published once the download has been completed. (See Version number rules for a description of how version numbers are compared by Jenkins.)

    If Core is set to either None or a specific version, then no change to the published version will take place.

  2. Checks all the upstream sources for any versions of all plugins published by the upstream source. If any versions are found, they will be queued for download.

    If the plugin is not currently installed in the update center, the Plugin Versioning Strategy (see Creating an Update Center) will determine which version, if any, is promoted.

    If any of the plugins are set to track the Latest version and one of the downloaded versions is newer than the current latest version, then this will result in the newer version being immediately published once the download has been completed. (See Version number rules for a description of how version numbers are compared by Jenkins.)

    If a plugin is set to either None or a specific version, then no change to the published version will take place.

Reference Information

Version number rules

Jenkins compares plugin versions using the following strategy:

  • Each version number is split into segments separated by '-' and '.' characters.

  • Starting with the first segment of each version number, segments are compared until a difference is encountered.

  • If both segments are numeric, they are compared as numbers; otherwise they are compared as strings. A few qualifier strings get special treatment: if a segment starts with 'a', 'b' or 'm' and the remainder of the segment is a number, it is compared as if the letter were expanded to 'alpha-', 'beta-' or 'milestone-' respectively. Additionally, the qualifiers 'snapshot'; 'alpha'; 'beta'; 'milestone'; 'rc' or 'cr'; '' or 'final' or 'ga'; and 'sp' sort in that sequence.

  • Where the values of a segment are the same, if the preceding separator was '.' then that comes before '-'.
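As an illustration of these rules (a hand-worked example, not output from Jenkins itself), qualifier variants of a hypothetical version 1.2 sort in the following ascending order:

```shell
# Ascending order of some sample version numbers under the rules above
# (hand-worked illustration; 1.2 with no qualifier ranks as 'final'/'ga'):
ORDER='1.2-alpha-1
1.2-beta-1
1.2-milestone-1
1.2-rc-1
1.2
1.2-sp1'
echo "$ORDER"
```

Note that 1.2-a1 would sort alongside 1.2-alpha-1, per the single-letter expansion rule.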

Other tasks

Removing a custom update center as an updates source from a Jenkins instance

If you want to remove a custom update center as an updates source, you have to do one of the following:

  • Disable it in the client master configuration page on CJOC, if you pushed the configuration from there (this is the suggested and preferred way).

  • Uninstall or disable the custom update center installer plugin and restart your Jenkins instance, if instead you configured the custom update center by installing the custom update center installer plugin on the client master.

The list of update centers will not be modified on disk as a result of the change, but the custom update center will no longer be loaded into the list of update centers and you will see the following warning

WARNING: Failed to resolve class com.thoughtworks.xstream.mapper.CannotResolveClassException: custom

in your logs until the list of update centers is re-saved to disk.

Tutorial

Caution

This tutorial is only designed to provide a quick overview of CloudBees Jenkins Operations Center and the CloudBees Jenkins Platform.

DO NOT USE THIS DOCUMENTATION TO SET UP ANY ACTUAL USER FACING SERVER.

This chapter details a verified walk-through setting up a CloudBees Jenkins Operations Center (CJOC) server with two attached client masters all running on a single machine. The resulting system can be used to explore the core features of CJOC, however neither the specific architecture nor the exact configuration are recommended for a production instance.

The aim of this tutorial is to enable exploration of the features with the minimal footprint. As a result of this aim, a number of compromises have been made and some technical choices were forced. The reader should also note that some of the specific critical deficits of the walk-through architecture include:

  • All of the Java processes are running as root. This is not recommended for production systems.

  • There is a single point of failure for every node in the cluster, which results from running everything on a single machine.

  • The haproxy configuration is not optimized and returns 503 error codes during failover and startup, because it uses an absolutely minimal haproxy.cfg in order to illustrate the minimum requirements. A production haproxy.cfg file would include backup servers to serve informational content during HA switchover and startup. A production environment would also expose additional monitoring information to production monitoring tools.

    The use of haproxy to load balance between instances was forced by the requirement that this tutorial work on a single machine.

Finally, because the tutorial is designed with the aim of producing a local functioning CJOC cluster, there are rather specific requirements.

System Requirements

The system can be run on a standalone machine or within a virtual machine.

CPU architecture

x86 64-bit

Memory

4GB

Disk space

20GB

Operating system

CentOS 7.0 x86_64

Network connectivity

Must be able to connect to CloudBees servers, either directly or via a proxy

Note
While network connectivity to CloudBees servers is not a general requirement for running CJOC, in order to reduce complexity, this tutorial makes that assumption.

Operating system installation

The tutorial assumes that you install CentOS from the CentOS 7.0 x86_64 Live CD. This tutorial used the following Live CD

$ sha1sum CentOS-7-x86_64-LiveGNOME-1503.iso
aada24a7cf0b6f521ae5fb3d55c545e6d31bfc01  CentOS-7-x86_64-LiveGNOME-1503.iso

Once the Live CD has started, select the Install to Hard Drive option

01
Figure 144. Install to Hard Drive

On the welcome screen, select the language and keyboard to use for the installation.

02
Figure 145. Welcome with language and keyboard selection
03
Figure 146. The initial installation summary screen

It is mandatory to select the installation destination.

04
Figure 147. Installation destination

Verify that network time synchronization is enabled on the Date and time settings screen

Note
If the system is running as a virtual machine, it is critically important that you set up date and time synchronization, either over the network or (ideally) using the virtualization provider’s built-in tooling.
05
Figure 148. Date and time (verifying the default "Network Time" is on)

On the Network and Hostname screen, specify jenkins.localdomain

06
Figure 149. Network and Hostname

Once you have configured these screens you can select the Begin installation button.

07
Figure 150. Installation summary screen after configuration
08
Figure 151. Initial installation in progress

While the initial installation is in progress you can set the root password

Note
Select a root password that complies with your standard policy for root passwords.
09
Figure 152. Root password

Create a local user account after setting the root password

Note
The tutorial does not make any assumptions about the username or password of the regular user created during the initial installation. However, it does assume that you just create a local user and avoid configuring a Network Login.
10
Figure 153. Creating a user
11
Figure 154. Installation complete

Close the installer and restart the system.

12
Figure 155. Restart
13
Figure 156. Booting…​

When the system has booted, you will need to complete the Initial setup steps. These screens can retain their default values.

14
Figure 157. Initial setup

Operating system and proxy configuration

Login as root

17
Figure 158. Login screen
18
Figure 159. Logging in as root after selecting Not listed?

Start a terminal

19
Figure 160. Terminal menu item
20
Figure 161. Terminal open
Note
If running in a virtual machine and your virtual machine provider has dedicated drivers, install those drivers and reboot the operating system if necessary before continuing past this point.

Execute the following command

yum install -y java-1.8.0-openjdk-devel haproxy firewalld
21
Figure 162. Command entered
22
Figure 163. Command completed successfully

This CentOS distribution already comes with OpenJDK 7 installed, so we need to change the default to OpenJDK 8.

Execute the following command:

alternatives --config java
22a
Figure 164. Preparing to change the default java version
Note

The current selection (1) should show up as java-1.7.0-openjdk, and OpenJDK 8, which we just installed, should show up as selection 2. If the selections differ from this expectation, make the necessary adjustments.

Type 2 to select java-1.8.0-openjdk, followed by the ENTER key.

22b
Figure 165. Changing the default java version

To confirm that the Java version has been switched, execute the following command:

java -version

The output should be similar to the following screenshot. The important part is that the version number starts with 1.8.0

22c
Figure 166. Confirmation that the default java version has been changed to OpenJDK 1.8.0

Replace the /etc/haproxy/haproxy.cfg file with the following:

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor except 127.0.0.0/8
    option                  redispatch
    option                  abortonclose
    retries                 3
    maxconn                 3000
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           500
    default-server          inter 5s downinter 500 rise 1 fall 1
backend bk-ops
    balance roundrobin
    option                  httpchk HEAD /operations-center/ha/health-check
    server                  operations-center-1 127.0.0.1:8080 check
    server                  operations-center-2 127.0.0.1:8081 check
backend bk-j-a
    balance                 roundrobin
    option                  httpchk HEAD /jenkins-a/ha/health-check
    server                  jenkins-a-1 127.0.0.1:8082 check
    server                  jenkins-a-2 127.0.0.1:8083 check
backend bk-j-b
    balance                 roundrobin
    option                  httpchk HEAD /jenkins-b/ha/health-check
    server                  jenkins-b-1 127.0.0.1:8084 check
    server                  jenkins-b-2 127.0.0.1:8085 check
backend bk-def
    redirect                prefix /operations-center
frontend fr-http
    bind *:80
    reqadd                  X-Forwarded-Proto:\ http
    acl                     is_ops path_beg /operations-center
    acl                     is_j_a path_beg /jenkins-a
    acl                     is_j_b path_beg /jenkins-b
    use_backend             bk-ops if is_ops
    use_backend             bk-j-a if is_j_a
    use_backend             bk-j-b if is_j_b
    default_backend         bk-def
23
Figure 167. Editing the file
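Before starting the service, you can optionally check the edited file for syntax errors; haproxy's -c flag validates a configuration file without actually running the proxy:

```shell
# Validate the configuration without starting haproxy (-c = check mode).
# Guarded so the command is skipped on machines where haproxy is not installed.
if command -v haproxy >/dev/null 2>&1; then
    haproxy -c -f /etc/haproxy/haproxy.cfg
fi
```

A valid file reports "Configuration file is valid"; any parse error is reported with its line number.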

Start the haproxy service with the command

$ systemctl start haproxy
24
Figure 168. Starting haproxy

Verify that haproxy is operational. Using a web browser, attempt to access the URL http://127.0.0.1/ and verify that you are redirected to http://127.0.0.1/operations-center/ and that this returns a 503 error (because we have not installed CloudBees Jenkins Operations Center yet).

25
Figure 169. Expected redirect and 503
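The same verification can be done from a terminal with curl, assuming haproxy is listening on port 80 (-I fetches response headers only):

```shell
# Expect a redirect (302) to /operations-center/ from the frontend, and a
# 503 from the backend while no CJOC instance is running yet.
curl -sI http://127.0.0.1/ | head -n 1
curl -sI http://127.0.0.1/operations-center/ | head -n 1
```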

Disable selinux and open the required firewall ports with the following commands:

$ setenforce 0
$ firewall-cmd --permanent --add-service=http
$ firewall-cmd --permanent --add-port=8080-8085/tcp
$ firewall-cmd --reload
$ systemctl restart haproxy
26
Figure 170. Firewall configured
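You can confirm that the firewall rules took effect with the standard firewalld query commands:

```shell
# Query the active zone's configuration; guarded so the commands are
# skipped on machines where firewalld is not installed.
if command -v firewall-cmd >/dev/null 2>&1; then
    firewall-cmd --list-services   # should include: http
    firewall-cmd --list-ports      # should include: 8080-8085/tcp
fi
```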

At this point you should be able to replicate the haproxy test from outside the virtual machine (if you do not know the machine’s address you can find it using the ifconfig command)

27
Figure 171. Expected redirect and 503 from outside

Installing CloudBees Jenkins Operations Center Server

Because all services will be running on one machine, we will use the .WAR form of CJOC. Normally when running on CentOS it would be recommended to use the RPM-based installer, but that would conflict with trying to run an HA configuration on a single machine.

Run the following commands as root

$ mkdir ~/operations-center
$ cd ~/operations-center
$ curl -L -O https://downloads.cloudbees.com/cjoc/rolling/war/2.7.19.1/jenkins-oc.war
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  182M  100  182M    0     0   946k      0  0:03:17  0:03:17 --:--:-- 1252k
Note

The URL specified will download the latest version of CloudBees Jenkins Operations Center server at the time of writing, which should result in the most consistent user experience with respect to this tutorial.

For production, we recommend using the latest release version of CJOC.

At this point we will start the first instance of our CJOC server cluster.

$ JENKINS_HOME=/root/operations-center java -jar jenkins-oc.war --httpPort=8080 --prefix=/operations-center
32
Figure 172. CloudBees Jenkins Operations Center instance 1 startup in progress

Startup can take a minute or two but is completed once the line Jenkins is fully up and running has been output.

33
Figure 173. Startup completed (with the Jenkins is fully up and running log message 12 lines from the bottom of the console output)

At this point, you should be able to access the unlock screen at http://ip-address/operations-center. To unlock Jenkins, use the unlock code printed in the startup logs (or read it directly from the file referenced on the unlock screen):

33a
Figure 174. Unlock screen.
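If you prefer the terminal, the unlock code can be read from the file referenced on the unlock screen. With JENKINS_HOME=/root/operations-center as used above, the standard location is shown below (the file only exists after the first startup, hence the guard):

```shell
# Print the initial unlock code. The path follows from the JENKINS_HOME
# value used to launch the instance earlier in this tutorial.
SECRET="${JENKINS_HOME:-/root/operations-center}/secrets/initialAdminPassword"
if [ -f "$SECRET" ]; then
    cat "$SECRET"
fi
```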

After Jenkins is unlocked, you are automatically redirected to the registration screen, which looks similar to the following:

34
Figure 175. Registration screen

Click on the Request a trial license button.

34a
Figure 176. Completing the evaluation request details

The next screen is plugin selection. At this stage you can click Install suggested plugins to install the recommended list of plugins, or choose which plugins to install by clicking Select plugins to install.

34b
Figure 177. Plugins selection wizard
34c
Figure 178. Plugins installation progress screen

Once the plugin installation finishes, you can create the first administrator user (recommended), or you can skip this step and continue as admin. In that case, the admin password is the same one you entered to unlock Jenkins.

34d
Figure 179. First administrator user creation form

Finally click on Restart and wait for CJOC to be up again.

At this point we will ensure that the initial configuration of CJOC is correct.

Log in with the user you created in the previous step (or the admin user if you have not), then select the Manage Jenkins » Configure System screen

Hover over the configuration item in the breadcrumbs bar. A small down arrow should appear. Click on the arrow and select Jenkins Location from the menu that is displayed.

37a
Figure 180. Jumping to Jenkins Location on the Configure System screen

Ensure that the Jenkins Location » Jenkins URL setting is using the public IP address of the Jenkins server. Click the Save button.

38
Figure 181. Jenkins Location » Jenkins URL

At this point we can start the backup instance and verify that HA is functioning.

Start a second Terminal and run the following commands:

$ cd ~/operations-center
$ JENKINS_HOME=/root/operations-center java -jar jenkins-oc.war --httpPort=8081 --prefix=/operations-center

Once the line Elected as a backup node has been logged, the HA cluster is complete.

40
Figure 182. Backup node started (with Elected as a backup node as the 6th last line in the active terminal)

Go to http://ip-address/operations-center/ha. Select the Instead of exiting the primary Jenkins JVM, restart the Jenkins JVM. checkbox and click the Start Fail-over button

41
Figure 183. Starting a fail-over
42
Figure 184. Primary switch-over
43
Figure 185. New primary starting up

Note: Depending on how slow your machine is, you may see a 503 timeout, because the haproxy configuration has no backup static content to cover certain critical sections of Jenkins startup, during which the /ha/health-check URL will block responses for longer than the simple timeout we configured haproxy with.

44
Figure 186. New primary started

You can browse to Manage Jenkins » High Availability Status to verify that the active process has changed.

45
Figure 187. HA status after fail-over

Now we will configure security. Select Manage Jenkins » Configure Global Security

Enable security is selected by default, and you can see that it is using Jenkins' own user database.

Then, do the following:

  1. Under Access Control/Authorization:

    • select the Role-based matrix authorization strategy; it will offer a drop-down of import strategies:

    • on that drop-down, select Typical initial setup (ignoring existing authorization strategy)

      51
  2. Select Client master on-master executors » Enforce and set the # of executors to 0

  3. Select Single Sign-On (security realm and authorization strategy) for the Client master security » Security Setting Enforcement option.

  4. Change the Authentication mapping option to Trusted master with equivalent security realm.

    53b
    Note

    The authentication mapping option controls how Jenkins user identities are converted when performing remote operations within the cluster.

    The mapping that most users will assume is in place is the Trusted master with equivalent security realm mapping which basically means that there is a bidirectional identity mapping between CJOC and the client masters. This type of mapping makes sense when you trust the client masters.

    If you are concerned about a client master being compromised - or more typically in cases where the client master is managed by a team known for running skunkworks projects - you may not want to assume that the Jenkins SYSTEM identity from a client master can perform operations on CJOC as SYSTEM.

    The Restricted master with equivalent security realm option thus uses a uni-directional mapping of the SYSTEM identity so that CJOC can send operations to the client master as SYSTEM but any attempts by the client master to send operations to CJOC as SYSTEM will be mapped to the ANONYMOUS identity. This mapping allows users to trigger builds across the cluster as themselves.

    For client masters that are completely untrusted, the Untrusted master with equivalent security realm option allows for a completely unidirectional mapping such that operations from CJOC will have an identity mapping to the client master but operations from the client master will be mapped to ANONYMOUS. The side-effect of selecting such a mapping strategy is that the client master will only be able to trigger jobs on other cluster masters if those jobs can be triggered by the ANONYMOUS user.

    If you are not using the Security Settings Enforcement to enforce a common security realm on all client masters then there are different authentication mapping options that allow converting different User IDs between client masters.

    In the case of this tutorial system, all the client masters are running as the same user on the same machine, so there is no point in restricting permissions.

  5. In Access Control for Builds select the Add button and add the Ad-hoc cluster operations authenticator.

    Note

    The Ad-hoc cluster operations authenticator captures the user who is performing an ad-hoc cluster operation and then allows the operation to run as that user. If you do not have this authenticator installed then ad-hoc cluster operations - that is, ones that are not defined as an item/job - will be performed as the ANONYMOUS identity and most likely will not actually do anything because that identity normally does not have any permissions in typical real-world deployments.

47a
Figure 188. Security configuration summary (1/2)
47b
Figure 189. Security configuration summary (2/2)

Save the configuration.

Note

Before proceeding any further: since we are using a firewall, we have to tell this Jenkins Operations Center instance to use a fixed port when establishing TCP connections, and we need to open that port in the firewall.

$ firewall-cmd --permanent --add-port=49188/tcp
success
$ firewall-cmd --reload
success

Now, in Manage Jenkins » Configure Global Security, specify a fixed TCP port for JNLP agents and set it to 49188.
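After saving and restarting, you can check from a terminal that CJOC is listening on the fixed agent port (ss ships with CentOS 7's iproute package):

```shell
# Look for a listener on the fixed JNLP agent port; prints a hint if the
# port is not (yet) open. Errors from a missing ss binary are suppressed.
ss -tln 2>/dev/null | grep -q 49188 \
    && echo "port 49188 is listening" \
    || echo "port 49188 not listening yet"
```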

Configuring a shared agent

We will now create a shared agent (formerly called shared slave) to be used by the two Jenkins instances that we will be creating later.

On the master directly, edit the /etc/ssh/sshd_config file and change the line #PermitRootLogin yes to PermitRootLogin yes.

55a
Figure 190. Enabling root login over ssh

Restart the sshd service and ensure that it is enabled with the following commands:

$ systemctl restart sshd
$ systemctl enable sshd
55b
Figure 191. Restarting the sshd service (and ensuring that it is enabled)

In a new Terminal window, run the following commands

$ mkdir ~/slave-1
$ ssh-keygen
$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
55e
Figure 192. Preparing the shared agent
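Before configuring the agent in Jenkins, it is worth confirming that key-based root login actually works. StrictHostKeyChecking=no merely avoids the interactive host-key prompt and is a tutorial-only convenience; BatchMode=yes makes ssh fail rather than prompt for a password:

```shell
# Try a no-op command over SSH as root using the key we just authorized.
if ssh -o StrictHostKeyChecking=no -o BatchMode=yes root@127.0.0.1 true 2>/dev/null; then
    echo "key-based root login OK"
else
    echo "login failed: check sshd_config and ~/.ssh/authorized_keys"
fi
```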

Now in Jenkins select New Item. Enter slave-1 as the name, select Shared Agent as the type, and then click OK

Enter the following values:

Shared agent configuration

CJP has a Non-blocking I/O variant of the SSH Agents launcher, so we will use it here as it has a faster connection recycling time than the standard SSH launcher.

Table 15. Shared agent configuration
Parameter name Value

# of executors

1

Remote FS root

./slave-1

Launch method

Launch agents on Unix machines via SSH (Non-blocking I/O)

Host

127.0.0.1

Click the Add button next to the empty Credentials drop down

57
Figure 193. Adding credentials

Change the Kind to SSH Username with private key

Enter the following values:

Username : root

Private Key : Select From the Jenkins master ~/.ssh

Note
We need the Scope to be Global so that the credential will be made available to client masters. A scope of System would restrict the credential to CJOC only.
58
Figure 194. Adding SSH credentials

If you specified a passphrase when you created the key you will need to provide it in the options exposed by the Advanced button.

Click Add.

The modal dialog should be removed and the newly added credentials should be selected in the Credentials drop down.

59a
Figure 195. Credentials defined

Click the Save button.

The agent should be listed as Available for lease

60
Figure 196. Configured shared agent

Configuring a shared cloud

We will now create a shared cloud of JNLP agents that can also be used by the Jenkins instances that we will be creating later.

In Jenkins select New Item. Enter cloud-1 as the name, and select Shared Cloud as the type and then click OK

sc01
Figure 197. New shared cloud

Enter the following values:

Table 16. Shared cloud configuration
Parameter name Value

# of executors

1

Remote FS root

./jnlp

sc02
Figure 198. Configuration before saving

Click the Save button.

The cloud should be listed as Available for lease with 0 connected agents.

sc03
Figure 199. Configured shared cloud

At this point we will give it some JNLP agents to lease out.

Note
We could use JNLP to launch the agents from the browser, but that would require either disabling the firewall first or configuring each Jenkins instance to use a fixed JNLP port and ensuring that the firewall has those ports open. The latter is the recommended approach for production systems. For this tutorial, we will cheat and just connect the JNLP agents from localhost, which side-steps the entire issue.

On the shared cloud screen, the Run from agent command line: bullet point has a link to the agent JAR file, named slave.jar. The link will depend on the URL of CJOC, but it should be something like http://ip-address/operations-center/jnlpJars/slave.jar. We will be downloading the JAR file into the virtual machine.

On the master directly, in a new Terminal window, run the following commands (replacing http://ip-address/operations-center/jnlpJars/slave.jar with the URL from your CJOC)

$ mkdir ~/jnlp-1
$ cd ~/jnlp-1
$ curl -L -O http://*ip-address*/operations-center/jnlpJars/slave.jar
sc04
Figure 200. slave.jar downloaded

Next, we need to start the first JNLP agent, so we need to copy the launch command from the shared cloud screen. Enter that command and run it. It should look something like

$ java -jar slave.jar -jnlpUrl http://*ip-address*/operations-center/jnlpSharedSlaves/cloud-1/slave-agent.jnlp -secret *hex-encoded-secret*
sc06
Figure 201. A JNLP agent started and successfully connected

Now in Jenkins, if you reload the shared cloud screen, you should see that there is 1 connected agent

sc07
Figure 202. Configured shared cloud with 1 connected agent

We repeat this process to add a second agent to the cloud.

On the master directly, in a new Terminal window, run the following commands (replacing http://ip-address/operations-center/jnlpJars/slave.jar with the URL from your CloudBees Jenkins Operations Center)

$ mkdir ~/jnlp-2
$ cd ~/jnlp-2
$ curl -L -O http://*ip-address*/operations-center/jnlpJars/slave.jar

Finally, we need to start the second JNLP agent, so again we need to copy the launch command from the shared cloud screen. Enter that command and run it. It should look something like

$ java -jar slave.jar -jnlpUrl http://*ip-address*/operations-center/jnlpSharedSlaves/cloud-1/slave-agent.jnlp -secret *hex-encoded-secret*
sc09
Figure 203. A second JNLP agent started and successfully connected

Now in Jenkins, if you reload the shared cloud screen, you should see that there are 2 connected agents

sc10
Figure 204. Configured shared cloud with 2 connected agents

Configuring a CloudBees Jenkins Enterprise Client Master with version 2.7.19 or higher

The aim of this section is to configure a CJE 2.7.19 (i.e. 2.7.x series) client master as part of the CJOC Cluster.

On the master directly, open a new Terminal window, run the following commands

$ mkdir ~/jenkins-a
$ cd ~/jenkins-a
$ curl -L -O https://downloads.cloudbees.com/cje/rolling/war/2.7.19.1/jenkins.war
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                               Dload  Upload   Total   Spent    Left  Speed
100  203M  100  203M    0     0  5201k      0  0:00:39  0:00:39 --:--:-- 11.6M
$ JENKINS_HOME=/root/jenkins-a java  -Dhudson.slaves.NodeProvisioner.MARGIN=50 \
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -jar jenkins.war --httpPort=8082 --prefix=/jenkins-a
Note

There are two properties being specified on the command line used to launch Jenkins:

  • hudson.slaves.NodeProvisioner.MARGIN

  • hudson.slaves.NodeProvisioner.MARGIN0

These properties control how eagerly Jenkins provisions agents from a cloud provider. The values used in this tutorial have been chosen to bias Jenkins towards eager provisioning, as opposed to the default prudent values that Jenkins normally runs with.

We are using eager values in order to make the Tutorial faster to follow and also to illustrate some less common scenarios by way of explanation, e.g. see An idle node provisioned to the master.

For a production system, it is quite likely that you will need to tune the values of these properties to get provisioning responsiveness in line with your own expectations and requirements. Such tuning is typically a trade-off between risking eager over-provisioning and risking excessive build delays.
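For comparison, a more prudent launch simply omits the overrides and falls back to the stock provisioning behaviour. A minimal sketch follows; the default values quoted in the comments (MARGIN=10 as a percentage, MARGIN0=0.5) are an assumption based on Jenkins OSS defaults, not something this tutorial depends on:

```shell
# Eager provisioning, as used throughout this tutorial:
JENKINS_HOME=/root/jenkins-a java \
  -Dhudson.slaves.NodeProvisioner.MARGIN=50 \
  -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 \
  -jar jenkins.war --httpPort=8082 --prefix=/jenkins-a

# Prudent (default) provisioning: omit the overrides entirely.
# The stock defaults are believed to be MARGIN=10 (a percentage) and
# MARGIN0=0.5, which delay provisioning until demand is sustained.
JENKINS_HOME=/root/jenkins-a java -jar jenkins.war --httpPort=8082 --prefix=/jenkins-a
```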

In CloudBees Jenkins Operations Center select New Item. Enter jenkins-a as the name, select Client Master as the type, and then click OK.

Figure 205. Creating a client master
Figure 206. Default client master configuration

Select Save

Note

We could copy the connection details and configure them on the client master manually; however, as CloudBees Jenkins Enterprise has supported configuration push since 15.05, we will use the simpler push mechanism.

The Connection Details mechanism is primarily intended for use cases where either the browser mediated configuration push cannot be used due to networking issues or where a client master needs its connection authentication reset.

Figure 207. Ready to push configuration

In the Client master URL box enter http://ip-address/jenkins-a/.

Tip
If you TAB out of the box, or otherwise leave the text box, the form validation will be triggered. In our case, you will see the warning: The Specified URL looks like a Jenkins URL. Authentication is enabled on the Jenkins instance so unable to validate whether the instance supports configuration push from operations center.

Click the Push configuration button. You should be redirected to the soon-to-be-client master and presented with a confirmation screen.

Note

Because we just started a clean Client Master, when we click Join we will have to go through the same Setup Wizard that we completed on CJOC.

So, from the logs of the last Terminal where you started CJE, copy the hash as before and paste it into the next window.

Then, when asked to confirm, click Join again.

Figure 208. Confirmation for joining a cluster

Select Join Operations Center; behind the scenes, the following actions will take place:

  • The connection details will be configured and the operations center connector enabled.

  • The operations center connector will establish a connection to CloudBees Jenkins Operations Center.

  • CJOC will provision a sub-license for the client master.

  • CJOC will push the contextual information to the client master.

  • The security realm and authorization strategy details will be pushed to the client master.

Note
  • Single sign-on is active and the instance is secured; because your session is already logged in to CloudBees Jenkins Operations Center, you are logged in to CJE as the same user. (If this is not the case, reload the page. Depending on the performance of the virtual machine, the security realm migration from no realm to secured on the initial join to the cluster can take place a few seconds after the page load.)

  • The breadcrumbs show the context of the CJE master within the CJOC cluster; navigation between the cluster server and the client master should feel similar to navigation within Folders, despite changing the actual server that the user is browsing.

  • The Role Based Access Control permissions have been propagated from CJOC

  • The master is now configured without any executors.

Once you have joined the jenkins-a Client Master, you will be presented with the following screen in CJOC:

Figure 209. CJOC after jenkins-a client master addition
  1. On that screen, click on jenkins-a to see the Setup Wizard.

  2. Click on Install Suggested Plugins again.

  3. When the suggested plugins have been installed, finally click on the Start using CloudBees Jenkins Enterprise button to access the jenkins-a main screen.

Note

Before proceeding any further: because we are using a firewall, we have to tell this Jenkins instance to use a specific port for inbound TCP (JNLP) connections, and we need to open that port in the firewall. Note that this port has to be different from the one used by Jenkins Operations Center.

$ firewall-cmd --permanent --add-port=49189/tcp
success
$ firewall-cmd --reload
success

Now, in Manage Jenkins » Configure Global Security, you can specify a Fixed TCP port for JNLP agents; set it to 49189.
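As a quick sanity check, you can list the firewall rules you just added. The second half of the sketch below is an assumed alternative to the UI step: pinning the JNLP agent port at startup via the jenkins.model.Jenkins.slaveAgentPort system property (treat its availability on your Jenkins version as an assumption):

```shell
# Verify the firewall rule took effect (should list 49189/tcp):
firewall-cmd --permanent --list-ports

# Assumed alternative to the Configure Global Security UI step:
# pin the fixed JNLP agent port with a system property at launch.
JENKINS_HOME=/root/jenkins-a java \
  -Djenkins.model.Jenkins.slaveAgentPort=49189 \
  -Dhudson.slaves.NodeProvisioner.MARGIN=50 \
  -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 \
  -jar jenkins.war --httpPort=8082 --prefix=/jenkins-a
```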

Create a new free-style job that has a shell build step with the following commands:

sleep 30
date

Start a build of the job. The shared agent (formerly called a shared slave) will be leased from CloudBees Jenkins Operations Center and returned when the job completes.

Figure 210. Creating a test job
Figure 211. The shared agent being provisioned
Note

While running this, you may sometimes see over-provisioning of nodes. See An idle node provisioned to the master for a more detailed explanation.

CJE Backup Node

We are now going to start the backup node for this CJE master.

Start a fourth Terminal and run the following commands:

$ cd ~/jenkins-a
$ JENKINS_HOME=/root/jenkins-a java  -Dhudson.slaves.NodeProvisioner.MARGIN=50 \
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -jar jenkins.war --httpPort=8083 --prefix=/jenkins-a
Figure 212. The backup node started

At this point you can open http://ip-address/jenkins-a/ha/ to view the HA status of the first Jenkins instance and, if you want, verify that HA failover works with the client Jenkins just as you did for the Operations Center Server nodes.

Figure 213. The Jenkins HA status

A more interesting situation is to trigger a failover while a build is in progress. The point of triggering this type of failover is to see that the agent leases are recovered even when the client master fails over. [2] For this scenario, you will need four windows open:

  • http://ip-address/jenkins-a/ha/ — in this window, select the Instead of exiting the primary Jenkins JVM, restart the Jenkins JVM check box.

  • http://ip-address/jenkins-a/

  • http://ip-address/operations-center/job/slave-1/

  • http://ip-address/operations-center/job/cloud-1/

Note
Timing this can be tricky, so you may want to consider increasing the job duration from 30 seconds to 180 seconds by changing the sleep 30 in the build step to sleep 180.
Figure 214. Four browser windows ready to trigger a Jenkins failover mid build

Start the job building, wait for it to start and then initiate the fail-over.

Figure 215. Job started building immediately prior to fail-over
Figure 216. Fail-over initiated
Note
The client masters can take up to 15 seconds to persist the local cache state of the agents that have been leased to them. If you initiate the failover during the first 15 seconds of an agent lease, a secondary safety recovery process will recover the agent from its leased status. Once the details have been persisted, a faster recovery process can be used. The slower recovery process is there to allow for network partitions during a failover.
Figure 217. Fail-over completed
Figure 218. Agent recovery in progress
Figure 219. Agent recovery completed

Configuring a CloudBees Jenkins Enterprise 16.06 Client Master

The aim of this section is to configure a CloudBees Jenkins Enterprise 16.06 (i.e. 1.651 series) client master as part of the CJOC cluster.

On the master directly, open a new Terminal window, run the following commands:

$ mkdir ~/jenkins-b
$ cd ~/jenkins-b
$ curl -L -O https://downloads.cloudbees.com/cje/1.651/war/1.651.3.1/jenkins.war
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                               Dload  Upload   Total   Spent    Left  Speed
100  167M  100  167M    0     0  1956k      0  0:01:27  0:01:27 --:--:-- 2724k
$ JENKINS_HOME=/root/jenkins-b java  -Dhudson.slaves.NodeProvisioner.MARGIN=50 \
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -jar jenkins.war --httpPort=8084 --prefix=/jenkins-b
Figure 220. Starting a CJE master
  1. In CloudBees Jenkins Operations Center select New Item.

  2. Enter jenkins-b as the name, and select Client Master as the type and then click OK. We are going to do what we did previously for the jenkins-a Client Master.

Select On-master executors » Override and enter 1 as the # of executors.

Note
We could copy the connection details and configure them on the client master, however because all versions of CloudBees Jenkins Enterprise after 1.532.2 support configuration push, we will use the simpler push mechanism.

Select Save

In the Client master URL box enter http://ip-address/jenkins-b/ (if you TAB out of the box, or otherwise leave the text box, the form validation will be triggered and you should see Jenkins instance supports configuration push from operations center underneath the Push Configuration button)

Figure 221. Ready to push configuration

Click the Push configuration button. You should be redirected to the soon-to-be-client master and presented with a confirmation screen.

Note
If the master had security enabled, then display of the confirmation screen would be after an authentication check against that master’s security realm and authorization strategy. Only users with the Jenkins/ADMINISTER permission will be able to complete the push operation.
Figure 222. Confirmation for joining a cluster

Select Yes; behind the scenes, the following actions will take place:

  • The connection details will be configured and the operations center connector enabled.

  • The operations center connector will establish a connection to CloudBees Jenkins Operations Center.

  • CJOC will provision a sub-license for the client master.

  • CJOC will push the contextual information to the client master.

  • The security realm and authorization strategy details will be pushed to the client master.

Figure 223. Joined to the cluster

Some of the points to notice:

  • Single sign-on is active and the instance is secured; because your session is already logged in to CJOC, you are logged in to CJE as the same user

  • The breadcrumbs show the context of the CJE master within the CJOC cluster; navigation between the cluster server and the client master should feel similar to navigation within Folders, despite changing the actual server that the user is browsing

  • The Role Based Access Control permissions have been propagated from CJOC

  • The master is now configured with 1 dedicated executor.

  • If you inspect the license (Manage Jenkins » Manage License and click the Check Validity button at the bottom) it will show as an Operations Center managed CloudBees Jenkins Enterprise license with 1 dedicated executor

Note

Before proceeding any further: because we are using a firewall, we have to tell this Jenkins instance to use a specific port for inbound TCP (JNLP) connections, and we need to open that port in the firewall. Note that this port has to be different from the ones used by Jenkins Operations Center and by jenkins-a.

$ firewall-cmd --permanent --add-port=49190/tcp
success
$ firewall-cmd --reload
success

Now, in Manage Jenkins » Configure Global Security, you can specify a Fixed TCP port for JNLP agents; set it to 49190.

Create three new free-style jobs, each with a shell build step containing the following commands:

sleep 180
date

Start a build of all three jobs. The first build will be started on the dedicated executor and the second and third should trigger a lease of the shared agents from CJOC which will be returned when the job completes.

Note
By default Jenkins will not provision an agent within 100 seconds of Jenkins starting, and the provisioning logic only runs once every 10 seconds. The 180 second build duration of the three jobs should be sufficient that all three jobs will be running at the same time, one on the master’s dedicated executor and the other two on agents leased from CJOC, although it is possible that you might not get them all to work on the first try. The aim of this exercise is to illustrate that nodes get provisioned to client masters on demand.
Figure 224. Three test jobs created
Figure 225. The three test jobs in progress
Note

There are five system properties that control the sensitivity of provisioning:

  • hudson.slaves.NodeProvisioner.MARGIN0

  • hudson.slaves.NodeProvisioner.MARGIN - note that this is a percentage unlike MARGIN0

  • hudson.slaves.NodeProvisioner.MARGIN_DECAY

  • hudson.slaves.NodeProvisioner.initialDelay (milliseconds) - how long after Jenkins starts before provisioning can take place.

  • hudson.slaves.NodeProvisioner.recurrencePeriod (milliseconds) - how often to check if there is a requirement for provisioning.

Tuning these properties involves a trade-off between rapidly provisioning nodes in cases where they were not required and delaying too long before provisioning nodes that are actually required. For example, if we change how we launch this Jenkins instance and add -Dhudson.slaves.NodeProvisioner.initialDelay=10000 -Dhudson.slaves.NodeProvisioner.recurrencePeriod=1000 without tuning down the MARGIN properties, we can end up with idle nodes getting provisioned to the client master.
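A sketch of the full launch command for the jenkins-b instance with those eager timing overrides applied might look like the following; the property names come from the list above, and the specific millisecond values are the illustrative ones just mentioned:

```shell
# Launch jenkins-b with faster provisioning checks (illustrative sketch):
#   initialDelay=10000    -> allow provisioning only 10s after startup
#   recurrencePeriod=1000 -> re-evaluate demand every 1s instead of every 10s
JENKINS_HOME=/root/jenkins-b java \
  -Dhudson.slaves.NodeProvisioner.MARGIN=50 \
  -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 \
  -Dhudson.slaves.NodeProvisioner.initialDelay=10000 \
  -Dhudson.slaves.NodeProvisioner.recurrencePeriod=1000 \
  -jar jenkins.war --httpPort=8084 --prefix=/jenkins-b
```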

Figure 226. An idle node provisioned to the master

The idle node(s) will be detected and returned to the Operations Center, but they represent a drain on available capacity while they are on lease to a client master.

Now start the backup node for this CJE master: start a sixth Terminal and run the following commands:

$ cd ~/jenkins-b
$ JENKINS_HOME=/root/jenkins-b java -Dhudson.slaves.NodeProvisioner.MARGIN=50 \
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -jar jenkins.war --httpPort=8085 --prefix=/jenkins-b
Figure 227. Starting the backup node

At this point you can open http://ip-address/jenkins-b/ha/ to view the HA status of the second Jenkins instance.

And if you want, you can verify that HA failover works with this client Jenkins version just as you did previously for the Operations Center Server nodes and for the 2.7 client Jenkins nodes.

The builds were performed using the shipping 1.7 versions of the operations center client plugins. To complete the configuration of a CJE 16.x client master we should upgrade the plugins.

Note
The versions of the plugins that are available on your system may be newer than those displayed in the screenshots captured here. Newer versions should be backwards compatible, so this should not be an issue.
  • Go to Jenkins » jenkins-b » Manage Jenkins » Manage Plugins and on the Updates tab select all of the Operations Center plugins.

  • Click on the Download now and install after restart button.

Note
As we are upgrading plugins that are already installed, it is necessary to restart.

If you want, after Jenkins restarts, you can repeat the three jobs building concurrently exercise to confirm that the upgrade of plugins did not affect the behaviour.

Securing the cluster

On the CJOC Server select Groups; there should be three groups from when security was enabled:

  • Administrators with the member admin

  • Developers with no members

  • Browsers with no members

Figure 228. The initial groups

Select Browsers group.

Figure 229. The Browsers group

Select the Configure screen and remove the propagation from the browse role.

Figure 230. Configuring the Browsers group

Select Save. Add authenticated as a member of the group

Figure 231. Users group adding authenticated
Figure 232. Users group membership

On the main Operations Center Server screen, there is a down arrow for the context menu that appears on the right side of the currently selected item name in the main table. Either click this arrow for the jenkins-b client master and select Groups, or navigate to jenkins-b » Operations Center » Manage » Groups.

Figure 233. The context menu for jenkins-b
Figure 234. The jenkins-b groups

Select the add some? link and create a group called Master Users

Figure 235. Adding a new group

Give the group the browse and develop roles.

Figure 236. Master Users group configuration

Add authenticated as a member

Figure 237. Master Users group adding authenticated
Figure 238. Master Users group membership

Logout from CloudBees Jenkins Operations Center, e.g. http://ip-address/operations-center/logout or the log out link in the top right hand corner of the screen.

In the Configure Global Security screen, make sure you select the "Allow users to sign up" checkbox and Save.

Then log out, and log in again by creating a new Test user using the sign up link on the top right corner.

You should now see just the jenkins-b master as that is the only master where you have permissions.

Figure 239. The operations center view as a restricted user

Attempting to access http://ip-address/jenkins-a/ will result in an Access denied error.

Figure 240. Access denied access to jenkins-a

The client master that we do have permission to access can be browsed successfully, and our permissions are inherited from those assigned by the operations center server

Using Cluster Operations

Logout from CloudBees Jenkins Operations Center, e.g. http://ip-address/operations-center/logout or the log out link in the top right hand corner of the screen.

Login as the admin.

In CloudBees Jenkins Operations Center select New Item. Enter restart-busy as the name, select Cluster Operations as the type, and then click OK.

Figure 241. Creating a cluster operation

Click the Add Operations button and select Masters, as the operation we want to perform will apply to client masters.

We now need to specify which masters the operation(s) will apply to. In our case we want to restart a master that has been specified by a job parameter, so select the With a specified Jenkins URL radio button providing the expression ${SOURCE_JENKINS_URL}.

Figure 242. A Masters operation with Source configured to apply to a specific client master with the URL provided by a job parameter.

We need to declare the job parameter that we are referencing, so check the This build is parameterized check box and add a String Parameter with name SOURCE_JENKINS_URL.

Figure 243. A String parameter called SOURCE_JENKINS_URL.

We also want to restrict the operation to masters that are actually on-line, so add an Is Online filter

Figure 244. Adding a filter to restrict the operation to online client masters only.

Click Add Step and select the Safe Restart step.

Figure 245. Adding a Safe Restart step

Click Save.

Figure 246. The configured cluster operation
Note
At this point the operation has inherited the permissions from its parent (i.e. the root in this case). Those permissions give anyone with the develop role the ability to trigger a build. We do not want ordinary developers to be able to restart all the client masters. Additionally, we may not want to expose the operation to users with the browse role.

Select the Roles » Filter action for the cluster operation

Select the Require explicit assignment check boxes for the browse and develop roles.

Figure 247. The Jenkins » restart-busy » Roles » Filter screen before filtering roles.

Select Apply

Note
At this point, we could trigger the cluster operation and manually provide the URL of the client master to restart. A more interesting example is to configure alerts to be pushed to all client masters that will trigger a restart for us.
Warning
In the following example we pick an easy to trigger metric (HTTP requests) with a very low threshold (0.05 requests per second). In a production system you would be very unlikely to select this metric to trigger a safe restart.

In CloudBees Jenkins Operations Center select New Item. Enter alert-busy as the name, select Miscellaneous configuration container as the type, and then click OK.

Figure 248. Creating a Miscellaneous configuration container

Add an Alerts snippet, then add a Local metric timer within range condition.

In that component:

  • Select http.requests as the timer.

  • Select the 1 minute average rate as the value.

  • Select 0.05 as the Alert if above boundary.

  • Set the Alert title to Busy

  • Add a Trigger a build of a remote project recipient, and select the restart-busy cluster operation we just created as a recipient

Note
The defaults for the remote trigger operation are to pass the URL of the triggering Jenkins as a job parameter called SOURCE_JENKINS_URL. The defaults can be modified by clicking the Advanced button.
Figure 249. The configured miscellaneous configuration container

Click Save.

Now navigate to any of the connected client masters. Simply accessing the web page should generate enough HTTP requests to pull the 1 minute average rate above 0.05 requests per second.
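If simply browsing does not tip the rate over the boundary quickly enough, a small request loop can do it deliberately. This is a sketch: ip-address is the same placeholder used elsewhere in this tutorial, and 30 requests is an arbitrary figure chosen to comfortably exceed 0.05 requests per second averaged over a minute:

```shell
# Generate a burst of HTTP requests against a client master so the
# 1 minute average rate of the http.requests timer exceeds 0.05/s:
for i in $(seq 1 30); do
  curl -s -o /dev/null http://ip-address/jenkins-a/
done
```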

Figure 250. A client master with the alert in a non-idle state
Figure 251. The alerts management screen of a client master where the alert condition has been passed but not held for long enough to trigger
Figure 252. The alerts management screen of a client master where the alert condition has triggered

If you are quick you can then navigate to Operations Center and see the cluster operation in the queue:

Figure 253. Hovering over the queue item reveals the provided parameter of the triggering client master
Figure 254. The client master restarting as a result of the cluster operation
Note
The jenkins-b master does not have the CloudBees Monitoring plugin - which provides the alerting - due to the minimal set of plugins that we installed. If you want to test alerting based triggers on the jenkins-b master you will need to install the CloudBees Monitoring plugin first.
Tip
You probably want to change or remove the alert after confirming that it works; otherwise your client masters will keep on getting restarted.

Additional exercises

There are some additional exercises that can be performed with this test system:

  • Start a build on one of the client masters and kill both of the operations center server instances. The build should be uninterrupted and while the operations center server is off-line builds should continue using the agent on lease. Once operations center server is restarted, the agent should be returned and the normal one build per lease behaviour restored.

  • Stop both operations center server instances and verify that you can still log in on the client masters, although Single Sign-On is disabled. Once you restart either or both of the operations center server instances, your session established outside of Single Sign-On will be invalidated and you will be re-authenticated via the Single Sign-On mechanism.

  • Start builds on all client masters and verify that the shared agent and the agents in the shared cloud are all leased out and returned

  • Start a build on one of the client masters and failover the operations center server. The build should be uninterrupted and the agent should be returned once the connection to the operations center server is restored.

Appendix - Plugins Included in CJOC 2.x

Introduction

Each CJOC 2.x release includes a set of plugins. These plugins are classified as required and optional. This classification affects the behaviour in the different install and upgrade scenarios. As an example, for a fresh installation:

  • Required plugins will always be installed.

  • Optional plugins will only be installed if selected in the install wizard. The rest of the plugins will still be available through the Plugin Manager.

Rolling Train

CJOC 2.32.2.1

CJOC 2.32.2.1 includes Jenkins Core 2.32.2

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.32.0.1

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 9.3

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.10

  • display-url-api 0.5

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.18

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.3

  • node-iterator-api 1.5

  • operations-center-agent 2.32.0.1

  • operations-center-context 2.32.0.7

  • operations-center-license 2.32.0.1

  • operations-center-server 2.32.0.1

  • structs 1.5

  • support-core 2.33

  • token-macro 2.0

  • variant 1.1

The optional plugins included in the release are:

  • active-directory 1.48

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-slave-plugin 0.3.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.7

  • cloudbees-update-center-plugin 4.23

  • credentials-binding 1.10

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.53

  • icon-shim 2.0.3

  • infradna-backup 3.32

  • javadoc 1.4

  • ldap 1.13

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.14

  • mesos 0.13.1

  • monitoring 1.62.0

  • nectar-rbac 5.12

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.32.0.1

  • operations-center-analytics-dashboards 2.32.0.1

  • operations-center-analytics-feeder 2.32.0.1

  • operations-center-analytics-reporter 2.32.0.1

  • operations-center-analytics-viewer 2.32.0.1

  • operations-center-analytics 2.32.0.1

  • operations-center-azure-cloud 2.32.0.1

  • operations-center-clusterops 2.32.0.1

  • operations-center-ec2-cloud 2.32.0.1

  • operations-center-elasticsearch-provider 2.32.0.1

  • operations-center-embedded-elasticsearch 2.32.0.1

  • operations-center-jnlp-controller 2.32.0.1

  • operations-center-mesos-cloud 2.32.0.1

  • operations-center-monitoring 2.32.0.1

  • operations-center-rbac 2.32.0.1

  • operations-center-sso 2.32.0.1

  • operations-center-updatecenter 2.32.0.1

  • pam-auth 1.3

  • plain-credentials 1.3

  • script-security 1.24

  • secure-requester-whitelist 1.1

  • skip-plugin 4.0

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.7

CJOC 2.32.1.1

CJOC 2.32.1.1 includes Jenkins Core 2.32.1

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.32.0.1

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 9.3

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.10

  • display-url-api 0.5

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.18

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.3

  • node-iterator-api 1.5

  • operations-center-agent 2.32.0.1

  • operations-center-context 2.32.0.3

  • operations-center-license 2.32.0.1

  • operations-center-server 2.32.0.1

  • structs 1.5

  • support-core 2.33

  • token-macro 2.0

  • variant 1.1

The optional plugins included in the release are:

  • active-directory 1.48

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-slave-plugin 0.3.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.7

  • cloudbees-update-center-plugin 4.23

  • credentials-binding 1.10

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.53

  • icon-shim 2.0.3

  • infradna-backup 3.32

  • javadoc 1.4

  • ldap 1.13

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.14

  • mesos 0.13.1

  • monitoring 1.62.0

  • nectar-rbac 5.9

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.32.0.1

  • operations-center-analytics-dashboards 2.32.0.1

  • operations-center-analytics-feeder 2.32.0.1

  • operations-center-analytics-reporter 2.32.0.1

  • operations-center-analytics-viewer 2.32.0.1

  • operations-center-analytics 2.32.0.1

  • operations-center-azure-cloud 2.32.0.1

  • operations-center-clusterops 2.32.0.1

  • operations-center-ec2-cloud 2.32.0.1

  • operations-center-elasticsearch-provider 2.32.0.1

  • operations-center-embedded-elasticsearch 2.32.0.1

  • operations-center-jnlp-controller 2.32.0.1

  • operations-center-mesos-cloud 2.32.0.1

  • operations-center-monitoring 2.32.0.1

  • operations-center-rbac 2.32.0.1

  • operations-center-sso 2.32.0.1

  • operations-center-updatecenter 2.32.0.1

  • pam-auth 1.3

  • plain-credentials 1.3

  • script-security 1.24

  • secure-requester-whitelist 1.1

  • skip-plugin 4.0

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.7

CJOC 2.19.4.2

CJOC 2.19.4.2 includes Jenkins Core 2.19.4

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.7.3.4

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 9.2

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.6

  • display-url-api 0.5

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.18

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • node-iterator-api 1.5

  • operations-center-agent 2.19.0.3

  • operations-center-context 2.19.2.3

  • operations-center-license 2.19.0.2

  • operations-center-server 2.19.0.4

  • structs 1.3

  • support-core 2.33

  • token-macro 2.0

  • variant 1.0

The optional plugins included in the release are:

  • active-directory 1.48

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-slave-plugin 0.3.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.5

  • cloudbees-update-center-plugin 4.22

  • credentials-binding 1.8

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.51

  • icon-shim 2.0.3

  • infradna-backup 3.31

  • javadoc 1.4

  • ldap 1.13

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mesos 0.13.1

  • monitoring 1.62.0

  • nectar-rbac 5.9

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.19.0.2

  • operations-center-analytics-dashboards 2.19.0.2

  • operations-center-analytics-feeder 2.19.0.2

  • operations-center-analytics-reporter 2.19.0.2

  • operations-center-analytics-viewer 2.19.0.2

  • operations-center-analytics 2.19.0.2

  • operations-center-azure-cloud 2.19.0.2

  • operations-center-clusterops 2.19.0.2

  • operations-center-ec2-cloud 2.19.0.2

  • operations-center-elasticsearch-provider 2.19.0.2

  • operations-center-embedded-elasticsearch 2.19.0.2

  • operations-center-jnlp-controller 2.19.0.2

  • operations-center-mesos-cloud 2.19.0.2

  • operations-center-monitoring 2.19.0.2

  • operations-center-rbac 2.19.0.2

  • operations-center-sso 2.19.0.2

  • operations-center-updatecenter 2.19.0.2

  • pam-auth 1.3

  • plain-credentials 1.2

  • script-security 1.24

  • secure-requester-whitelist 1.0

  • skip-plugin 4.0

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.4

CJOC 2.19.3.1

CJOC 2.19.3.1 includes Jenkins Core 2.19.3

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • bouncycastle-api 2.16.0

  • cloudbees-assurance 2.7.3.2

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 9.2

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.6

  • display-url-api 0.5

  • jackson2-api 2.7.3

  • junit 1.19

  • mailer 1.18

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • node-iterator-api 1.5

  • operations-center-agent 2.19.0.2

  • operations-center-context 2.19.2.3

  • operations-center-license 2.19.0.2

  • operations-center-server 2.19.0.4

  • structs 1.3

  • support-core 2.33

  • token-macro 2.0

  • variant 1.0

The optional plugins included in the release are:

  • active-directory 1.47

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.11.37

  • azure-slave-plugin 0.3.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.5

  • cloudbees-update-center-plugin 4.22

  • credentials-binding 1.8

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.51

  • icon-shim 2.0.3

  • infradna-backup 3.31

  • javadoc 1.4

  • ldap 1.13

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mesos 0.13.1

  • monitoring 1.62.0

  • nectar-rbac 5.9

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.19.0.2

  • operations-center-analytics-dashboards 2.19.0.2

  • operations-center-analytics-feeder 2.19.0.2

  • operations-center-analytics-reporter 2.19.0.2

  • operations-center-analytics-viewer 2.19.0.2

  • operations-center-analytics 2.19.0.2

  • operations-center-azure-cloud 2.19.0.2

  • operations-center-clusterops 2.19.0.2

  • operations-center-ec2-cloud 2.19.0.2

  • operations-center-elasticsearch-provider 2.19.0.2

  • operations-center-embedded-elasticsearch 2.19.0.2

  • operations-center-jnlp-controller 2.19.0.2

  • operations-center-mesos-cloud 2.19.0.2

  • operations-center-monitoring 2.19.0.2

  • operations-center-rbac 2.19.0.2

  • operations-center-sso 2.19.0.2

  • operations-center-updatecenter 2.19.0.2

  • pam-auth 1.3

  • plain-credentials 1.2

  • script-security 1.24

  • secure-requester-whitelist 1.0

  • skip-plugin 4.0

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.4

CJOC 2.7.21.1

CJOC 2.7.21.1 includes Jenkins Core 2.7.21

The list of plugins is the same as in the previous version, 2.7.20.2.

CJOC 2.7.20.2

CJOC 2.7.20.2 includes Jenkins Core 2.7.20

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.2

  • cloudbees-folder 5.13

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 8.1

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.4

  • jackson2-api 2.7.3

  • mailer 1.17

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • node-iterator-api 1.5

  • operations-center-agent 2.7.0.0

  • operations-center-context 2.7.0.5

  • operations-center-license 2.7.0.0

  • operations-center-server 2.7.0.0

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • active-directory 1.47

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-slave-plugin 0.3.3

  • bouncycastle-api 1.648.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.5

  • cloudbees-update-center-plugin 4.20

  • credentials-binding 1.8

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.46

  • icon-shim 2.0.3

  • infradna-backup 3.30

  • javadoc 1.4

  • junit 1.18

  • ldap 1.12

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mesos 0.13.1

  • monitoring 1.59.0

  • nectar-rbac 5.9

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-dashboards 2.7.0.0

  • operations-center-analytics-feeder 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-analytics-viewer 2.7.0.0

  • operations-center-analytics 2.7.0.0

  • operations-center-azure-cloud 2.7.0.0

  • operations-center-clusterops 2.7.0.0

  • operations-center-ec2-cloud 2.7.0.0

  • operations-center-elasticsearch-provider 2.7.0.0

  • operations-center-embedded-elasticsearch 2.7.0.0

  • operations-center-jnlp-controller 2.7.0.0

  • operations-center-mesos-cloud 2.7.0.0

  • operations-center-monitoring 2.7.0.0

  • operations-center-rbac 2.7.0.0

  • operations-center-sso 2.7.0.0

  • operations-center-updatecenter 2.7.0.0

  • pam-auth 1.3

  • plain-credentials 1.2

  • script-security 1.23

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.4

CJOC 2.7.19.1

CJOC 2.7.19.1 includes Jenkins Core 2.7.19

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.1

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 8.1

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.4

  • jackson2-api 2.7.3

  • mailer 1.17

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.0

  • node-iterator-api 1.5

  • operations-center-agent 2.7.0.0

  • operations-center-context 2.7.0.0

  • operations-center-license 2.7.0.0

  • operations-center-server 2.7.0.0

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • active-directory 1.47

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-slave-plugin 0.3.3

  • bouncycastle-api 1.648.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.5

  • cloudbees-update-center-plugin 4.20

  • credentials-binding 1.8

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.46

  • icon-shim 2.0.3

  • infradna-backup 3.30

  • javadoc 1.4

  • junit 1.18

  • ldap 1.12

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mesos 0.13.1

  • monitoring 1.59.0

  • nectar-rbac 5.8

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-dashboards 2.7.0.0

  • operations-center-analytics-feeder 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-analytics-viewer 2.7.0.0

  • operations-center-analytics 2.7.0.0

  • operations-center-azure-cloud 2.7.0.0

  • operations-center-clusterops 2.7.0.0

  • operations-center-ec2-cloud 2.7.0.0

  • operations-center-elasticsearch-provider 2.7.0.0

  • operations-center-embedded-elasticsearch 2.7.0.0

  • operations-center-jnlp-controller 2.7.0.0

  • operations-center-mesos-cloud 2.7.0.0

  • operations-center-monitoring 2.7.0.0

  • operations-center-rbac 2.7.0.0

  • operations-center-sso 2.7.0.0

  • operations-center-updatecenter 2.7.0.0

  • pam-auth 1.3

  • plain-credentials 1.2

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.3

Fixed Train

CJOC 2.7.22.0.1

CJOC 2.7.22.0.1 includes Jenkins Core 2.7.22

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.4

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 8.1

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.4

  • jackson2-api 2.7.3

  • mailer 1.17

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • node-iterator-api 1.5

  • operations-center-agent 2.7.0.0.2

  • operations-center-context 2.7.0.0.2

  • operations-center-license 2.7.0.0

  • operations-center-server 2.7.0.0.2

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • active-directory 1.47

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-slave-plugin 0.3.3

  • bouncycastle-api 1.648.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.5

  • cloudbees-update-center-plugin 4.20

  • credentials-binding 1.8

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.46

  • icon-shim 2.0.3

  • infradna-backup 3.30

  • javadoc 1.4

  • junit 1.18

  • ldap 1.12

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mesos 0.13.1

  • monitoring 1.59.0

  • nectar-rbac 5.9.1

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-dashboards 2.7.0.0

  • operations-center-analytics-feeder 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-analytics-viewer 2.7.0.0

  • operations-center-analytics 2.7.0.0

  • operations-center-azure-cloud 2.7.0.0

  • operations-center-clusterops 2.7.0.0

  • operations-center-ec2-cloud 2.7.0.0

  • operations-center-elasticsearch-provider 2.7.0.0

  • operations-center-embedded-elasticsearch 2.7.0.0

  • operations-center-jnlp-controller 2.7.0.0

  • operations-center-mesos-cloud 2.7.0.0

  • operations-center-monitoring 2.7.0.0

  • operations-center-rbac 2.7.0.0

  • operations-center-sso 2.7.0.0

  • operations-center-updatecenter 2.7.0.0

  • pam-auth 1.3

  • plain-credentials 1.2

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.3

CJOC 2.7.21.0.2

CJOC 2.7.21.0.2 includes Jenkins Core 2.7.21

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.4

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 8.1

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.4

  • jackson2-api 2.7.3

  • mailer 1.17

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • node-iterator-api 1.5

  • operations-center-agent 2.7.0.0.2

  • operations-center-context 2.7.0.0

  • operations-center-license 2.7.0.0

  • operations-center-server 2.7.0.0.2

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • active-directory 1.47

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-slave-plugin 0.3.3

  • bouncycastle-api 1.648.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.5

  • cloudbees-update-center-plugin 4.20

  • credentials-binding 1.8

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.46

  • icon-shim 2.0.3

  • infradna-backup 3.30

  • javadoc 1.4

  • junit 1.18

  • ldap 1.12

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mesos 0.13.1

  • monitoring 1.59.0

  • nectar-rbac 5.9

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-dashboards 2.7.0.0

  • operations-center-analytics-feeder 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-analytics-viewer 2.7.0.0

  • operations-center-analytics 2.7.0.0

  • operations-center-azure-cloud 2.7.0.0

  • operations-center-clusterops 2.7.0.0

  • operations-center-ec2-cloud 2.7.0.0

  • operations-center-elasticsearch-provider 2.7.0.0

  • operations-center-embedded-elasticsearch 2.7.0.0

  • operations-center-jnlp-controller 2.7.0.0

  • operations-center-mesos-cloud 2.7.0.0

  • operations-center-monitoring 2.7.0.0

  • operations-center-rbac 2.7.0.0

  • operations-center-sso 2.7.0.0

  • operations-center-updatecenter 2.7.0.0

  • pam-auth 1.3

  • plain-credentials 1.2

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.3

CJOC 2.7.21.0.1

CJOC 2.7.21.0.1 includes Jenkins Core 2.7.21

The list of plugins is the same as in the previous version, 2.7.20.0.2.

CJOC 2.7.20.0.2

CJOC 2.7.20.0.2 includes Jenkins Core 2.7.20

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.2

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 8.1

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.4

  • jackson2-api 2.7.3

  • mailer 1.17

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.1

  • node-iterator-api 1.5

  • operations-center-agent 2.7.0.0

  • operations-center-context 2.7.0.0

  • operations-center-license 2.7.0.0

  • operations-center-server 2.7.0.0

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • active-directory 1.47

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-slave-plugin 0.3.3

  • bouncycastle-api 1.648.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.5

  • cloudbees-update-center-plugin 4.20

  • credentials-binding 1.8

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.46

  • icon-shim 2.0.3

  • infradna-backup 3.30

  • javadoc 1.4

  • junit 1.18

  • ldap 1.12

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mesos 0.13.1

  • monitoring 1.59.0

  • nectar-rbac 5.9

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-dashboards 2.7.0.0

  • operations-center-analytics-feeder 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-analytics-viewer 2.7.0.0

  • operations-center-analytics 2.7.0.0

  • operations-center-azure-cloud 2.7.0.0

  • operations-center-clusterops 2.7.0.0

  • operations-center-ec2-cloud 2.7.0.0

  • operations-center-elasticsearch-provider 2.7.0.0

  • operations-center-embedded-elasticsearch 2.7.0.0

  • operations-center-jnlp-controller 2.7.0.0

  • operations-center-mesos-cloud 2.7.0.0

  • operations-center-monitoring 2.7.0.0

  • operations-center-rbac 2.7.0.0

  • operations-center-sso 2.7.0.0

  • operations-center-updatecenter 2.7.0.0

  • pam-auth 1.3

  • plain-credentials 1.2

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.3

CJOC 2.7.19.0.1

CJOC 2.7.19.0.1 includes Jenkins Core 2.7.19

The required plugins included in the release are:

  • async-http-client 1.7.24.1

  • cloudbees-assurance 2.7.3.1

  • cloudbees-folder 5.12

  • cloudbees-folders-plus 3.0

  • cloudbees-ha 4.7

  • cloudbees-license 8.1

  • cloudbees-monitoring 2.5

  • cloudbees-support 3.7

  • credentials 2.1.4

  • jackson2-api 2.7.3

  • mailer 1.17

  • mapdb-api 1.0.9.0

  • metrics 3.1.2.9

  • nectar-license 8.0

  • node-iterator-api 1.5

  • operations-center-agent 2.7.0.0

  • operations-center-context 2.7.0.0

  • operations-center-license 2.7.0.0

  • operations-center-server 2.7.0.0

  • structs 1.3

  • support-core 2.32

  • token-macro 1.12.1

  • variant 1.0

The optional plugins included in the release are:

  • active-directory 1.47

  • antisamy-markup-formatter 1.5

  • authentication-tokens 1.3

  • aws-credentials 1.16

  • aws-java-sdk 1.10.50

  • azure-slave-plugin 0.3.3

  • bouncycastle-api 1.648.3

  • cloudbees-aborted-builds 1.9

  • cloudbees-aws-credentials 1.8.4

  • cloudbees-nodes-plus 1.14

  • cloudbees-plugin-usage 1.6

  • cloudbees-quiet-start 1.2

  • cloudbees-ssh-slaves 1.5

  • cloudbees-update-center-plugin 4.20

  • credentials-binding 1.8

  • durable-task 1.12

  • ec2 1.35

  • email-ext 2.46

  • icon-shim 2.0.3

  • infradna-backup 3.30

  • javadoc 1.4

  • junit 1.18

  • ldap 1.12

  • matrix-auth 1.4

  • matrix-project 1.7.1

  • maven-plugin 2.13

  • mesos 0.13.1

  • monitoring 1.59.0

  • nectar-rbac 5.8

  • nectar-vmware 4.3.5

  • operations-center-analytics-config 2.7.0.0

  • operations-center-analytics-dashboards 2.7.0.0

  • operations-center-analytics-feeder 2.7.0.0

  • operations-center-analytics-reporter 2.7.0.0

  • operations-center-analytics-viewer 2.7.0.0

  • operations-center-analytics 2.7.0.0

  • operations-center-azure-cloud 2.7.0.0

  • operations-center-clusterops 2.7.0.0

  • operations-center-ec2-cloud 2.7.0.0

  • operations-center-elasticsearch-provider 2.7.0.0

  • operations-center-embedded-elasticsearch 2.7.0.0

  • operations-center-jnlp-controller 2.7.0.0

  • operations-center-mesos-cloud 2.7.0.0

  • operations-center-monitoring 2.7.0.0

  • operations-center-rbac 2.7.0.0

  • operations-center-sso 2.7.0.0

  • operations-center-updatecenter 2.7.0.0

  • pam-auth 1.3

  • plain-credentials 1.2

  • script-security 1.21

  • secure-requester-whitelist 1.0

  • skip-plugin 3.8

  • ssh-agent 1.13

  • ssh-credentials 1.12

  • ssh-slaves 1.11

  • suppress-stack-trace 1.5

  • translation 1.15

  • unique-id 2.1.3

  • wikitext 3.7

  • workflow-step-api 2.3


1. This is not true for the JNLP connector, where a connection is required. However, because that connection is only used to control leasing, CJOC can keep more of these connections open than a regular Jenkins instance could while running builds.
2. Note that this will still fail the build. If you want builds to survive Jenkins crashes or planned restarts, consider either Pipeline jobs or the Long-Running Build Plugin (if you are unsure which to choose, we recommend Pipelines).
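As a rough illustration of the Pipeline recommendation above, here is a minimal sketch of a declarative Jenkinsfile. Pipeline builds record their progress durably on the master, so a build at this stage can resume after a planned restart; the stage and step names here are placeholders, not part of any CJOC release.

```groovy
// Minimal declarative Pipeline sketch (hypothetical job, for illustration only).
// Pipeline steps are checkpointed by the durable-task machinery, so a running
// build can survive a controlled Jenkins restart and resume where it left off.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // A long-running shell step; its output and exit status are
                // tracked durably on the agent while the master restarts.
                sh 'make all'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
}
```

This contrasts with a traditional freestyle job, whose in-flight builds are lost when the Jenkins process stops.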