Installing CloudBees Jenkins Platform

CloudBees Jenkins Platform is based on the Jenkins Long Term Support (LTS) releases, and a refresh build is released every time the Jenkins community performs an LTS release, which is typically every 3 months.

The downloadable artifacts are named 2.x.y.z, where 2.x.y is a Jenkins LTS release based on the 2.x weekly release and z is a CloudBees Jenkins Platform build number. This scheme lets an existing CloudBees Jenkins Platform installation upgrade its files in place without first uninstalling the CloudBees Jenkins Platform packages.

Hardware and software requirements

Refer to Supported Platforms for CloudBees Jenkins Platform for supported Java versions, Docker environments, supported operating systems, and supported NFS versions.

CloudBees Jenkins Platform complies with the Jenkins browser compatibility policy. For more information, please visit the Jenkins Browser Compatibility Matrix.

Network requirements

Both CloudBees Jenkins Platform and operations center run services that require network communication over several configurable ports. You should open ports according to the services you plan to use.

Defining inbound (listen) ports

Jenkins listens for connections on the ports listed below. Many of these ports are used for optional services and can be disabled or enabled according to your needs.

| Default Port | Example Port ¹ | Service | Configure | Description | Reference |
| --- | --- | --- | --- | --- | --- |
| 8080 | 80 | HTTP | --httpPort=$HTTP_PORT at command line (-1 to disable) | When leveraging the built-in Jetty servlet container, Jenkins defaults to listening on port 8080 for the Jenkins web application. | Starting and Accessing Jenkins from jenkins-ci.org |
| Disabled | 443 | HTTPS | --httpsPort=$HTTPS_PORT at command line (disabled by default; -1 to disable) | When leveraging the built-in servlet container, Jenkins can optionally respond over HTTPS. | Starting and Accessing Jenkins from jenkins-ci.org |
| 8009 | | AJP | --ajp13Port=$AJP_PORT at command line (-1 to disable) | When leveraging the built-in servlet container, Jenkins can optionally respond over AJP v1.3 as an alternative to HTTP for reverse proxies. | Starting and Accessing Jenkins from jenkins-ci.org |
| Random | 48014 | JNLP | Manage Jenkins → Configure Global Security → TCP port for JNLP agents | Jenkins exposes a port for agents to connect via the Java Network Launch Protocol (JNLP). It is also the primary port used by the Jenkins CLI. operations center uses this port for client controller connectivity. | Distributed Builds from jenkins-ci.org |
| Random | 2222 | SSH | Manage Jenkins → Configure System → SSH Server → SSHD Port | Jenkins runs an SSH server, exposing a subset of CLI commands and allowing plugins to add functionality over SSH. CloudBees Jenkins Platform optionally uses the SSH port for the Validated Merge plugin. | Jenkins SSH from jenkins-ci.org |
| 33848/udp | | UDP | -Dhudson.udp=$UDP_PORT at command line (-1 to disable) | Allows Jenkins to be auto-discovered using UDP multicast. | Auto-discovering Jenkins on the network from jenkins-ci.org |
| 5353 | | DNS | -Dhudson.DNSMultiCast.disabled=true to disable | Allows Jenkins to be auto-discovered using DNS multicast. | Auto-discovering Jenkins on the network from jenkins-ci.org |
| Random | | TCP | bind_port in jgroups.xml (JENKINS_HA=false to disable) | The CloudBees Jenkins Platform High Availability plugin leverages JGroups for clustering support. This port is used for the JGroups primary transport. | High Availability from cloudbees.com |
| 7500 | | TCP | diagnostics_port in jgroups.xml (JENKINS_HA=false to disable) | The CloudBees Jenkins Platform High Availability plugin leverages JGroups for clustering support. This port is used for JGroups diagnostics. | High Availability from cloudbees.com |
| 9200 | | HTTP | Manage Jenkins → Configure Analytics → Analytics → Elasticsearch Configuration → HTTP Port | The Elasticsearch http.port setting. This is an unauthenticated HTTP port, so be careful to ensure it is not exposed to untrusted access. | CloudBees Jenkins Analytics from cloudbees.com |
| 9300 | | TCP | | The Elasticsearch transport port (transport.tcp.port), used for Elasticsearch node-to-node communication. | CloudBees Jenkins Analytics from cloudbees.com |

Additional plugins or even your build jobs could run services that open additional ports. Further, running Jenkins in other Java web containers, Tomcat for example, could open other, container-specific ports.

1 - Ports below 1024 on Linux-based systems require Jenkins to run as root, which we do not recommend. The Example Port column shows how you might configure Jenkins to appear when fronted by a reverse proxy. See the Using high availability and adding reverse proxy to Jenkins section for more details.
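
For illustration, here is a minimal sketch of launching the WAR directly with several of these ports set explicitly. The WAR path and port values are placeholders; note that the -D system properties (which disable UDP and DNS multicast discovery, per the table above) must precede -jar:

# Illustrative only; adjust paths and ports to your environment.
java -Dhudson.udp=-1 -Dhudson.DNSMultiCast.disabled=true \
  -jar jenkins.war --httpPort=8080 --ajp13Port=-1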

Defining outbound ports

For some features, Jenkins requires outbound access to services on the ports laid out below. We describe the standard ports, but your network may run these services on different ports and may require additional configuration.

| Standard Port | Service | Configure | Description | Reference |
| --- | --- | --- | --- | --- |
| 25 | SMTP | Manage Jenkins → Configure System → E-mail Notification → SMTP Port | For sending emails on build failures or via other plugins' email functionality, Jenkins needs access to an SMTP server. | GMail from jenkins-ci.org |
| 389 (636) | LDAP (LDAPS) | Manage Jenkins → Configure Global Security → Access Control → Security Realm → LDAP → Server | If you plan to authenticate Jenkins users via an LDAP server, Jenkins will need access to the LDAP or LDAPS port. When accessing a Microsoft Active Directory server, plan for access to the Active Directory-specific ports, e.g., 3268 for the Global Catalog. | LDAP Plugin from jenkins-ci.org |
| 9200 | HTTPS | Manage Jenkins → Configure Analytics → Analytics → Elasticsearch Configuration → Elasticsearch URLs | operations center's CloudBees Analytics can optionally use an external Elasticsearch instance. If so, operations center will need access to this service on the HTTP port. | CloudBees Jenkins Analytics from cloudbees.com |

Of course, Jenkins will require access to additional outbound ports based on the requirements of your jobs and additional plugins you configure.
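
As a quick, non-authoritative way to verify outbound reachability from the Jenkins host, you can probe these ports with bash's built-in /dev/tcp pseudo-device. The hostnames here are hypothetical placeholders for your own servers:

# Substitute your actual SMTP/LDAP hosts; a timeout indicates the port is blocked.
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/smtp.example.com/25' && echo "SMTP reachable"
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/ldap.example.com/389' && echo "LDAP reachable"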

Setting proxy configuration

If your network uses a web proxy, you will need to configure Jenkins to enable access to services outside the network. This is important for access to external services, such as the CloudBees Jenkins Plugin Update Site or external source control systems, but is not required for Jenkins to run. You may also need to set some hosts on the network to bypass the proxy, for example your internally resolved binary artifact repository.

Some plugins rely on proxy settings in different locations, so it’s best to set each:

| Setting Location | Examples | Reference |
| --- | --- | --- |
| Manage Jenkins → Manage Plugins → Advanced → Proxy | Plugin Update Center (core); Git Client Plugin | Jenkins Behind Proxy from jenkins-ci.org |
| Java system properties ²: http.proxyHost, http.proxyPort, http.nonProxyHosts; https.proxyHost, https.proxyPort, https.nonProxyHosts | Twitter Plugin | Jenkins Behind Proxy from jenkins-ci.org |
| Plugin-specific proxy settings | Subversion SCM Plugin; Rally Plugin | Jenkins Behind Proxy from jenkins-ci.org |

2 - These properties should automatically default to the corresponding environment variables (http_proxy, no_proxy), but some Java distributions do not support this default.
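
For example, a sketch of setting the system properties in the Jenkins startup configuration (/etc/default/jenkins on Debian, /etc/sysconfig/jenkins on RPM-based systems). The proxy host, port, and bypass list are hypothetical placeholders:

# Append proxy settings to the JVM arguments; values are placeholders.
JAVA_ARGS="$JAVA_ARGS \
  -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 \
  -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128 \
  -Dhttp.nonProxyHosts=localhost|*.internal.example.com"

Remember to also set the proxy under Manage Jenkins → Manage Plugins → Advanced, since some plugins only honor that setting.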

Using high availability and adding reverse proxy to Jenkins

With CloudBees' High Availability plugin, the networking requirements above remain the same. With High Availability, you run two or more Jenkins instances in a cluster, so a reverse proxy is typically added so that both instances can respond at a single, pretty URL. Our High Availability documentation covers this configuration.

Installing CloudBees Jenkins Platform

The instructions in this section provide details on how to install and run a CloudBees Jenkins Platform client controller as a service running directly on a JVM. Alternatively, you can install a servlet container such as Tomcat or JBoss, which runs as a service by itself, and then deploy a client controller to it. If using Tomcat or JBoss (or any other servlet container), you can simply deploy the WAR file.

When using servlet containers, Jenkins sets JENKINS_HOME to the $APP_SERVER_USER/.jenkins/ folder. If the servlet container installation does not grant this user write permissions to this folder (sometimes done for security), you either need to grant appropriate permissions or override this setting by adding the -DJENKINS_HOME=$MY_JENKINSPATH argument to your servlet container startup (see the documentation for that container).
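
For example, with Tomcat this can be done in setenv.sh, which catalina.sh picks up automatically. This is a minimal sketch; the JENKINS_HOME path is illustrative:

# $CATALINA_HOME/bin/setenv.sh -- read by catalina.sh at startup
export CATALINA_OPTS="$CATALINA_OPTS -DJENKINS_HOME=/var/lib/jenkins"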

Installing JRE

You must install a JRE before installing a CloudBees Jenkins Platform client controller. The Oracle Java 8 JRE is recommended for running a client controller.

You do not need to install the JDK to run a client controller instance; a client controller can run on a JRE. If you are doing Java development, JDK tools will be needed, but only on agents, and we do not recommend running builds on the controller. The one reason to install the JDK on the controller would be to have access to jstack, jmap, and the like, but we would generally only recommend this in a support ticket if those tools actually needed to be run (normally the Support Core plugin generates adequate diagnostics).

Installing Oracle JRE

To install the JDK (if needed), follow the instructions below, replacing references to JRE downloads with their JDK equivalents.

  1. First, download the latest Java JDK. Navigate to the Oracle Java download page and download the version required for your distribution architecture.

    If you would like to install a different release of Oracle JRE, go to the Oracle Java 8 JRE Downloads Page, accept the license agreement, and copy the download link of the appropriate Linux .tar.gz package. Substitute the copied download link in place of the highlighted part of the wget command.
  2. Change to /opt, or wherever you would like to install the JDK, and download the Oracle Java 8 JDK .tar.gz archive with the following commands:

    cd /opt
    sudo wget --no-cookies --no-check-certificate --header \
      "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \
      "https://download.oracle.com/otn-pub/java/jdk/8u45-b14/server-jre-8u45-linux-x64.tar.gz"
    sudo tar xvf server-jre-8u45-linux-x64.tar.gz
    sudo chown -R root: jdk1.8.0_45
The JDK executable files (java, javac, jar, and so on) are now installed at /opt/jdk1.8.0_45/bin, which is not in your PATH variable, so the commands can only be used by referencing their full paths. To remedy this, either add this directory to your PATH variable or use the alternatives command to add symbolic links for the individual executables to the /usr/bin directory.
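
If you choose the PATH approach, a minimal sketch (assuming the /opt/jdk1.8.0_45 location from above) is:

# Make the JDK tools available to all login shells
echo 'export PATH=$PATH:/opt/jdk1.8.0_45/bin' | sudo tee /etc/profile.d/jdk.sh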

Using alternative commands to manage Java executables

Use the following alternatives commands to add symbolic links in the /usr/bin directory to the Java and JAR commands:

sudo update-alternatives --install /usr/bin/java java /opt/jdk1.8.0_45/bin/java 1
sudo update-alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_45/bin/jar 1
  • Use the alternatives command in a similar fashion to manage any of the other Java executable files.

  • You can delete the archive file that you downloaded earlier:

sudo rm /opt/server-jre-8u45-linux-x64.tar.gz

Installing on Linux

Installing on Ubuntu or Debian

You must install a JRE before installing a CloudBees Jenkins Platform client controller. Oracle 8 JRE is recommended for Linux operating systems. Follow the instructions described under Installing JRE.

On Debian-based distributions, such as Ubuntu, you can install a client controller through apt-get.

The key and repos refer to the latest version of the client controller. To install a specific version, refer to downloads.cloudbees.com/cje/.

Before installing a CloudBees Jenkins Platform client controller, you must add the client controller repository and the key.

  1. Add the following entry in your /etc/apt/sources.list:

    deb https://downloads.cloudbees.com/cje/rolling/debian binary/
  2. Then, add the following key to your system:

    wget -q -O - https://downloads.cloudbees.com/jenkins-enterprise/rolling/debian/cloudbees.com.key | sudo apt-key add -

    The binaries produced after January 2016 are signed with a different key from those produced prior to this date. If you imported the key prior to this date, the key will need to be re-imported.

  3. Update your local package index:

    sudo apt-get update
  4. Install the client controller:

    sudo apt-get install jenkins

The installation process starts the client controller automatically.

  • In general, to start/stop the client controller, simply execute these commands:

    sudo service jenkins start
    sudo service jenkins stop
  • The log file is located at /var/log/jenkins/jenkins.log

  • By default, the client controller starts on port 8080. To access the client controller, point your favorite browser to http://(hostname-or-ip):8080/

See the Jenkins Handbook for more information.

Installing on Red Hat, CentOS, or Fedora

On Fedora-based distributions, such as CentOS, you can install a CloudBees Jenkins Platform client controller through yum or dnf (starting with Fedora 23) [1]. You must install JRE before installing a client controller. Oracle 8 JRE is recommended for Linux operating systems. Follow the instructions described under Installing JRE.

The epel-release package is required to install CloudBees Jenkins Platform using an RPM; however, some CentOS distributions do not install the package by default. For those CentOS distributions, run the following command before you begin the installation procedure:

sudo yum install epel-release

Before installing a CloudBees Jenkins Platform client controller, add the client controller repository and the key:

  1. First, add the repository:

    The key and repository refer to the latest version of the client controller. To install a specific version, refer to downloads.cloudbees.com/cje/.
    For more information about CloudBees Jenkins Platform downloads, refer to the downloads page.

    sudo wget -O /etc/yum.repos.d/jenkins.repo https://downloads.cloudbees.com/cje/rolling/rpm/jenkins.repo
  2. Then, add the key:

    sudo rpm --import https://downloads.cloudbees.com/jenkins-enterprise/rolling/rpm/cloudbees.com.key
  3. Update yum cache before installing the client controller:

    Note: The binaries produced after January 2016 are signed with a different key from those produced prior to this date. If you imported the key prior to this date, the key will need to be re-imported.
    sudo yum update
  4. Install the client controller:

    sudo yum install jenkins

The installation process starts the client controller automatically.

  • In general, to start/stop the client controller, simply execute these commands:

    sudo service jenkins start
    sudo service jenkins stop
  • The log file is located at /var/log/jenkins/jenkins.log

  • By default, the client controller starts on port 8080. To access the client controller, point your favorite browser to http://(hostname-or-ip):8080/

See the user handbook for more information.

Installing on Windows

If you are running on Microsoft Windows, it is good practice to run the CloudBees Jenkins Platform client controller as a service so it starts up automatically without requiring a user to log in.

The easiest way is to run the Windows installer from the CloudBees Jenkins Platform download site, which installs the client controller as a service. This also has the advantage of being easier to automate.

For more information about CloudBees Jenkins Platform downloads, refer to the downloads page.

Note: The client controller installer requires a 64-bit operating system and a compatible .NET Framework (2.0, 3.5.1, or 4.0) in order to be installed as a service. Most Windows versions install a compatible framework by default; however, Windows 7 includes the .NET Framework 3.5.1 but does not install it by default, so it should be installed before running the client controller installer.

If you are running from WAR, to install the client controller as a Windows service, refer to these instructions.

Installing on Docker

You should have Docker properly installed on your machine. Check the Docker installation guide for details.

For how to start the CloudBees Jenkins Platform client controller in a Docker container, see the Jenkins Enterprise Docker Hub repo.

Apache Tomcat

If you are deploying a CloudBees Jenkins Platform client controller on Apache Tomcat, then you need to set the system property org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH to true so encoded slashes are allowed.
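
For example, with a standard Tomcat layout, a sketch of setting this property in setenv.sh (the file location may vary by installation):

# $CATALINA_HOME/bin/setenv.sh -- allow encoded slashes in request URLs
export CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true"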

Upgrading CloudBees Jenkins Platform

Upgrading CloudBees Jenkins Platform uses the same upgrade procedures as upgrading Jenkins. Detailed steps on upgrading Jenkins from a previous version can be found in the Jenkins wiki.

If you are already running Jenkins, and CloudBees Jenkins Platform is based on the same version (or a newer one), you can upgrade your installation by simply running the CloudBees Jenkins Platform WAR file with the same $JENKINS_HOME.
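
As an illustration, a minimal sketch of such an in-place upgrade when running the WAR directly. The WAR file name and JENKINS_HOME path are placeholders:

# Stop the old instance first, then start the new WAR against the same home.
java -DJENKINS_HOME=/var/lib/jenkins -jar cloudbees-jenkins-platform.war --httpPort=8080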

Any bundled plugins (those packaged in the CloudBees Jenkins Platform WAR file) are replaced by the new bundled versions, assuming you did not explicitly update them previously. If you did update them, they are shown as pinned in the Plugin Manager interface and will not be updated automatically, even if the newly bundled version is newer than the version you have installed.

As such, you should consider updating them after the core upgrade by doing the following:

  1. Click Check Now in the Advanced tab since the update center definition varies according to the Jenkins core version.

  2. Check the plugin release notes (normally hyperlinked from the plugin name in the Updates tab) to see if you want to accept updates. In some cases, a plugin update will be mandatory for a core upgrade (when the older version of the plugin was found to be incompatible with the new core).

Please note that when upgrading your CloudBees Jenkins Platform, a double restart of Jenkins may be required (and Jenkins will let you know this).

Figure 1. Double Restart Banner
Note: Make sure you update the CloudBees License Manager plugin to the latest available version before upgrading CloudBees Jenkins Platform.

CloudBees Assurance Program

If you upgrade from CloudBees Jenkins Platform based on Jenkins 1.x to CloudBees Jenkins Platform based on Jenkins 2 (more specifically, CloudBees Jenkins Platform 2.7.19.1 or higher), then you can take advantage of the CloudBees Assurance Program, which is enabled by default through the Beekeeper Page ("Beekeeper"). The CloudBees Assurance Program monitors and modifies the installed plugins to improve the stability and security of your Jenkins instance.

Reporting problems on CloudBees Jenkins Platform

You can file tickets on the customer-only web portal. Use the Support plugin to include diagnostics about your CloudBees Jenkins Platform installation.

Creating custom extensions for CloudBees Jenkins Platform

For some advanced use cases, you may wish to create custom plugins depending on CloudBees Jenkins Platform plugins. Or, you may wish to call APIs in CloudBees Jenkins Platform plugins from custom Groovy scripts and want details on these APIs.

In either case, you can use the https://repo.cloudbees.com/content/repositories/dev-connect/ Maven repository. This includes *.jar artifacts for plugins and associated libraries, suitable for inclusion in a classpath; *.hpi artifacts (plugins ready to be deployed); *-javadoc.jar artifacts with Javadoc-format documentation for plugins and associated libraries; and *.pom metadata for use from Maven.

This repository is "blind": there is currently no (Nexus-format) index, and artifacts are not displayed in directory listings until explicitly requested. As such, you need to know the group ID, artifact ID, and version (GAV) of every artifact you wish to use. New releases are available in this repository within a day or two of their being published.

An easy way to find the Maven GAV based on a plugin you are running is to go to Manage Jenkins → Manage Plugins → Installed. Find the plugin, such as CloudBees Template Plugin, and click on its hyperlinked version number. You will be taken to a page such as /pluginManager/plugin/cloudbees-template/thirdPartyLicenses, which in this case shows a GAV such as com.cloudbees.nectar.plugins:cloudbees-template:4.17. That would become a dependency in your pom.xml like this:

<dependency>
  <groupId>com.cloudbees.nectar.plugins</groupId>
  <artifactId>cloudbees-template</artifactId>
  <version>4.17</version>
</dependency>

If browsing Javadoc, you may find it useful to visit overview-tree.html Hierarchy For All Packages and search for hudson.ExtensionPoint. Classes defined in the plugin which directly implement this marker interface (as opposed to extending some type defined elsewhere which implements it) are likely to be useful starting points. In this case you might find FolderInitActivity, ModelProperty, ModelTransformer, ModelVisibility and UIControl: the ways in which a custom plugin could extend Templates.

Accessing additional resources

Terms and definitions

Jenkins

Jenkins is an open-source automation server, as well as an independent open-source community to which CloudBees actively contributes. You can find more information about Jenkins and CloudBees contributions on the CloudBees site.

CloudBees Jenkins Platform

Commercial version of Jenkins based on Jenkins Long-Term Support (LTS) releases with frequent updates by CloudBees. CloudBees Jenkins Platform also provides a number of plugins that help organizations address main needs of enterprise installations: Security, High Availability, Continuous Delivery, etc.

operations center

Operations console for Jenkins that allows you to manage multiple Jenkins controllers. See details on the CloudBees site.

CloudBees Jenkins Analytics

Provides insight into your usage of Jenkins and the health of your Jenkins cluster by reporting events and metrics to operations center.

CloudBees Jenkins Platform on the Microsoft Azure Marketplace

CloudBees Jenkins Platform is no longer available on the Microsoft Azure Marketplace for new subscribers. However, CloudBees CI is available on the Microsoft Azure Marketplace for new subscribers.

High Availability

CloudBees Jenkins Platform comes with the capability to run in a high-availability setup, where two or more JVMs form a so-called "HA singleton" cluster, to ensure business continuity for Jenkins. This improves the availability of the service against unexpected problems in the JVM, the hardware it runs on, and so forth. When a Jenkins JVM becomes unavailable (for example, when it stops responding, or when it dies), other nodes in the cluster automatically take over the role of the controller, restoring service with minimal interruption.

It is also important for users to understand what this feature does not do in its current form. It is not a symmetric cluster where participating nodes share workloads; at any given point, only one of the nodes performs the controller role (hence "HA singleton"). Because of this, when a failover takes place, users will experience a brief downtime, comparable to someone rebooting a Jenkins controller in a non-HA setup. Builds that were in progress are lost, too. This guide walks through the high-level overview, details implementation and configuration steps, and finishes with troubleshooting tips.

Note: This guide is tailored towards operations center, but can be applied to CloudBees Jenkins Platform client controllers. Where applicable, a note will describe differences between the two.

Principal design

The diagram below outlines the principal design details, with routing for each communication protocol.

Figure: CloudBees Jenkins Platform HA network diagram

Alpha and Beta

To create the "HA singleton", it is necessary to have two copies of operations center: Alpha and Beta (active and standby, respectively). In HA mode, the two Jenkins instances cooperatively elect the "primary" JVM, and depending on the outcome of this election, members of a cluster starts/stops the client controller in the same JVM. From the viewpoint of the code inside operations center, this is as if it is started/stopped programmatically. operations center relies on JGroups for the underlying group membership service.

By default, operations center uses TCP to communicate between members, with IP addresses and ports registered in the directory $JENKINS_HOME/jgroups (which all members must be able to write to). This can be changed by creating a $JENKINS_HOME/jgroups.xml file containing the JGroups protocol stack configuration XML. See the JBoss Clustering documentation for the format of this file, as well as typical configuration tips and troubleshooting.

HA failover

A failover is effectively (1) shutting down the current Jenkins JVM, followed by (2) starting it up in another location. Sometimes step 1 doesn’t happen, for example when the current controller crashes. Because these controllers work with the same $JENKINS_HOME, this failover process has the following characteristics:

  • Jenkins global settings, configuration of jobs/users, fingerprints, record of completed builds (including archived artifacts, test reports, etc.), will all survive a failover.

  • User sessions are lost. If your Jenkins installation requires users to log in, they’ll be asked to log in again.

During the startup phase of the failover, Jenkins will not be able to serve inbound requests or builds. Therefore, a failover typically takes a few minutes, not a few seconds.

Note: For HA client controllers, builds that were in progress will normally not survive a failover, although their records will survive. The builds based on Jenkins Pipeline Jobs will continue to execute. No attempt will be made to re-execute interrupted builds, though the Restart Aborted Builds plugin will list the aborted builds.

Load Balancer

In general, all Jenkins traffic should be routed through the load balancer with the exception of the JNLP transport. The JNLP transport is used by client controllers to connect to operations center and by Build Agents configured to use the "Java Web Start" launcher.

Operations center accepts incoming JNLP connections from multiple components: client controllers and agents (usually shared agents). On a failover (when a new operations center instance gets the primary role in the cluster), all those JNLP connections are automatically restored.

The hudson.TcpSlaveAgentListener.hostName system property is made available to clients as an HTTP header (X-Jenkins-CLI-Host) that is included in all HTTP responses. JNLP clients use this header to determine which hostname to use for the JNLP protocol, and they will continue attempting to connect to the JNLP port until a connection is successfully established after a failover. If the system property is not set, the host name of the Jenkins root URL is used. The hudson.TcpSlaveAgentListener.hostName property must be properly set on all instances for load balancing to work correctly in an HA configuration.

There are many software and hardware load balancer solutions. CloudBees recommends the open-source solution haproxy, which doubles as a reverse proxy. For a truly highly-available setup, haproxy itself needs to be made highly available. Support for HAProxy is available from HAProxy Technologies Inc.

In general, the Load Balancers in HA will be configured to:

  • Route HTTP and HTTPS traffic

  • Route SSHD TCP traffic

  • Listen for the heartbeat on /ha/health-check

  • (Optional if using HTTPS) Set location of SSL key for HTTPS

This guide will detail the setup and configuration for HAProxy. For guidance on configuring other load balancers, please see the documentation for your chosen solution.

HTTPS and SSL

In addition to acting as a load balancer and reverse proxy, haproxy acts as an SSL termination point for operations center. This allows internal traffic to remain in the default HTTP configuration while providing a secured endpoint for users.

If you do not have access to your environment's SSL key files, please reach out to your operations teams.

If the SSL certificate used by haproxy is not trusted by default by the operations center JVM, it must be added to the JVM keystores of both operations centers and all client controllers.

If the SSL certificate is used to secure the connection to elasticsearch, it must be added to both operations center JVM keystores.

Note: When making client controllers highly available, if the SSL certificate used by haproxy is not trusted by default by the operations center JVM, it is recommended that it be added to the keystores of both operations centers.

For further information, please refer to How to install a new SSL certificate.

Shared storage

Each member node of an operations center HA cluster needs to see a single coherent shared file system that can be read and written simultaneously. That is to say, for nodes Alpha and Beta in the cluster, if node Alpha creates a file in $JENKINS_HOME, node Beta needs to be able to see it within a reasonable amount of time. A "reasonable amount of time" here means the time window during which you are willing to lose data in case of a failure. This is commonly accomplished with an NFS shared file system.

To set up the NFS server, see the NFS Guide.

Note: For truly highly-available Jenkins, NFS storage itself needs to be made highly-available. There are many resources on the web describing how to do this.

If you are using Amazon's Elastic File System service for your NFS server, ensure that the file system's Performance Mode is set to "Max I/O".

CloudBees Jenkins HA monitor tool

The CloudBees Jenkins Platform HA monitor tool is optional and is not required for setting up an HA cluster. It is a small background application that executes code when the primary Jenkins JVM becomes unresponsive. Such setup/teardown scripts can only be reliably triggered from outside Jenkins, and they normally require root privileges to run. It can be downloaded from the jenkins-ha-monitor section of the download site.

Its three defining options are as follows:

  • The -home option specifies the location of $JENKINS_HOME. The monitor tool picks up network configuration and other important parameters from here, so it needs to know this location.

  • The -host-promotion option specifies the location of the promotion script, which is executed when the primary Jenkins JVM moves from another system to this system as a result of an election. In native packages, this file is placed at /etc/jenkins-ha-monitor/promotion.sh.

  • The -host-demotion option specifies the location of the demotion script, which is the opposite of the promotion script and is executed when the primary Jenkins JVM moves from this system to another system. In native packages, this file is placed at /etc/jenkins-ha-monitor/demotion.sh. Promotion and demotion scripts need to be idempotent, in the sense that the monitor tool may run the promotion script on an already promoted node, and the demotion script on an already demoted node. This can happen, for example, when a power outage hits a stand-by node and it comes back up: the monitor tool runs the demotion script again on this node, since it cannot be certain about the state of the node before the power outage.

Run the tool with the -help option to see the complete list of available options. The configuration file read when the service starts is located at /etc/sysconfig/jenkins-ha-monitor.
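
Putting these options together, an illustrative invocation follows. The paths follow the native-package defaults described above, and the tool normally runs as root:

# Sketch only; verify options against the tool's -help output.
sudo java -jar jenkins-ha-monitor.jar \
  -home /var/lib/jenkins \
  -host-promotion /etc/jenkins-ha-monitor/promotion.sh \
  -host-demotion /etc/jenkins-ha-monitor/demotion.sh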

Setup procedure

The HA setup in operations center provides the means for multiple JVMs to coordinate and ensure that the Jenkins controller is running somewhere, but it does so by relying on the availability of the storage that houses $JENKINS_HOME and on the HTTP reverse proxy mechanism that hides a failover from users who are accessing Jenkins. Aside from NFS as storage and the reverse proxy mechanism, operations center can run on a wide range of environments. The following sections describe the parameters required from them and discuss more examples of the deployment mode.

The setup procedure follows this basic format:

  • Before you begin

  • Configure Shared Storage

  • Configure operations center and client controllers

  • Install operations center HA monitor tool

  • Configure HAProxy

After these steps are followed, you should follow the Testing and Troubleshooting section.

Configure shared storage on Alpha and Beta

All the member nodes of a CloudBees Jenkins Platform HA cluster need to see a single coherent file system that can be read and written to simultaneously. Creating a shared storage mount is outside the scope of this guide. Please see the Shared Storage section, above.

Mount the Shared Storage to Alpha and Beta:

$ mount -t nfs -o rw,hard,intr sierra:/jenkins /var/lib/jenkins

/var/lib/jenkins is chosen to match what the CloudBees Jenkins Platform packages use as $JENKINS_HOME. If you change it, update /etc/default/jenkins (on Debian) or /etc/sysconfig/jenkins (on Red Hat and SUSE) so that $JENKINS_HOME points to the correct directory.

Before continuing, test that the mountpoint is accessible and permissions are correct on both machines by executing touch /<mount_point>/test.txt on Alpha and then trying to edit the file on Beta, for example:
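
A sketch, assuming the /var/lib/jenkins mount point used above:

# On Alpha:
touch /var/lib/jenkins/test.txt

# On Beta, verify the file is visible and writable, then clean up:
echo test >> /var/lib/jenkins/test.txt
rm /var/lib/jenkins/test.txt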

Install and Configure operations center and client controllers

Choose the appropriate debian/redhat/openSUSE package format depending on the type of your distribution. Install CJOC on Alpha and Beta.

Upon installation, both instances of Jenkins will start running. Stop them by issuing /etc/init.d/jenkins stop while we work on the HA setup configuration. If you don't know how to set a Java argument on Jenkins, you can follow this KB article.

When operations center is being used in an HA configuration, both Alpha and Beta instances must modify the $JENKINS_HOME Jenkins argument to point to the shared NFS mount.

It is an important performance optimization that the WAR file is not extracted to the $JENKINS_HOME/war directory in the shared filesystem. In a rolling upgrade scenario, an upgrade of the secondary instance followed by a (standby) boot can corrupt this directory. Some configurations may do this by default, but WAR extraction can easily be redirected to a local cache (ideally SSD for better Jenkins core I/O) on the container/VM’s local filesystem with the JENKINS_ARGS properties --webroot=$LOCAL_FILESYSTEM/war --pluginroot=$LOCAL_FILESYSTEM/plugins. For example, on debian installations, where $NAME refers to the name of the jenkins instance: --webroot=/var/cache/$NAME/war --pluginroot=/var/cache/$NAME/plugins

$JENKINS_HOME is read intensively during start-up. If bandwidth to your shared storage is limited, you'll see the most impact in startup performance. High latency causes a similar issue, but this can be mitigated somewhat by raising the boot-up concurrency with the system property -Djenkins.InitReactorRunner.concurrency=8.

To ensure that JNLP clients can connect in an HA configuration, each instance must set the system property JENKINS_OPTS=-Dhudson.TcpSlaveAgentListener.hostName= to a hostname (preferred, and required when using HTTPS) or IP address (unsupported for HTTPS configurations) of that instance that can be resolved and connected to by all client controllers and build agents that use the JNLP connection protocol. This requires more exposure of the operations center servers, as all instances must be addressable by all client controllers.
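
For example, on Alpha (the hostname is a placeholder; Beta would set its own):

# /etc/default/jenkins or /etc/sysconfig/jenkins on Alpha
JENKINS_OPTS="$JENKINS_OPTS -Dhudson.TcpSlaveAgentListener.hostName=alpha.example.com"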

Because we are configuring HTTPS traffic, it is necessary to update the JAVA_ARGS property to add -Djavax.net.ssl.trustStore=path/to/jenkins-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit. To debug SSL issues, add -Djavax.net.debug=all; see Oracle's Debugging SSL/TLS Connections and How to install a new SSL certificate.
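
If the certificate is not already in a truststore, a sketch of importing it with keytool follows. The alias, file names, and paths are illustrative, and changeit is the conventional default keystore password:

# Import the haproxy certificate into the truststore referenced by JAVA_ARGS above
keytool -importcert -alias haproxy \
  -file /etc/ssl/certs/haproxy.crt \
  -keystore /var/lib/jenkins/jenkins-truststore.jks \
  -storepass changeit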

When using the Red Hat package (RPM), it is highly recommended after installation to set the option JENKINS_INSTALL_SKIP_CHOWN to true in /etc/sysconfig/jenkins. This prevents future upgrades from applying a chown to your JENKINS_HOME folder, which can take a long time (especially in an HA setup, where JENKINS_HOME is on a remote NFS file system and will be updated by the upgrades of both nodes).

Install operations center HA monitor tool

Next, we set up a monitoring service to ensure the HA failover completes. To do this, log on to Alpha and install the jenkins-ha-monitor package. This monitoring program watches Jenkins as root, and when the role transition occurs, it’ll execute the promotion script or the demotion script.

The monitor tool is packaged as a single jar file that can be executed with java -jar jenkins-ha-monitor.jar. It is also packaged as jenkins-ha-monitor RPM/DEB packages for easier installation, available in the jenkins-ha-monitor section of the download site.

Install it as you would any RPM file, for example:

sudo rpm -i jenkins-ha-monitor-<version>.noarch.rpm

To uninstall it:

sudo rpm -e jenkins-ha-monitor

The configuration file read when the service starts is located at /etc/sysconfig/jenkins-ha-monitor. By default, the CloudBees Jenkins Platform HA monitor tool logs to /var/log/jenkins/jenkins-ha-monitor.log.

Configure HAProxy

Let's expand on this setup further by introducing an external load balancer and reverse proxy that receives traffic from users, determines the primary JVM based on a heartbeat, and then directs traffic to that JVM. CloudBees recommends configuring HTTPS, and this guide is tailored for HTTPS. HAProxy can be installed on most Linux systems via native packages, such as apt-get install haproxy or yum install haproxy; however, CloudBees recommends that haproxy version 1.6 (or newer) be installed. For operations center HA, the configuration file (normally /etc/haproxy/haproxy.cfg) should look like the following:

global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user haproxy
    group haproxy
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3
    tune.ssl.default-dh-param 2048

defaults
    log global
    option http-server-close
    option log-health-checks
    option dontlognull
    timeout http-request 10s
    timeout queue 1m
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-keep-alive 10s
    timeout check 500
    default-server inter 5s downinter 500 rise 1 fall 1

# redirect HTTP to HTTPS
listen http-in
    bind *:80
    mode http
    redirect scheme https code 301 if !{ ssl_fc }

listen https-in
    # change the location of the pem file
    bind *:443 ssl crt /etc/ssl/certs/your.pem
    mode http
    option httplog
    option httpchk HEAD /ha/health-check
    option forwardfor
    option http-server-close
    # alpha and beta should be replaced with hostname (or ip) and port
    # 8888 is the default for CJOC, 8080 is the default for client controllers
    server alpha alpha:8888 check
    server beta beta:8888 check
    reqadd X-Forwarded-Proto:\ https

listen ssh
    bind 0.0.0.0:2022
    mode tcp
    option tcplog
    option httpchk HEAD /ha/health-check
    # alpha and beta should be replaced with hostname (or ip) and port
    # 8888 is the default for CJOC, 8080 is the default for client controllers
    server alpha alpha:2022 check port 8888
    server beta beta:2022 check port 8888

The global section contains stock settings, and the defaults section has been configured with typical timeout settings. You must configure the listen blocks for your particular environment. To determine the port configuration of an existing installation, please reference the How to add Java arguments to Jenkins guide.

The "Default SSL material locations" and bind *:443 ssl crt /etc/ssl/certs/server.bundle.pem sections define the paths to the necessary ssl keyfiles.

Together, the https-in and http-in sections determine the bulk of the configuration necessary for operations center routing. These listen blocks tell haproxy to forward traffic to the two servers alpha and beta, and to periodically check their health by sending a request to /ha/health-check. Unlike active nodes, standby nodes do not respond positively to this health check, and that is how haproxy determines traffic routing.
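
To observe this from the command line, a quick check (hostnames and port follow the example configuration above). The primary node returns an HTTP success code, while the standby does not:

# Print the HTTP status code returned by each node's health check
curl -s -o /dev/null -w '%{http_code}\n' http://alpha:8888/ha/health-check
curl -s -o /dev/null -w '%{http_code}\n' http://beta:8888/ha/health-check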

The ssh section is configured to forward TCP requests. For these services, haproxy uses the same health check on the application port, which ensures that all services fail over together when the health check fails.

Testing and Troubleshooting your HA Installation

HA Cluster Membership

When two nodes that form a cluster lose contact, each node will assume that the other has died and take on the role of the primary node. This is called a "split brain" problem, and it is problematic because you end up with two independently acting operations centers. A similar problem can occur if one node in the cluster is severely stressed under load. The HA monitor tool includes a sanity check script hook, which gives users an opportunity to apply heuristics that reduce the likelihood of this problem.

To guard against this, the executable sanity-check.sh script in $JENKINS_HOME is run before a node assumes the primary role, as well as whenever there is a change in cluster membership. Its purpose is to let you verify that the node should really proceed to act as the primary. If the script exits with 0, the node boots up as the primary node; if it exits with non-zero, the node does not act as the primary. For example, the sanity script can check the availability of $JENKINS_HOME, whether the load balancer is routable, or whether the system load is reasonably low.
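
As an illustration, a hypothetical sanity-check.sh along these lines (the hostname, paths, and load threshold are placeholders to adapt):

#!/bin/sh
# Exit 0 to allow this node to become primary; non-zero to veto.
JENKINS_HOME=/var/lib/jenkins
LB_HOST=haproxy.example.com

# $JENKINS_HOME must be mounted and writable
touch "$JENKINS_HOME/.sanity-check" && rm -f "$JENKINS_HOME/.sanity-check" || exit 1

# The load balancer must be routable from this node
ping -c 1 -W 2 "$LB_HOST" > /dev/null 2>&1 || exit 1

# The 1-minute load average must be reasonably low (threshold is an example)
load=$(awk '{print int($1)}' /proc/loadavg)
[ "$load" -lt 8 ] || exit 1

exit 0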

HAProxy

If you are having issues with haproxy connectivity, modify the defaults section of the haproxy configuration to troubleshoot with these additional options:

defaults
    log global
    # The following log settings are useful for debugging
    # Tune these for production use
    option logasap
    option http-server-close
    option redispatch
    option abortonclose
    option log-health-checks
    mode http
    option dontlognull
    retries 3
    maxconn 2000
    timeout http-request 10s
    timeout queue 1m
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-keep-alive 10s
    timeout check 500
    default-server inter 5s downinter 500 rise 1 fall 1

Everything else

For additional troubleshooting tips please see How to Troubleshoot HA Installations.


1. In the commands below, simply replacing yum with dnf should work just fine.