Operating CloudBees Core on modern cloud platforms

Managing operation center

Creating a Manual Backup

For certain scenarios, such as a Jenkins update, a manual backup (an exact snapshot of your current instance taken before the update) is a better approach than a scheduled backup job (see Using CloudBees Backup later in this chapter).

$ kubectl exec cjoc-0 -it -- tar -C /var/jenkins_home/ -czf /tmp/oc-jenkins-home.backup.tar.gz .
tar: removing leading '/' from member names
$ kubectl cp cjoc-0:/tmp/oc-jenkins-home.backup.tar.gz oc-jenkins-home.backup.tar.gz
tar: removing leading '/' from member names
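Before relying on the archive, it is worth listing its contents locally as a quick sanity check:

$ tar -tzf oc-jenkins-home.backup.tar.gz | head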

Restore

For Operations Center, there is no "Restore from backup" item or project. Therefore, this process must be done manually.

The approach is to create a "rescue-pod" that mounts the CJOC volume via the same persistentVolumeClaim, and restore the backup from there.

Steps

  1. Check the user and group IDs of the jenkins user in the pod to be restored. Note that the rescue-pod does not necessarily have the same configuration as cjoc-0, so you might need to change the ownership based on these user and group IDs.

    $ kubectl --namespace=cje-cluster-example exec cjoc-0 -- cat /etc/passwd | grep jenkins
    jenkins:x:1000:1000:Linux User,,,:/var/jenkins_home:/bin/bash
    $ kubectl --namespace=cje-cluster-example exec cjoc-0 -- cat /etc/group | grep jenkins
    jenkins:x:1000:jenkins
  2. Scale down the cjoc

    $ kubectl --namespace=cje-cluster-example scale statefulset/cjoc --replicas=0
    statefulset.apps "cjoc" scaled
  3. List the Persistent Volume Claims

    $ kubectl --namespace=cje-cluster-example get pvc
    NAME                  STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    jenkins-home-cjoc-0   Bound     pvc-6b27e963-b770-11e8-bcbf-42010a8400c1   20Gi       RWO            standard       46d
    jenkins-home-mm1-0    Bound     pvc-b2b7e305-ba66-11e8-bcbf-42010a8400c1   50Gi       RWO            standard       42d
    jenkins-home-mm2-0    Bound     pvc-6561b8da-c0c8-11e8-bcbf-42010a8400c1   50Gi       RWO            standard       34d
  4. Run the rescue-pod with the required pvc (jenkins-home-cjoc-0 in this example)

    $ cat <<EOF | kubectl --namespace=cje-cluster-example create -f -
    kind: Pod
    apiVersion: v1
    metadata:
      name: rescue-pod
    spec:
      volumes:
        - name: rescue-storage
          persistentVolumeClaim:
            claimName: jenkins-home-cjoc-0
      containers:
        - name: rescue-container
          image: nginx
          volumeMounts:
            - mountPath: "/tmp/jenkins-home"
              name: rescue-storage
    EOF
    pod "rescue-pod" created
  5. Move the backup file to the rescue-container

    kubectl cp oc-jenkins-home.backup.tar.gz rescue-pod:/tmp/
  6. (Optional) Clean the previous Jenkins home.

    If you have a complete copy of the $JENKINS_HOME (from a manual backup) and you wish to perform a rollback after a failed update, wipe out the old content first.

    # files
    kubectl exec --namespace=cje-cluster-example rescue-pod -it -- find /tmp/jenkins-home -type f -delete
    # folders
    kubectl exec --namespace=cje-cluster-example rescue-pod -it -- find /tmp/jenkins-home/ -mindepth 1 -type d -prune -exec rm -rf {} +
  7. Uncompress the backup file inside the cjoc Persistent Volume Claim

    kubectl exec --namespace=cje-cluster-example rescue-pod -it -- tar -xzf /tmp/oc-jenkins-home.backup.tar.gz -C /tmp/jenkins-home
  8. Check the permissions

    kubectl exec --namespace=cje-cluster-example rescue-pod -it -- ls -laR /tmp/jenkins-home

    If some files or folders are not owned by the jenkins user and group, set the ownership recursively with:

    kubectl exec --namespace=cje-cluster-example rescue-pod -it -- chown -R 1000:1000 /tmp/jenkins-home
    Note
    Use the jenkins user and group IDs found in step 1 (1000:1000 in this example).
  9. Delete the rescue-pod

    kubectl --namespace=cje-cluster-example delete pod rescue-pod
  10. Scale up the cjoc

    kubectl --namespace=cje-cluster-example scale statefulset/cjoc --replicas=1
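    Optionally, watch the pod start up again before directing traffic to it:

    kubectl --namespace=cje-cluster-example get pods --watch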

Managing masters

CloudBees Core enables Continuous Delivery as a Service by offering a highly elastic infrastructure that supports centralized management of teams, shareable resources across the cluster and automated management of the underlying cluster infrastructure.

With CloudBees Core on modern cloud platforms’s Managed Masters, administrators can offer their teams highly available build environments with a minimum of management and configuration overhead. This section describes the CloudBees Core on modern cloud platforms architecture, how administrators can scale their CloudBees Core on modern cloud platforms cluster, and some upgrade and disaster recovery strategies for CloudBees Core on modern cloud platforms clusters.

Individual agents can also be provisioned and controlled from CloudBees Core on modern cloud platforms. Refer to Managing Agents for more information.

See CloudBees Core Reference Architectures - Kubernetes for more details of the CloudBees Core on modern cloud platforms architecture.

Managed masters in specific Kubernetes namespaces

By default, managed masters are created in the same namespace that Operations Center is running in.

To create a managed master in a specific Kubernetes namespace, the namespace must be pre-created with the proper resources.

Those resources are:

  • The 'jenkins' Kubernetes ServiceAccount that will be used by the managed master(s) to provision Jenkins agents.

  • The Role and RoleBinding of the 'jenkins' ServiceAccount

  • The Role and RoleBinding of Operations Center ServiceAccount to allow Operations Center to manage the master resources

Here is the definition of the 'jenkins' service account and associated Role and RoleBinding:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pods-all
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-all
subjects:
- kind: ServiceAccount
  name: jenkins

To create a managed master in a specific namespace, Operations Center must have the Role privileges to do so.

NOTE that the RoleBinding must specify the namespace in which the cjoc ServiceAccount is defined (in the following example: cje)

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: master-management
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["list"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get","list","watch"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: cjoc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: master-management
subjects:
- kind: ServiceAccount
  name: cjoc
  # cjoc service account namespace
  namespace: cje
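As a sketch (the namespace and file names below are examples, not values mandated by CloudBees), the manifests above can be applied to a pre-created namespace as follows:

# 'masters-dev' is an example namespace; the manifests above are assumed to be
# saved as jenkins-rbac.yaml and cjoc-rbac.yaml.
kubectl create namespace masters-dev
kubectl --namespace=masters-dev apply -f jenkins-rbac.yaml
kubectl --namespace=masters-dev apply -f cjoc-rbac.yaml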

Optionally, you can give Operations Center the privileges to list namespaces so that the user can select the namespace instead of typing the namespace in. To accomplish this, Operations Center must have the ClusterRole privileges to do so.

NOTE that the ClusterRoleBinding must specify the namespace in which the cjoc ServiceAccount is defined (in the following example: cje)

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cjoc-ns-management
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["list"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cjoc-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cjoc-ns-management
subjects:
- kind: ServiceAccount
  name: cjoc
  # cjoc service account namespace
  namespace: cje

The namespace for the master resources can be configured as the default namespace for all managed masters in the main Operations Center configuration screen.

The namespace can also be specified for a specific managed master in the master configuration screen.

NOTE: Leave the namespace value empty to use the value defined by the Kubernetes endpoint.

Managed masters in specific OpenShift projects

By default, managed masters are created in the same project that Operations Center is running in.

To create a managed master in a specific OpenShift project, the project must be pre-created with the proper resources.

Those resources are:

  • The 'jenkins' ServiceAccount that will be used by the managed master(s) to provision Jenkins agents.

  • The Role and RoleBinding of the 'jenkins' ServiceAccount

  • The Role and RoleBinding of Operations Center ServiceAccount to allow Operations Center to manage the master resources

Note

Red Hat recommends that OpenShift production clusters use the ovs-multitenant network plugin. This plugin ensures that no namespace can reference another namespace’s services without going through a route exposed on the router.

If ovs-multitenant is enabled, then the project running Operations Center needs to be a global project in order to run managed masters in other projects. Use the oc adm command below to make the project global, replacing cloudbees with the name of your project.

oc adm pod-network make-projects-global cloudbees
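To confirm the change, you can inspect the project’s network ID; with ovs-multitenant, a global project shows NETID 0 (a quick check, assuming the OpenShift SDN resources are available in your cluster version):

oc get netnamespace cloudbees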

Here is the definition of the 'jenkins' service account and associated Role and RoleBinding:

Note
The RoleBinding namespace '<PROJECT-MASTER-X>' should be the newly created project name.
apiVersion: v1
kind: List
items:
 -
  kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: jenkins

 -
  kind: Role
  apiVersion: v1
  metadata:
    name: pods-all
  rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]

 -
  kind: RoleBinding
  apiVersion: v1
  metadata:
    name: jenkins
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: pods-all
    # The new project name
    namespace: <PROJECT-MASTER-X>
  subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: <PROJECT-MASTER-X>
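As a sketch, assuming the list above is saved as jenkins-rbac.yaml (a file name chosen here for illustration) with <PROJECT-MASTER-X> replaced by the real project name, the resources can be created with:

oc new-project master-x
oc create -f jenkins-rbac.yaml -n master-x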

To create a managed master in a specific OpenShift project, Operations Center must have the Role privileges to do so.

Note
The RoleBinding namespace '<PROJECT-MASTER-X>' should be the newly created project name.

The RoleBinding must specify the namespace in which the cjoc ServiceAccount is defined (in the following example, cje).

apiVersion: v1
kind: List
items:
 -
  kind: Role
  apiVersion: v1
  metadata:
    name: master-management
  rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: ["route.openshift.io",""]
    resources: ["routes"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: ["route.openshift.io"]
    resources: ["routes/custom-host"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["list"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get","list","watch"]

 -
  kind: RoleBinding
  apiVersion: v1
  metadata:
    name: cjoc
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: master-management
    namespace: <PROJECT-MASTER-X>
  subjects:
  - kind: ServiceAccount
    name: cjoc
    # cjoc service account project name
    namespace: cje

Optionally, you can give Operations Center the privileges to list namespaces so that the user can select the project/namespace instead of typing the namespace in. To accomplish this, Operations Center must have the ClusterRole privileges to do so.

Note
The ClusterRoleBinding must specify the namespace in which the cjoc ServiceAccount is defined (in the following example, cje).
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cjoc-ns-management
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["list"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cjoc-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cjoc-ns-management
subjects:
- kind: ServiceAccount
  name: cjoc
  # cjoc service account namespace
  namespace: cje

Master provisioning configuration

To provision masters in their own projects, each master must use a specific sub-domain. For example, if the Operations Center domain is 'cd.example.org' and the URL is 'https://cd.example.org/cjoc/', a master named dev1 would need to use the sub-domain 'dev1.cd.example.org' or 'dev1-cd.example.org'. The latter is often preferable when using a wildcard certificate for the domain 'example.org'.

To configure each master to use a specific sub-domain, set the 'Master URL Pattern' in the main Jenkins configuration page 'Manage Jenkins → Configure System' under 'Kubernetes Master Provisioning' advanced options. For example if the Operations Center domain is 'cd.example.org', the 'Master URL Pattern' would be 'https://*-cd.example.org/*/'.

Provision Masters

The project for the master resources can be configured as the default project for all managed masters in the main Operations Center configuration screen with the 'namespace' parameter.

The project can also be specified for a specific managed master in the master configuration screen with the 'namespace' parameter.

Note
Leave the namespace value empty to use the value defined by the Kubernetes endpoint.

CloudBees Assurance Program

CloudBees Core on modern cloud platforms provides greater stability and security for Jenkins installations through the CloudBees Assurance Program. The CloudBees Assurance Program supports all Jenkins-based components, including CloudBees Core on modern cloud platforms and CloudBees Jenkins Team. The CloudBees Assurance Program simplifies securing and upgrading multiple Jenkins instances and assures compliance with CloudBees-recommended master configurations, as detailed in the Beekeeper Upgrade Assistant page.

Learn more about managing upgrades and downgrades with the CloudBees Assurance Program.

Operating Managed and Client Masters

When new teams join an organization, or existing teams start a new project, CloudBees Core makes it easy to provision a fully managed and access controlled Jenkins master per team. In CloudBees Core on modern cloud platforms, a Jenkins master is referred to as a Managed Master.

Administrators can provision Managed Masters from a standardized template or they can allow team leads to provision their own Managed Masters "on-demand." The number of masters a CloudBees Core on modern cloud platforms environment can handle is limited only by the capacity of the cluster.

Adding Client Masters

Occasionally administrators will need to connect existing masters to a CloudBees Core cluster, such as in the case of a team requiring a master on Windows. Existing masters that are connected to Operations Center lack key benefits of Managed Masters like high availability and automatic agent management. Whenever possible, administrators should use a Managed Master with CloudBees Core on modern cloud platforms rather than connecting an existing master.

Client Masters are monitored by Operations Center just as Managed Masters are monitored. Administrators can see the status of all their Managed Masters, Team Masters, and Client Masters from the Operations Center masters page. Client Masters can receive configuration updates from Operations Center with configuration snippets. Client Masters can share agents hosted in the cluster, offloading the burden of agent management from teams.

Client Masters do not have the high availability features of Managed Masters.

Note: The existing Client Master and Operations Center must both accept JNLP requests. See Kubernetes client master configuration for more information.

To add a Client Master, log into CloudBees Core and navigate to the Operations Center dashboard. Click the New Item option in the left-hand menu and provide the following information:

  • Item name: the name of the existing Client Master to connect to the cluster

  • Select Client Master. Just as with Managed Masters, Client Masters offer some customization and configuration options on creation:

  • On-master executors: Number of builds to execute concurrently on the Client Master itself. Default setting is 2.

  • Email addresses: Contact information for the administrator responsible for maintaining the Client Master.

Once these settings are saved, Operations Center will attempt to connect the Client Master to the CloudBees Core on modern cloud platforms cluster.

Verify that Operations Center and the existing Client Master can communicate with each other over both the HTTP and JNLP ports. The host and port to use for JNLP are advertised through HTTP headers by each Jenkins master.
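As a quick check (the hostname and port below are placeholders, not values from your installation), you can inspect the advertised headers and test that the JNLP TCP port is reachable:

# Inspect the HTTP headers advertised by Operations Center.
curl -sSI https://cjoc.example.org/ | grep -i '^X-'
# Test that the advertised JNLP port (50000 here, as an example) is reachable.
nc -zv cjoc.example.org 50000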

You can connect an existing Client Master to Operations Center by giving that Client Master the TLS certificate for Operations Center, typically through the Configure Global Security page in Operations Center. For more information, see How to programmatically connect a Client Master to CJOC.

If you are connecting multiple Client Masters to your cluster, it is a good idea to automate that task using shared configurations.

Once the Client Master is connected, administrators should configure security and access controls for the master.

Configuring a Client Master

This section describes how to configure a Client Master that has already been connected to your Operations Center instance.

To access a Client Master’s configuration:

  1. Ensure you are logged in to Operations Center as a user with the Client/Managed Master > Configure permission.

  2. From the Operations Center home page/Dashboard, click to the right of your configured Client Master (avoiding its name) and choose Configure from the dropdown menu.
    Client Master dropdown menu

  3. On the resulting Client Master configuration page, you can configure the following properties:

    • Description - Enter an optional description for the Client Master.

    • Health Reporting - When this check box is selected, health-related metrics from this Client Master are collected periodically. The default data collection period is once per minute, when data consumers are present (e.g. Weather columns or CloudBees Jenkins Analytics).

    • Analytics Reporting - When this check box is selected, events and other metrics from this Client Master are collected and reported for CloudBees Jenkins Analytics.

    • On-master executors - Select Enforce to specify the # of executors, which periodically ensures that the number of executors on the Client Master is the value specified in this # of executors field. Allowing items (i.e. projects or jobs) to execute directly on the Client Master is a security risk, since such projects/jobs could potentially access the file system and the build records of all previously run projects/jobs (which may contain sensitive information). Therefore, set this value to 0 to prevent any items from executing directly on the Client Master.

    • Master Owner - Specify the email address/es (one per line) of the "Owner/s" to be notified whenever this Client Master goes offline or changes state.
      Note: Clicking the Advanced button opens the Delay before notification field, which allows you to specify the number of minutes (a value between 1 and 60) between notifications.

    • Plugin Catalog - Select Specify a plugin catalog for this master to choose a plugin catalog to apply to this Client Master.

Configuring Plugin Catalogs

The Beekeeper Upgrade Assistant feature of your Operations Center’s Manage Jenkins area is the main interface and entry point to the CloudBees Assurance Program.

Beekeeper Upgrade Assistant manages appropriate upgrades (and downgrades) of plugins on your Operations Center instance, in accordance with the CloudBees Assurance Program.

On Client Masters, the Beekeeper Upgrade Assistant feature is also available, and:

  • Is accessible through a Client Master’s Manage Jenkins area

  • Performs exactly the same types of management activities (as this feature does for your Operations Center instance) on their respective Client Master instances.

However, if you also need to install non-CloudBees Assurance Program plugins on your Client Masters, such as plugins which are unknown to CloudBees (perhaps developed internally within your organization) or older versions of plugins, you can configure a plugin catalog. A plugin catalog is used by the Beekeeper Upgrade Assistant to widen the acceptable scope of plugins beyond those defined by the CloudBees Assurance Program. This scope includes the ability to specify plugin dependency rules and acceptable version ranges for these plugins. Your plugin catalog can also specify a Maven 2 artifact repository from which to download plugins.

Once a plugin catalog is defined and is added to your Operations Center instance, it can then be deployed to your connected Client Master instances.

This section contains the following procedures:

Defining a Plugin Catalog

Before you can add a plugin catalog to your Operations Center instance, you must first define it.

A plugin catalog is defined in JSON file format, which in turn defines one or more plugin configurations. A plugin configuration defines the set of rules for one or more plugins (some potentially dependencies) and their versions, which can be installed to your Operations Center instance, along with the CloudBees Assurance Program-defined plugins.

To create this file, using your favorite text editor, copy and paste the unannotated empty template below into a new file and save it with an appropriate file name, similar to that of the name or displayName value.

The following JSON content is an example of the contents of a plugin catalog file, whose annotations are explained underneath. An appropriate name for this plugin catalog file might be
additional-pipeline-plugins-catalog.json. Use this information to help complete your own plugin catalog file.

Annotated plugin catalog file
{
    "type" : "plugin-catalog", (1)
    "version" : "1", (2)
    "name" : "additional-pipeline-plugins", (3)
    "displayName" : "Additional Pipeline plugins", (4)
    "configurations" : [ (5)
        {
            "description" : "Additional Pipeline plugins for masters versions 2.61 and 2.73", (6)
            "prerequisites": { (7)
                "productVersion": "[2.60, 2.73]"
            },
            "includePlugins" : { (8)
                "tier3-plugin": {
                    "groupId": "com.my-company.jenkins.plugins", (9)
                    "version" : "1.8.2"
                },
                "my-corporate-plugin": {
                    "url" : "https://download.my-company.com/jenkins/my-corporate-plugin-3.2.1.hpi"
                },
                "my-other-corporate-plugin": {
                    "sha1" : "SC/TzSN+eOrewaDZJZyzLpIQV7E=", (10)
                    "url" : "https://download.my-company.com/jenkins/my-other-corporate-plugin-1.2.3.hpi" (11)
                }
            }
        },
        { (12)
            "description" : "Additional Pipeline plugins for masters versions 2.89+",
            "prerequisites": {
                "productVersion": "[2.89)"
            },
            "includePlugins" : {
                "tier3-plugin": {
                    "groupId": "com.my-company.jenkins.plugins",
                    "version" : "1.8.3"
                },
                "my-corporate-plugin": {
                    "url" : "https://download.my-company.com/jenkins/my-corporate-plugin-3.4.0.hpi"
                },
                "my-other-corporate-plugin": {
                    "sha1" : "SC/TzSN+eOrewaDZJZyzLpIQV7E=",
                    "url" : "https://download.my-company.com/jenkins/my-other-corporate-plugin-1.3.0.hpi"
                }
            }
        }
    ],
    "settings" : { (13)
        "httpCredentialsProvider" : { (14)
            "credentials" : [ (15)
                {
                    "authenticationScope": "ANY", (16)
                    "credentialsId": "download-server-credentials" (17)
                }
            ]
        },
        "repository": { (18)
            "layout": "maven2", (19)
            "url": "https://repo.my-company.com/content/repositories/dev-connect"
        }
    }
}
  1. type (required) - must have the fixed value plugin-catalog, which defines this JSON file as a plugin catalog file.

  2. version (required) - must currently have the value 1, which defines the current metadata format for the plugin catalog.

  3. name (required) - the internal name used to identify the plugin catalog. Avoid using spaces in this name.

  4. displayName (optional) - a human-readable name for the plugin catalog. This is the name of the plugin catalog that appears in the Operations Center interface.

  5. configurations (required) - the plugin catalog can define one or more plugin configurations within this array, where each plugin configuration is represented by an individual element of this array. At least one plugin configuration is required.

  6. description - provides more context regarding the configuration itself.

  7. prerequisites (optional) - an object which allows the definition of the productVersion member, which defines a range of supported Client Master versions (using the Maven Version Range Specification) for the plugin configuration. If the prerequisites object (and hence its productVersion member) is omitted, then any Client Master version is supported.

  8. includePlugins (optional) - an object which allows you to specify the ID values of one or more plugins to include as part of this plugin configuration. For each plugin (specified by its plugin-id), specify either the supported groupId and version pair for the plugin, or the URL from which to download it.

  9. groupId (optional) - provides the groupId for the plugin when it is downloaded from a Maven repository. If the groupId is not provided, it defaults to org.jenkins-ci.plugins.

  10. sha1 (optional) - the SHA1 hash of the plugin that will be used to verify the downloaded plugin.

  11. url (optional) - the download URL for the plugin.

  12. (Optional) - an additional plugin configuration that provides support for different plugin versions (or plugins themselves), for different Client Master versions (specified in the prerequisites > productVersion member).

  13. settings (optional) - defines the parameters used when downloading the plugins.

  14. httpCredentialsProvider (optional) - defines a set of credentials using the CredentialsProvider interface.

  15. credentials (required) - the array containing the credentials. Currently must contain only one element.

  16. authenticationScope (required) - if the credentials are defined, must currently have the value ANY.

  17. credentialsId (required) - the ID of a credential that must be defined in the Client Master where the plugin catalog will be installed. This credential will be used for plugin downloads from servers which require authentication.

  18. repository (optional) - an object which allows the definition of a Maven 2 repository from which the plugins (defined in the configurations array above) can be downloaded. If a plugin’s url has not been specified within its relevant configurations section, then the Client Master will attempt to download the plugin from this repository‍'s url. Defining a repository is useful when you need to restrict a Client Master’s access to the Internet for security reasons (part of a process known as air gapping) but still want the Client Master to install and update plugins from the repository, which can be achieved by setting up a proxy artifact repository.

  19. layout (optional) - must currently have the value maven2. However, if this value is not defined, it is assumed to be maven2 by default.

Copyable plugin catalog file
{
    "type" : "plugin-catalog",
    "version" : "1",
    "name" : "",
    "displayName" : "",
    "configurations" : [
        {
            "description" : "",
            "prerequisites": {
                "productVersion": ""
            },
            "includePlugins" : {
                "replace-with-appropriate-plugin-id": {
                    "groupId": "",
                    "version" : ""
                },
                "replace-with-other-appropriate-plugin-id": {
                    "sha1" : "",
                    "url" : ""
                }
            }
        }
    ],
    "settings" : {
        "httpCredentialsProvider" : {
            "credentials" : [
                {
                    "authenticationScope": "ANY",
                    "credentialsId": ""
                }
            ]
        },
        "repository": {
            "layout": "maven2",
            "url": ""
        }
    }
}
Adding a Plugin Catalog to Operations Center

Once you have defined your plugin catalog, you can add it to your Operations Center instance using the Jenkins CLI tool. This section assumes you have configured an alias for the Jenkins CLI tool.

  1. In a terminal/command prompt (window), cd to the directory containing the plugin catalog file you defined above.

  2. Enter the command:

    jenkins-cli plugin-catalog --put < plugin-catalog-file.json

    where plugin-catalog-file.json is the name of your plugin catalog file.
    For example, if you used the plugin catalog example above and saved it to a file named
    additional-pipeline-plugins-catalog.json, then you would enter the command:

    jenkins-cli plugin-catalog --put < additional-pipeline-plugins-catalog.json

    If the command was successful, the Jenkins CLI tool returns the following JSON response:

    {
      "id": "additional-pipeline-plugins",
      "message": "Catalog updated correctly",
      "status": "SUCCESS"
    }

    You can now validate the plugin catalog’s suitability for deployment on a configured Client Master.

Note:

  • If the command was not successful, for example if the JSON format of the plugin catalog file was malformed, the Jenkins CLI tool returns an error like:

    {
      "message": "Unexpected character ('}' ... [Source: ... line: 16, column: 14]",
      "status": "FAILURE"
    }
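Since a malformed file is rejected as shown above, it can save a round trip to validate the JSON locally before uploading, for example with jq (the same tool the scripts later in this section assume is installed):

jq . additional-pipeline-plugins-catalog.json > /dev/null && echo "catalog JSON is well-formed"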
Updating a Plugin Catalog on Operations Center

If one or more plugins (defined in a plugin catalog) have updates, you may need to specify these plugin updates in the plugin catalog and then update this plugin catalog on Operations Center.

This process is identical to adding the plugin catalog to Operations Center:

  1. Modify the plugin catalog file defined previously (for example, with the updated plugin versions).

  2. Re-add the plugin catalog to Operations Center.

You should then re-validate and then re-deploy the updated plugin catalog to the Client Masters on which the older version of this plugin catalog had been deployed.

If you are unsure of which Client Masters the older version of the plugin catalog had been deployed to, verify the plugin catalog’s deployment on the relevant Client Masters.

Validating a Plugin Catalog’s Suitability for a Client Master

Once you have added a plugin catalog to Operations Center, you can verify this on the Operations Center interface when validating the plugin catalog’s suitability for deployment to a configured Client Master.

This validation process verifies:

  • That the Client Master version and plugin configuration satisfy exactly one plugin catalog definition

  • That plugin dependencies are resolved

  • That credentials referenced in the plugin catalog are defined in the Client Master

  • The credential count

  • The credential scope

To validate your plugin catalog’s suitability for deployment to a Client Master:

  1. Access the relevant Client Master’s configuration in Operations Center.

  2. Scroll down to the Plugin Catalog property and select its Specify a plugin catalog for this master check box.

  3. Select the plugin catalog from the Catalog dropdown.
    This name is the displayName specified when the plugin catalog was defined.

  4. Click the Check Validity button.
    If your plugin catalog’s definition is suitable for deployment to the Client Master, the message The catalog is compatible with the master is displayed.
    Plugin Catalog validity check

  5. (Optional) Scroll down the page and click the Save or Apply button to deploy the plugin catalog to the Client Master.

Note
To validate your plugin catalog in this manner against all Client Masters that have been connected to your Operations Center instance, repeat this procedure on each Client Master.
Deploying a Plugin Catalog to a Client Master

A plugin catalog which has been added to Operations Center can be deployed to a configured Client Master (after optionally validating its suitability for deployment) using either the Operations Center interface or the Jenkins CLI tool.

Note
Be aware that only one plugin catalog can be deployed to a Client Master at a time.
Deploying a Plugin Catalog Using the Operations Center interface
  1. Access the relevant Client Master’s configuration in Operations Center.

  2. Scroll down to the Plugin Catalog property and select its Specify a plugin catalog for this master check box.

  3. Select the plugin catalog from the Catalog dropdown.
    This name is the displayName specified when the plugin catalog was defined.

  4. Scroll down the page and click the Save or Apply button to deploy the plugin catalog to the Client Master.

  5. (Optional) Verify the plugin catalog’s deployment to the Client Master.

Deploying a Plugin Catalog Using the Jenkins CLI Tool

This section assumes you have configured an alias for the Jenkins CLI tool.

  1. In a terminal/command prompt (window), enter the command:

    jenkins-cli plugin-catalog --push plugin-catalog-name --master client-master

    where plugin-catalog-name is the name of the plugin catalog specified when the plugin catalog was defined and client-master is the name of the Client Master configured on your Operations Center instance.
    For example, if you used the plugin catalog example above and have already added it to your Operations Center instance, then you would enter the command:

    jenkins-cli plugin-catalog --push additional-pipeline-plugins --master client-master

    If the command was successful, the Jenkins CLI tool returns the following JSON response:

    {
      "id": "additional-pipeline-plugins",
      "message": "Catalog applied correctly, it will be propagated in a few seconds",
      "status": "SUCCESS"
    }
    Note
    When using this Jenkins CLI command, the --push option must always be used in conjunction with the --master option. If you only use one of these options, the Jenkins CLI tool returns an error.
  2. (Optional) Verify the plugin catalog’s deployment to the Client Master.

Verifying a Plugin Catalog’s Deployment to a Client Master

To validate that a plugin catalog has been successfully deployed to the Client Master:

  1. Ensure you are logged in to the Client Master as a user with the Administer permission.

  2. From the Client Master home page/Dashboard, click Manage Jenkins on the left.

  3. Click Beekeeper Upgrade Assistant.

  4. Click Plugin Catalog on the left (or the section’s More Info button on the right).
    Plugin Catalog on master
    If you see the message There is a plugin catalog installed, then the plugin catalog was deployed to the Client Master and the remaining details on this page should depict the plugin catalog that had been defined and added to Operations Center, and deployed to this Client Master.

    Note
    If you only just deployed a plugin catalog to the Client Master and you see No Plugin Catalog is installed at the moment, it may take a few minutes before the Client Master registers this deployment. Therefore, wait a few moments and refresh this page, or revisit this procedure at a later point in time.
  5. (Optional) For any Included plugins shown on this page, you can now install or update these plugins on the Client Master itself (if these plugins have not yet been installed/updated).

Removing a Plugin Catalog from a Client Master

A plugin catalog that has been deployed to a Client Master can be removed from it using either the Operations Center interface or the Jenkins CLI tool.

Removing a Plugin Catalog Using the Operations Center interface
  1. Access the relevant Client Master’s configuration in Operations Center.

  2. Scroll down to the Plugin Catalog property and clear its Specify a plugin catalog for this master check box.

  3. Scroll down the page and click the Save or Apply button to confirm the removal of the plugin catalog from the Client Master.

  4. (Optional) Verify the plugin catalog’s removal from the Client Master.

Removing a Plugin Catalog Using the Jenkins CLI Tool

This section assumes you have configured an alias for the Jenkins CLI tool.

  1. In a terminal/command prompt (window), enter the command:

    jenkins-cli plugin-catalog --remove client-master

    where client-master is the name of the Client Master configured on your Operations Center instance.
    For example, if you have already added a plugin catalog to your Operations Center instance and deployed it to a Client Master whose name is client-master, then you would enter the command:

    jenkins-cli plugin-catalog --remove client-master

    If the command was successful, the Jenkins CLI tool returns the following JSON response:

    {
      "id": "client-master",
      "message": "Catalog removed",
      "status": "SUCCESS"
    }
  2. (Optional) Verify the plugin catalog’s removal from the Client Master.

Verifying a Plugin Catalog’s Removal from a Client Master

To validate that a plugin catalog has been successfully removed from a Client Master:

  1. Ensure you are logged in to the Client Master as a user with the Administer permission.

  2. From the Client Master home page/Dashboard, click Manage Jenkins on the left.

  3. Click Beekeeper Upgrade Assistant.

  4. Click Plugin Catalog on the left (or the section’s More Info button on the right).
    If you see the message No Plugin Catalog is installed at the moment, then the plugin catalog was removed from the Client Master.

    Note
    If you only just removed a plugin catalog from the Client Master and you see There is a plugin catalog installed, it may take a few minutes before the Client Master registers this removal. Therefore, wait a few moments and refresh this page, or revisit this procedure at a later point in time.
Removing a Plugin Catalog from Operations Center

A plugin catalog added to your Operations Center instance can be removed from it using the Jenkins CLI tool. This section assumes you have configured an alias for the Jenkins CLI tool.

Note
A plugin catalog can only be removed from your Operations Center instance when no Client Masters are using the plugin catalog.
  • In a terminal/command prompt (window), enter the command:

    jenkins-cli plugin-catalog --delete plugin-catalog-name

    where plugin-catalog-name is the name of the plugin catalog specified when the plugin catalog was defined.
    For example, if you used the plugin catalog example above, then you would enter the command:

    jenkins-cli plugin-catalog --delete additional-pipeline-plugins

    If the command was successful, the Jenkins CLI tool returns the following JSON response:

    {
      "id": "additional-pipeline-plugins",
      "message": "Catalog [additional-pipeline-plugins] successfully deleted",
      "status": "SUCCESS"
    }

Note:

  • If the Jenkins CLI tool returns an error like:

    {
      "message": "Catalog [additional-pipeline-plugins] is being used by the following master/s: [client-master]. It can not be deleted",
      "status": "FAILURE"
    }

    then the plugin catalog is still being used by at least one Client Master (in this example, a Client Master named client-master). If the plugin catalog is being used by multiple Client Masters, then each Client Master is listed in the error response. To rectify this issue, remove the plugin catalog from each Client Master listed in the error response and then try following this procedure again to remove the plugin catalog from your Operations Center instance.

Configuring Masters through CLI

Jenkins allows some operations to be invoked through the CLI, several of which are useful for configuring Client Masters.

When configuring Client Masters, it is often desirable to apply the same configuration to each one. This requires a way to list all connected masters from the command line and then perform operations on each of them.

The list-masters CLI command on Operations Center provides information about all connected masters in JSON format, allowing you to use that information to invoke commands on each Client Master:

{
  "version": "1",
  "data": {
    "masters": [
      {
        "fullName": "my master", (1)
        "url": "http://localhost:9090/", (2)
        "status": "ONLINE" (3)
      }
    ]
  }
}
  1. fullName - the name of the Client Master, including folders.

  2. url - URL for the Client Master.

  3. status - the connection status of the Client Master. It can be 'ONLINE' or 'OFFLINE'.

Examples:

Execute a Groovy script and install a plugin

The following bash script will execute a Groovy script and install the 'beer' plugin on all the online Client Masters:

#!/usr/bin/env bash

JENKINS_CLI=jenkins-cli.jar
JENKINS_CJOC_URL=http://localhost:8080/
JENKINS_AUTH=admin:admin

if [ -z "$JENKINS_CJOC_URL" ]; then
    echo "Need to set environment variable JENKINS_CJOC_URL ({CJOC} root URL)."
    exit 1
fi

if [ -z "$JENKINS_AUTH" ]; then
    echo "Need to set environment variable JENKINS_AUTH (format: 'userId:apiToken')."
    exit 1
fi


if [ -f "$JENKINS_CLI" ]
then
	echo "Using $JENKINS_CLI."
else
	wget -O "$JENKINS_CLI" $JENKINS_CJOC_URL/jnlpJars/jenkins-cli.jar
fi

java -jar $JENKINS_CLI -s $JENKINS_CJOC_URL -auth $JENKINS_AUTH list-masters | jq -r '.data.masters[] | select(.status == "ONLINE") | .url' | while read url; do
	java -jar $JENKINS_CLI -s $url -auth $JENKINS_AUTH groovy = < configuration-script.groovy
	java -jar $JENKINS_CLI -s $url -auth $JENKINS_AUTH install-plugin beer
done
Push a Plugin Catalog to all Client Masters

The following bash script will add a Plugin Catalog to Operations Center and push it to all the online Client Masters:

#!/usr/bin/env bash

JENKINS_CLI=jenkins-cli.jar
JENKINS_CJOC_URL=http://localhost:8080/
JENKINS_AUTH=admin:admin

if [ -z "$JENKINS_CJOC_URL" ]; then
    echo "Need to set environment variable JENKINS_CJOC_URL ({CJOC} root URL)."
    exit 1
fi

if [ -z "$JENKINS_AUTH" ]; then
    echo "Need to set environment variable JENKINS_AUTH (format: 'userId:apiToken')."
    exit 1
fi


if [ -f "$JENKINS_CLI" ]
then
	echo "Using $JENKINS_CLI."
else
	wget -O "$JENKINS_CLI" $JENKINS_CJOC_URL/jnlpJars/jenkins-cli.jar
fi

java -jar $JENKINS_CLI -s $JENKINS_CJOC_URL -auth $JENKINS_AUTH  plugin-catalog --put < "java-web-catalog.json" | jq

java -jar $JENKINS_CLI -s $JENKINS_CJOC_URL -auth $JENKINS_AUTH list-masters | jq -r '.data.masters[] | select(.status == "ONLINE") | .fullName' | while read masterName; do
	java -jar $JENKINS_CLI -s $JENKINS_CJOC_URL -auth $JENKINS_AUTH plugin-catalog --push "java-web" --master "$masterName" | jq
done

Upgrading Managed Masters

To update a Managed Master’s version, the administrator needs to update the version of the Docker image for the Managed Master. Once this version is updated, the Managed Master and its default plugins will be upgraded to the latest versions defined in the image, while any non-default plugins will be left at their current versions.

master configuration overview 2

Once the Docker image definition is updated, the CloudBees Core on modern cloud platforms administrator will need to restart the instance so the Managed Master can begin using its upgraded components.

action stop
action stop confirm
action start

Bulk-upgrading Managed Masters

When a CloudBees Core on modern cloud platforms cluster serves many teams and contains many masters, an administrator can save time and greatly reduce the overhead of upgrading those masters by creating a repeatable task to automate this process. In CloudBees Core on modern cloud platforms, this can be achieved by defining a cluster operation in Operations Center.

To create this task, the administrator first needs to log into Operations Center, then create a New Item of the type Cluster Operations. The administrator then needs to select the Managed Masters cluster operation, and is presented with a set of pre-configured upgrade patterns to choose from.

The administrator will then need to specify which masters to target by using the filter Uses Docker Image and picking a Docker image used by their cluster’s masters as the upgrade target for this operation. Any masters in the cluster using the selected image will be affected by this cluster operation.

In the Steps section, the administrator should select Update Docker Image and pick the new Docker image to bulk-upgrade the targeted masters to. Next, the administrator should add a Reprovision step to restart the targeted masters.

Once these settings are configured, the administrator can run the cluster operation to perform a bulk upgrade of their cluster’s masters, or schedule the operation to run at a later time.

Quiet start

There may be times during an upgrade or other maintenance when it is best to have Jenkins start, but launch no projects. For example, if an upgrade is being performed in multiple steps, the intermediate steps may not be fully configured to run projects successfully. The "quiet start plugin" can immediately place the Jenkins server in "quieting down" state on startup.

Enable "quiet start" by checking the box in Manage Jenkins  Quiet Restart. When the server is restarted, it will be in the "quieting down" state. An administrator can cancel that state using the regular UI.

Uncheck the box in Manage Jenkins → Quiet Restart when maintenance is complete. Projects will start as usual on server restart.

Reviewing plugin usage

Large Jenkins installations often have many plugins installed. CloudBees Core can help administrators identify plugins which may be unused or no longer useful. The CloudBees Plugin Usage Plugin helps you keep track of which plugins you are actively using and where they are used.

Plugin usage report

A table of installed plugins is available from Manage Jenkins → Plugin Usage, including a usage count and a list of uses of that plugin. A strikethrough font indicates disabled plugins. A progress bar runs while Jenkins scans jobs and global settings. Once the scan is complete, the Usage Count column becomes a sortable column. Click the column heading to sort the column.

usage count
Figure 1. Usage Count

The third column shows detected uses of the plugin, hyperlinked to the matching configuration screen. In the list of hyperlinks, "Jenkins" refers to global Jenkins configuration. Most other labels are names of configurable items such as jobs (using » to show folder structure where relevant); configuration associated with Jenkins users (such as credentials) will also be shown. Code packaged as a plugin but really used only as a library shared between several "real" plugins (Async Http Client Plugin, for example) will be shown as used by those plugins. »… is appended to items which have nested elements also using the plugin; for example, a native Maven project may have several modules all of which refer to the plugin.

Only a Jenkins administrator can perform a complete plugin usage analysis. Configuration read permission is required to check plugin usage.

Limitations

Beware that some plugins will not be listed in this table, because they are not directly mentioned in any stored configuration. They may affect how Jenkins runs in various ways without configuration; the Jenkins Translation Assistance Plugin is an example.

Conversely, a plugin might have some stored configuration which is of no interest or never used at all. For example, Jenkins Email Extension Plugin displays various controls in the global Configure System screen; when you click Save, Email Extension configuration is saved in hudson.plugins.emailext.ExtendedEmailPublisher.xml in your $JENKINS_HOME, even if you have made no customizations. This plugin will thus appear to be "in use" by Jenkins. Only if you have enabled the Editable Email Notification post-build action for a job will it have a usage count greater than 1, however.

Use the plugin usage table as a starting point for investigating plugins that might be unused. It is not relevant for all plugins. The table works best for checking the usage of plugins which offer concrete additions to configuration, such as build steps or publishers in jobs.

Plugins used in configuration generated by a builder (or publisher) template will not be listed (in either the template or jobs using it), since this configuration is created on the fly during a build rather than being stored permanently in the job. Configuration generated by a job (or folder) template is considered.

Alerting

CloudBees Core on modern cloud platforms automatically monitors its infrastructure elements and all of the Jenkins masters that it manages. These alerts can be helpful in troubleshooting.

The CloudBees Core infrastructure element monitoring includes Operations Center and Managed Masters. For the various infrastructure nodes, it monitors the following metrics.

  • Available disk space

  • CPU utilization for the most recent 5 minutes

  • RAM utilization for the most recent 5 minutes

If any of the data points for these metrics reaches 90% or more (a threshold that is currently fixed and cannot be changed), CloudBees Core on modern cloud platforms will emit an alert, for example:

Health checks failing: [worker-14: Disk util at 95%, worker-9: Worker down]

The following table shows the possible error messages and corresponding descriptions.

Table 1. Possible Failure Messages

  • Disk util at <number>% - Disk utilization reaches 90% or higher.

  • RAM util at <number>% - RAM utilization reaches 90% or higher for five or more minutes.

  • CPU util at <number>% - Total CPU utilization reaches 90% or higher for five or more minutes. The percent utilization is normalized to 100% across all CPUs on the node.

Additional monitoring is available with the Elasticsearch Reporter plugin.

Using CloudBees Backup

CloudBees offers the CloudBees Backup Plugin which creates a new job type specifically for backup.

The "Backup" job type provides a build step for backup. This plugin offers the option to configure backups of any of the following:

  • Build records for jobs

  • Job configurations

  • System configurations - this option also allows you to specify whether you’d like to exclude the master.key file or any other files in your $JENKINS_HOME.

The backup destination is configurable, including local storage, Amazon S3, Azure Blob Storage, remote SFTP or WebDAV. The retention policy and format of the backup (tar.gz or zip) can also be selected as part of job configuration.

Like other jobs, backup jobs can be triggered remotely, at periodic intervals, by other jobs, or by changes in a software configuration management (SCM) repository. Administrators use the familiar interface to schedule and monitor backups.

Creating a backup job

To create a backup, click "New Item" from the left and select "Backup and Restore". This will take you to the page where you can configure the backup project.

Configuration of a backup job is very similar to that of a freestyle project. In particular, you can use arbitrary build triggers to schedule backups. Once the trigger is configured, click "Add build step" and add the "Take backup" build step.

You can add multiple backup build steps to take different subsets of backups at the same time, or you can create multiple backup jobs to schedule different kinds of backups at different intervals.

backup config
Figure 2. Backup job configuration options

For example, you might have a daily job that backs up only the system configuration and job configurations, as they are small but most important, and another job that takes a full backup of the system once a week.

Configuring Backup

There are three main aspects to the configuration of the Take backup builder.

First you should decide "what to back up": the data included in or excluded from the backup. This plugin divides Jenkins storage into three classifications:

System configuration

This includes all the settings scoped to the entire CloudBees Jenkins Enterprise instance, such as everything in the system configuration page, plugin binaries, plugin settings, user information, and fingerprints: anything that lives outside the individual job directories.

Note
Security

This category includes a master key which is used by CloudBees Jenkins Enterprise to encrypt all other private information, such as passwords in job configurations. We recommend selecting the configuration option to omit this master key from the backup; since it is small and never changes once created, store it elsewhere and manually re-add it when restoring from a backup.

Job configuration

This includes all the settings scoped to individual jobs. Mostly this maps to the job configuration, and is generally the most precious data.

Build records

This includes information scoped to individual builds, such as its console output, artifacts, test reports, code analysis reports, and so on. Build records can be quite large and may be dispensable.

When selecting any of these, you may optionally specify a list of file patterns to exclude from the backup. The Backup plugin ships with a number of standard exclusions for well-known files created by Jenkins and popular plugins that would be pointless or expensive to back up. Nonetheless, you may need to add some custom entries appropriate for your own installation. You can also use the exclude list to suppress certain jobs (or whole folders) from backup, and so on.
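For instance, an exclude list that skips a folder of scratch jobs, editor backup files, and stray log files might contain the following entries (all hypothetical; adjust them to your installation):

jobs/sandbox/**, **/*~, **/*.log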

The next important aspect is "where to back up", which controls the destination of the backup.

Azure Blob Storage

This tells CloudBees Jenkins Enterprise to back up data as a file to a configured Azure Blob Storage container.

Amazon S3

This tells CloudBees Jenkins Enterprise to back up data as a file to a configured Amazon S3 bucket.

Local directory

This tells CloudBees Jenkins Enterprise to back up data as a file in the file system of the master. For disaster recovery purposes, it is recommended that you back up files to a network-mounted disk.

Remote SFTP server

This tells CloudBees Jenkins Enterprise to connect to a remote SFTP server and store the backup there. This is a more convenient way to back up to a remote system on Unix, as it requires no shared file system.

Remote WebDAV server

This tells CloudBees Jenkins Enterprise to connect to a WebDAV server and store a backup in a configured directory.

Note
Storage requirements

For S3 backups, a temporary file containing the whole backup must be stored locally during the upload; this is a limitation of the S3 service.

The last important aspect is the retention policy, which controls how many backups are kept and for how long. Use it to avoid runaway disk consumption caused by old backups. Backup retention is based on file names: backup file names include the job name and the build number of the backup job, so you can safely use a single destination directory for multiple backup jobs, and their retention policies are enforced without interfering with each other.

Exponential decay

This tells Jenkins to keep many newer backups and fewer old ones. It’s called "exponential decay" because the number of backups kept is proportional to e^-t: for example, you might end up with most backups from the past few days, a few from recent weeks, and only occasional ones from months ago. In this way, you always cover the entire lifespan of your Jenkins installation with some backups, with more coverage of recent data.

Keep all backups

In this mode, Jenkins will not delete any backups that it has created. This is useful if you are managing backup retention externally, such as via logrotate or a scheduled cleanup script (see the sketch after this list).

Keep N backups

In this mode, Jenkins will keep the latest N backups and delete any older ones.
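If you keep all backups and manage retention externally, a scheduled cleanup job is one simple option. The following crontab entry is a minimal sketch, assuming backups are written to a hypothetical /var/backups/jenkins directory in the default tar.gz format:

# every day at 04:00, delete backup archives older than 30 days
0 4 * * * find /var/backups/jenkins -name 'backup-*.tar.gz' -mtime +30 -delete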

There are other aspects that can be configured:

Format

You can choose whether to store the backup as a ZIP or TAR.GZ file.

Skip recently modified files

For files that are modified often, you might want to skip backing them up unless a certain amount of time has passed since they were last modified. That amount of time can be 10 seconds, 30 seconds, 1 minute or 5 minutes.

Wait and block

Enabling this makes the plugin wait for all running jobs to complete before starting the backup, and block the execution of any new tasks until the backup is complete.

CLI command

The backup-master CLI command, which can be invoked on a master, invokes the Backup plugin to perform a backup of that master. The backup can be configured through a JSON file provided on standard input:

jenkins-cli backup-master < backup-configuration.json
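Spelled out in full, the invocation looks like the following sketch; the server URL, user name, and API token are placeholders for your own values:

java -jar jenkins-cli.jar -s https://master.example.com/ -auth admin:APITOKEN backup-master < backup-configuration.json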

The following JSON is an example configuration file; its annotations are explained underneath. Use it as a reference when creating your own configuration file.

To create this file, copy and paste the copyable code snippet (template) below into a new file using your favorite text editor and save it with an appropriate file name (for example, backup-configuration.json). Note that the template lists every possible backup destination; because one and only one destination may be configured, keep the entry you need and delete the others.

Annotated code snippet
{
    "version" : "1", (1)
    "data" : { (2)
        "type": "backup-definition", (3)
        "backupScope": { (4)
            "system": { (5)
                "excludes": "", (6)
                "omitMasterKey": false (7)
            },
            "jobs": { (8)
                "excludes": "" (9)
            },
            "builds": { (10)
                "excludes": "" (11)
            }
        },
        "backupDestination": { (12)
            "amazonS3": { (13)
                "credentialsId": "my-credentials", (14)
                "region": "us-east-1",
                "bucketName": "name",
                "bucketFolder": "folder",
                "serverSideEncryption": {
                    "enabled":  true (15)
                }
            },
            "azureBlobStorage": { (16)
                "container": "name",
                "folder": "path/to/folder/inside/the/container",
                "connection": "account", (17)
                "accountConfiguration": { (18)
                    "account": "my-account",
                    "credentialsId": "my-credentials", (14)
                    "blobEndpointUrl": "http://blob.core.windows.net/"
                }
            },
            "sftp": { (19)
                "host": "host-name",
                "port": 22, // Port 22 By default (20)
                "credentialsId": "my-credentials", (14)
                "directory": "/my/personal/directory"
            },
            "webDav": { (21)
                "url": "https://www.google.es",
                "credentialsId": "my-credentials" (14)
            },
            "localDirectory" : "/path/to/local/directory" (22)
        },
        "retention": "5", (23)
        "skipRecent": "off", (24)
        "format": "tar.gz" (25)
    }
}
  1. version (required) - must currently have the value 1, which identifies the current metadata format for the configuration.

  2. data (required) - the actual content of the configuration.

  3. type (required) - must have the fixed value backup-definition, which identifies this JSON file as a backup configuration file.

  4. backupScope (required) - specifies "what to back up". At least one of the possible options must be configured.

  5. system (optional) - configures a backup of system-scoped settings.

  6. excludes (optional) - comma-separated list of Ant-style patterns to exclude from the backup (relative to $JENKINS_HOME). For example: .git/, jobs/scratch-folder/, */~. "" by default.

  7. omitMasterKey (optional) - boolean value (false by default); when true, the master key file is omitted from the backup.

  8. jobs (optional) - configures a backup of job-scoped settings.

  9. excludes (optional) - comma-separated list of Ant-style patterns to exclude from the backup (relative to $JENKINS_HOME). For example: .git/, jobs/scratch-folder/, */~. "" by default.

  10. builds (optional) - configures a backup of build-scoped information.

  11. excludes (optional) - comma-separated list of Ant-style patterns to exclude from the backup (relative to $JENKINS_HOME). For example: .git/, jobs/scratch-folder/, */~. "" by default.

  12. backupDestination (required) - specifies "where to back up". One and only one of the possible options must be configured.

  13. amazonS3 (optional) - configures the backup to be stored in Amazon S3.

  14. credentialsId (optional) - the ID of the Jenkins credentials used to authenticate against the corresponding destination. none by default.

  15. enabled (optional) - specifies whether Amazon S3 server-side encryption should be used. false by default.

  16. azureBlobStorage (optional) - configures the backup to be stored in Azure Blob Storage.

  17. connection (required) - where the connection details are obtained from. The allowed values are instance or account. With instance, the details are obtained from the Azure instance metadata; with account, they come from accountConfiguration.

  18. accountConfiguration (optional) - the configuration used when connection is set to account.

  19. sftp (optional) - configures the backup to be stored on a remote SFTP server.

  20. port (optional) - connection port of the SFTP server. 22 by default.

  21. webDav (optional) - configures the backup to be stored on a remote WebDAV server.

  22. localDirectory (optional) - configures the backup to be stored in a local directory.

  23. retention (required) - configures the retention policy. The allowed values are exponential, all, or a number.

  24. skipRecent (optional) - configures how recently modified files are treated. The allowed values are off (the default), 10s, 30s, 1m and 5m.

  25. format (optional) - format of the file generated with the backup contents. The allowed values are tar.gz (the default) and zip.

Copyable code snippet
{
    "version" : "1",
    "data" : {
        "type": "backup-definition",
        "backupScope": {
            "system": {
                "excludes": "",
                "omitMasterKey": false
            },
            "jobs": {
                "excludes": ""
            },
            "builds": { //
                "excludes": ""
            }
        },
        "backupDestination": {
            "amazonS3": {
                "credentialsId": "my-credentials",
                "region": "us-east-1",
                "bucketName": "name",
                "bucketFolder": "folder",
                "serverSideEncryption": {
                    "enabled":  true
                }
            },
            "azureBlobStorage": {
                "container": "name",
                "folder": "path/to/folder/inside/the/container",
                "connection": "account",
                "accountConfiguration": {
                    "account": "my-account",
                    "credentialsId": "my-credentials",
                    "blobEndpointUrl": "http://blob.core.windows.net/"
                }
            },
            "sftp": {
                "host": "host-name",
                "port": 22, // Port 22 By default
                "credentialsId": "my-credentials",
                "directory": "/my/personal/directory"
            },
            "webDav": {
                "url": "https://www.google.es",
                "credentialsId": "my-credentials"
            },
            "localDirectory" : "/path/to/local/directory"
        },
        "retention": "5",
        "skipRecent": "off",
        "format": "tar.gz"
    }
}
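As a concrete starting point, the following minimal configuration backs up everything to a single local directory with exponential retention. This is a sketch that relies on the optional fields being omissible; the directory path is a placeholder, and exactly one destination is configured as required:

{
    "version" : "1",
    "data" : {
        "type": "backup-definition",
        "backupScope": {
            "system": {},
            "jobs": {},
            "builds": {}
        },
        "backupDestination": {
            "localDirectory" : "/mnt/backups/jenkins"
        },
        "retention": "exponential",
        "format": "tar.gz"
    }
}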
Backup identifier

By default, the generated backup file is named backup-${jenkinsInstanceId}-${timestamp}. If you want a different name to be used instead of the instance ID, for example a name identifying the master, you can provide it as an argument to the CLI command. With the command below, the generated files would be named backup-my-own-master-${timestamp}:

jenkins-cli backup-master "my-own-master" < backup-configuration.json

"Backup to S3" with Amazon S3 compatible storage systems

Note
Since "CloudBees Back-up Plugin" version 3.33

The "Backup to S3" destination can be used to store backups in storage systems that are compatible with Amazon S3 (e.g. EMC Atmos storage, OpenStack Swift…​).

To do this, you have to use the standard mechanism of the AWS SDK to modify the list of Amazon S3 endpoints known by the SDK defining a file com/amazonaws/partitions/override/endpoints.json in the classpath of Jenkins. This modification can either be to add a new Amazon S3 endpoint or to replace the existing Amazon S3 endpoints.

Notes on endpoints.json:

  • CloudBees recommends that customers ask their storage system vendor for their recommendations on customizing the standard AWS SDK endpoints.json configuration file. Note that this endpoints.json file is used by all the AWS SDKs, including aws-sdk-net, aws-sdk-js, aws-sdk-java and aws-sdk-ruby.

  • The desired S3 compatible endpoint MUST be declared in the section partitions.regions and in the section partitions.services.s3.endpoints.

  • The section partitions.services.s3.endpoints.my-s3-compatible-endpoint.signatureVersions (where my-s3-compatible-endpoint is the new section) must be filled according to the specifications of the vendor of the S3 compatible storage system.

Customization of the Jenkins startup command line to add the endpoints.json file to the classpath

A solution for adding this endpoints.json file to the classpath of Jenkins is to use the Java command-line parameter -Xbootclasspath/a:/path/to/boot/classpath/folder/ and to place com/amazonaws/partitions/override/endpoints.json under the folder /path/to/boot/classpath/folder/.

Sample:

Jenkins startup command line
java -Xbootclasspath/a:/opt/jenkins/boot-classpath/ -jar /opt/jenkins/jenkins.war
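The directory layout under the boot classpath folder must mirror the resource path. A minimal sketch, assuming the /opt/jenkins/boot-classpath/ folder from the sample above and an endpoints.json file in the current directory:

# create the directory structure matching the resource path the AWS SDK expects
mkdir -p /opt/jenkins/boot-classpath/com/amazonaws/partitions/override
# place the customized file where the boot classpath will expose it
cp endpoints.json /opt/jenkins/boot-classpath/com/amazonaws/partitions/override/endpoints.json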

Customization of endpoints.json

To add the Amazon S3 compatible storage endpoint while keeping all the existing AWS endpoints, we recommend editing the endpoints.json file and adding a partition alongside the out-of-the-box AWS partitions.

To remove all the existing AWS endpoints and keep only the Amazon S3 compatible storage endpoint, we recommend editing the endpoints.json file, deleting all the existing AWS partitions, and adding a partition with the Amazon S3 compatible storage endpoint.

Sample AWS SDK endpoints.json adding a "My Company" partition that contains a "us-mycompany-east-1" region with the s3 endpoint "s3-us-mycompany-east-1.amazonaws.com":

com/amazonaws/partitions/override/endpoints.json
{
  "partitions": [
    {"_comment": "OUT OF THE BOX AWS PARTITIONS ..."},

    {
      "defaults": {
        "hostname": "{service}.{region}.{dnsSuffix}",
        "protocols": [
          "https"
        ],
        "signatureVersions": [
          "v4"
        ]
      },
      "dnsSuffix": "cloud.mycompany.com",
      "partition": "mycompany",
      "partitionName": "My Company",
      "regionRegex": "^(us|eu|ap|sa|ca)\\-\\w+\\-\\d+$",
      "regions": {
        "us-mycompany-east-1": {
          "description": "My Company US East 1"
        }
      },
      "services": {
        "s3": {
          "defaults": {
            "protocols": [
              "http",
              "https"
            ],
            "signatureVersions": [
                "s3",
                "s3v4"
             ]
          },
          "endpoints": {
            "us-mycompany-east-1": {
              "hostname": "s3-us-mycompany-east-1.amazonaws.com"
            }
          }
        }
      }
    }
   ],
   "version": 3
}
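Once the file is on the classpath and Jenkins has been restarted, the new region can be referenced from the backup configuration like any other S3 region. A hypothetical fragment, reusing the region declared in the sample above:

"amazonS3": {
    "credentialsId": "my-credentials",
    "region": "us-mycompany-east-1",
    "bucketName": "name",
    "bucketFolder": "folder"
}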

Creating a restore job

To restore a backup, click "New Item" from the left and select "Backup and Restore". This will take you to the page where you can configure the restore project.

Configuration of a restore job is very similar to that of a freestyle project. Click "Add build step" and add the "Restore from backup" builder. Select the source of the backup to be restored and provide credentials for it.

By default, restoring deletes anything currently in jenkins_home. If jenkins_home contains data that is not in the backup and needs to be kept, click the "Advanced" button under "Restore options" and select "Preserve Jenkins home contents". This copies the current jenkins_home into a directory named "archive-<timestamp>".