Installing CloudBees Core on OpenShift

This document explains the cluster requirements, points you to the Red Hat OpenShift documentation you will need to create a cluster, and describes how to install CloudBees Core in your new OpenShift cluster.

Important
The OpenShift cluster requirements must be satisfied before CloudBees Core can be installed.

OpenShift Cluster Requirements

The CloudBees Core installer requires:

  • On your local computer or a bastion host:

    • OpenShift client 3.7 (or newer) installed and configured (oc)

  • A cluster running OpenShift 3.7 (or newer)

    • Must have network access to container images (public Docker Hub or a private Docker Registry)

  • Load balancer configured and pointing to the Router service

    • A DNS record that points to the load balancer

    • SSL certificates (needed when you deploy CloudBees Core)

  • Project assigned for CloudBees Core

  • A Kubernetes cluster Default Storage Class defined and ready to use (a quick check is shown below)
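
    To confirm that a default storage class is defined, you can run a quick check; the class marked "(default)" is the one CloudBees Core will use:

    $ oc get storageclass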

Important
OpenShift beta releases are not supported. Use production releases.

Creating your OpenShift cluster

Refer to the Red Hat OpenShift documentation for complete instructions on how to deploy an OpenShift cluster on your own infrastructure or how to create an OpenShift cluster with the OpenShift Online service.

Run Installer

CloudBees Core runs on an OpenShift cluster. OpenShift cluster installations are configured with YAML files. The CloudBees Core installer provides a cloudbees-core.yml file that is modified for each installation.

  • Download installer

  • Unpack installer

    $ export INSTALLER=cloudbees-core_2.121.3.1_openshift.tgz
    $ sha256sum -c $INSTALLER.sha256
    $ tar xzvf $INSTALLER
  • Prepare shell variables for your installation.

    Replace example.com with your domain name and openshifted with your hostname.

    $ ZONE_HOSTNAME=openshifted
    $ ZONE_NAME=${ZONE_HOSTNAME}.example.com
    $ PROJECT=openshifted-project
  • Edit the cloudbees-core.yml file for your installation

    $ cd cloudbees-core_2.121.3.1_openshift
    $ sed -e s,cloudbees-core.example.com,cloudbees-core.$ZONE_NAME,g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
    $ sed -e s,myproject,$PROJECT,g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
  • Run the installer

    $ oc apply -f cloudbees-core.yml
    configmap "cjoc-config" configured
    configmap "cjoc-configure-jenkins-groovy" configured
    statefulset "cjoc" configured
    service "cjoc" configured
    serviceaccount "cjoc" configured
    role "master-management" configured
    rolebinding "cjoc" created
    serviceaccount "jenkins" configured
    role "pods-all" configured
    rolebinding "jenkins" created
    route "cjoc" configured
  • Read the admin password

    $ oc exec cjoc-0 -- cat /var/jenkins_home/secrets/initialAdminPassword
    h95pSND6rMJzz7r2GxxCvgGQ3t

Open Operations Center

CloudBees Core is now installed, configured, and ready to run. Open the CloudBees Core URL and log in with the initial admin password. Install the CloudBees Core license and the recommended plugins.
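
A quick way to find that URL is to read the hostname from the cjoc route created by the installer (assuming the default route name from cloudbees-core.yml); with the example hostnames used above, this prints cloudbees-core.openshifted.example.com:

    $ oc get route cjoc -o jsonpath='{.spec.host}'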

See Administering CloudBees Core for further information.

Adding Client Masters

Occasionally, administrators need to connect existing masters to a CloudBees Core cluster. Existing masters connected to a CloudBees Core cluster are called "Client Masters" to distinguish them from Managed Masters. For example, a master running on Windows must be connected as a Client Master.

Configure Ports

Note: The existing master and Operations Center must both accept JNLP requests.

  1. Confirm the Client Master accepts JNLP requests from Operations Center

    $ export CLIENT_MASTER=jenkins.example.com # Your Jenkins hostname
    $ oc exec -ti cjoc-0 curl $CLIENT_MASTER:50000
    Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, MultiMaster, OperationsCenter2, Ping
    Jenkins-Version: 2.107.1.2
    Jenkins-Session: d70ee5f1
    Client: 10.10.11.12
    Server: 93.184.216.34
  2. Confirm the Client Master accepts HTTPS requests from Operations Center

    $ export CLIENT_MASTER=jenkins.example.com # Your Jenkins hostname
    $ oc exec -ti cjoc-0 curl https://$CLIENT_MASTER/
    <html><head>...
  3. Confirm Operations Center is ready to answer internal JNLP requests

    $ oc exec -ti cjoc-0 curl localhost:50000
    Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, MultiMaster, OperationsCenter2, Ping
    Jenkins-Version: 2.107.1.2
    Jenkins-Session: 21b9da08
    Client: 0:0:0:0:0:0:0:1
    Server: 0:0:0:0:0:0:0:1
  4. Open the JNLP port (50000) in the OpenShift cluster

    $ oc create -f cjoc-external-masters.yml
    service "cjoc-jnlp" created

Upgrading CloudBees Core

To upgrade to a newer version of CloudBees Core, follow the same process as the initial installation:

  • Download installer

  • Unpack installer

  • Edit the cloudbees-core.yml file to match the changes made during the initial installation

  • Run the installer

    $ oc apply -f cloudbees-core.yml
  • Wait until CJOC is rolled out

    $ oc rollout status sts cjoc

Once the new version of Operations Center is rolled out, you can log in to Operations Center again and upgrade the managed masters. See Upgrading Managed Masters for further information.
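
To confirm which Operations Center image is now running, a quick check against the StatefulSet (the jsonpath expression assumes the single container in the cjoc pod template):

    $ oc get sts cjoc -o jsonpath='{.spec.template.spec.containers[0].image}'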

Removing CloudBees Core

If you need to remove CloudBees Core from OpenShift, use the following steps:

  • Stop Operations Center

    $ oc scale statefulsets/cjoc --replicas=0
    statefulset "cjoc" scaled
  • Delete CloudBees Core

    $ oc delete -f cloudbees-core.yml
    configmap "cjoc-config" deleted
    configmap "cjoc-configure-jenkins-groovy" deleted
    statefulset "cjoc" deleted
    service "cjoc" deleted
    serviceaccount "cjoc" deleted
    role "master-management" deleted
    serviceaccount "jenkins" deleted
    role "pods-all" deleted
    route "cjoc" deleted
    configmap "jenkins-agent" deleted
  • Delete remaining pods and data

    $ oc delete pod,statefulset,pvc,route,service -l com.cloudbees.cje.tenant
    persistentvolumeclaim "jenkins-home-cjoc-0" deleted
  • Delete services, pods, persistent volume claims, etc.

    $ oc delete svc --all
    service "cjoc-jnlp" deleted
    $ oc delete statefulset --all
    No resources found
    $ oc delete pod --all
    No resources found
    $ oc delete pvc --all
    No resources found

Additional topics

  • Using Kaniko with CloudBees Core

  • Using self-signed certificates in CloudBees Core

Using Kaniko with CloudBees Core

Introducing Kaniko

Kaniko is a utility that creates container images from a Dockerfile. The image is created inside a container or Kubernetes cluster, which allows users to develop Docker images without using Docker or requiring a privileged container.

Since Kaniko doesn’t depend on the Docker daemon and executes each command in the Dockerfile entirely in the userspace, it enables building container images in environments that can’t run the Docker daemon, such as a standard Kubernetes cluster.

The remainder of this chapter provides a brief overview of Kaniko and illustrates using it in CloudBees Core with a Scripted Pipeline.

How does Kaniko work?

Kaniko looks for the Dockerfile in the Kaniko context. The Kaniko context can be a GCS storage bucket, an S3 storage bucket, or a local directory. When the context is a GCS or S3 storage bucket, it must be a compressed tar file, which Kaniko expands before reading the Dockerfile. Otherwise, Kaniko reads the Dockerfile directly.

Kaniko then extracts the filesystem of the base image using the FROM statement in the Dockerfile. It then executes each command in the Dockerfile. After each command completes, Kaniko captures filesystem differences. Next, it applies these differences, if there are any, to the base image and updates image metadata. Lastly, Kaniko publishes the newly created image to the desired Docker registry.

Security

Kaniko runs in an unprivileged container. It still needs to run as root inside that container to unpack the Docker base image or to execute RUN Dockerfile commands that require root privileges.

Primarily, Kaniko offers a way to build Docker images without requiring a container running with the privileged flag and without mounting the Docker socket directly.

Note
Additional security information can be found under the Security section of the Kaniko documentation. Also, this blog article on unprivileged container builds provides a deep dive on why Docker build needs root access.

Kaniko parameters

Kaniko has two key parameters: the Kaniko context and the image destination. The Kaniko context is the same as the Docker build context: the path where Kaniko expects to find the Dockerfile and any supporting files used in the creation of the image. The destination parameter is the Docker registry where Kaniko publishes the image. Currently, Kaniko supports hub.docker.com, GCR, and ECR as Docker registries.

In addition to these parameters, Kaniko also needs a secret containing the authorization details required to push the newly created image to the Docker registry.
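
As a sketch of how these parameters fit together, here are two hypothetical executor invocations: one using a local directory context and one using a compressed tar context stored in an S3 bucket (the bucket name is a placeholder):

    # Local directory context; the Dockerfile sits in the current directory
    /kaniko/executor --context `pwd` --destination <docker-username>/hello-kaniko:latest

    # Compressed tar context stored in an S3 bucket
    /kaniko/executor --context s3://my-kaniko-context/context.tar.gz --destination <docker-username>/hello-kaniko:latest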

Kaniko debug image

The Kaniko executor image is built from scratch and doesn’t contain a shell. The Kaniko project also provides a debug image, gcr.io/kaniko-project/executor:debug, which consists of the Kaniko executor image plus a busybox shell.

Note
For more details on using the debug image, see the Debug Image section of the Kaniko documentation.
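
For example, one way to explore the debug image is to start it locally with the busybox shell as the entrypoint (a sketch; requires Docker on your workstation):

    $ docker run --rm -it --entrypoint /busybox/sh gcr.io/kaniko-project/executor:debug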

Pipeline example

This example illustrates using Kaniko to build a Docker image from a Git repository and pushing the resulting image to a private Docker registry.

Requirements

To run this example, you need the following:

  • A Kubernetes cluster with an installation of CloudBees Core

  • A Docker account or another private Docker registry account

  • Your Docker registry credentials

  • Ability to run kubectl against your cluster

  • CloudBees Core account with permission to create the new pipeline

Steps

These are the high-level steps for this example:

  1. Create a new Kubernetes Secret.

  2. Create the Pipeline.

  3. Run the Pipeline.

Create a new Kubernetes secret

The first step is to provide credentials that Kaniko uses to publish the new image to the Docker registry. This example uses kubectl and a docker.com account.

Tip
If you are using a private Docker registry, you can use it instead of docker.com. Just create the Kubernetes secret with the proper credentials for the private registry.

Kubernetes has a create secret command to store the credentials for private Docker registries.

Use the create secret docker-registry kubectl command to create this secret:

Kubernetes create secret command
 $ kubectl create secret docker-registry docker-credentials \ (1)
    --docker-username=<username>  \
    --docker-password=<password> \
    --docker-email=<email-address>
  1. The name of the new Kubernetes secret.
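
If you are using a private registry rather than docker.com, a sketch of the same command with the additional --docker-server flag (the registry host is a placeholder):

 $ kubectl create secret docker-registry docker-credentials \
    --docker-server=<registry-host> \
    --docker-username=<username> \
    --docker-password=<password> \
    --docker-email=<email-address>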

Create the Pipeline

Create a new Pipeline job in CloudBees Core. In the Pipeline field, paste the following Scripted Pipeline:

Sample Scripted Pipeline
def label = "kaniko-${UUID.randomUUID().toString()}"

podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: jenkins-docker-cfg
        mountPath: /kaniko/.docker
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: docker-credentials (1)
          items:
            - key: .dockerconfigjson
              path: config.json
""") {
  node(label) {
    stage('Build with Kaniko') {
      git 'https://github.com/cb-jeffduska/simple-docker-example.git'
      container(name: 'kaniko', shell: '/busybox/sh') {
        withEnv(['PATH+EXTRA=/busybox']) {
          sh '''#!/busybox/sh
          /kaniko/executor --context `pwd` --destination <docker-username>/hello-kaniko:latest (2)
          '''
        }
      }
    }
  }
}
  1. This is where the docker-credentials secret, created in the previous step, is mounted into the Kaniko Pod under /kaniko/.docker/config.json.

  2. Replace <docker-username> with your Docker username; the image is pushed to your registry as <docker-username>/hello-kaniko:latest.

Save the new Pipeline job.

Run the new Pipeline

The sample Pipeline is complete. Run the Pipeline to build the Docker image. When the pipeline is successful, a new Docker image should exist in your Docker registry. The new Docker image can be accessed via standard Docker commands such as docker pull and docker run.
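
For example, a quick smoke test of the pushed image might look like the following, assuming the destination used in the Pipeline above and that the image defines a default command:

    $ docker pull <docker-username>/hello-kaniko:latest
    $ docker run --rm <docker-username>/hello-kaniko:latest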

Limitations

Kaniko does not use Docker to build the image, so there is no guarantee that it will produce the same image Docker would. In some cases, the number of layers could also differ.

Important
Kaniko supports most Dockerfile commands, including multi-stage builds, but it does not support all of them. See the list of Kaniko Issues to determine whether there is a known issue with a specific Dockerfile command. Some rare edge cases are discussed in the Limitations section of the Kaniko documentation.

Alternatives

There are many tools similar to Kaniko. These tools build container images using a variety of different approaches.

Tip
There is a summary of these tools and others, with links, in the comparison with other tools section of the Kaniko documentation.

References

This chapter is only a brief introduction to using Kaniko. For more detail, along with helpful articles and tutorials, start with the Kaniko documentation.

Using self-signed certificates in CloudBees Core

This optional component of CloudBees Core allows you to use self-signed certificates or a custom root CA (Certificate Authority). It works by injecting a given set of files (certificate bundles) into all containers of all scheduled pods.

Prerequisites

OpenShift 3.10 or later, with admission controller MutatingAdmissionWebhook enabled.

To check whether it is enabled for your cluster, run the following command:

oc api-versions | grep admissionregistration.k8s.io/v1beta1

The result should be:

admissionregistration.k8s.io/v1beta1

In addition, the MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers should be added and listed in the correct order in the admission-control flag of the apiserver.

Installation

This procedure requires a context with cluster-admin privilege in order to create the MutatingWebhookConfiguration.

In the CloudBees Core archive, you will find a directory named sidecar-injector. The following instructions assume this is the working directory.

Create a certificate bundle

In the following instructions, we assume you are working in the project where CloudBees Core is installed and that the certificate you want to install is named mycertificate.pem.

For a self-signed certificate, add the certificate itself to the bundle. If the certificate has been issued from a custom root CA, add the root CA certificate instead.

# Copy reference files locally
oc cp cjoc-0:/etc/ssl/certs/ca-certificates.crt .
oc cp cjoc-0:/etc/ssl/certs/java/cacerts .
# Add root CA to system certificate bundle
cat mycertificate.pem >> ca-certificates.crt
# Add root CA to java cacerts
keytool -import -noprompt -keystore cacerts -file mycertificate.pem -storepass changeit -alias service-mycertificate;
# Create a configmap with the two files above
oc create configmap --from-file=ca-certificates.crt,cacerts ca-bundles
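# Optionally, verify the configmap contains both bundles
oc describe configmap ca-bundles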

Setup injector

  1. Browse to the directory where the CloudBees Core archive has been unpacked, then go to the sidecar-injector folder.

  2. Create a project to deploy the sidecar injector.

    oc new-project sidecar-injector
  3. Create a signed cert/key pair and store it in an OpenShift secret that will be consumed by the sidecar injector deployment.

    ./webhook-create-signed-cert.sh \
     --service sidecar-injector-webhook-svc \
     --secret sidecar-injector-webhook-certs \
     --namespace sidecar-injector
  4. Patch the MutatingWebhookConfiguration by setting caBundle to the correct value from the OpenShift cluster

    cat sidecar-injector.yaml | \
        ./webhook-patch-ca-bundle.sh > \
        sidecar-injector-ca-bundle.yaml
  5. Switch to sidecar-injector project

    oc project sidecar-injector
  6. Deploy resources

    oc create -f sidecar-injector-ca-bundle.yaml
  7. Verify everything is running

The sidecar-injector-webhook pod should be running:

# oc get pods
NAME                                                  READY     STATUS    RESTARTS   AGE
sidecar-injector-webhook-deployment-bbb689d69-882dd   1/1       Running   0          5m
# oc get deployment
NAME                                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
sidecar-injector-webhook-deployment   1         1         1            1           5m

Configure namespace

  1. Label the namespace where CloudBees Core is installed with sidecar-injector=enabled

    oc label namespace mynamespace sidecar-injector=enabled
  2. Check

    # oc get namespace -L sidecar-injector
    NAME          STATUS    AGE       SIDECAR-INJECTOR
    default       Active    18h
    mynamespace   Active    18h       enabled
    kube-public   Active    18h
    kube-system   Active    18h

Verify

  1. Switch to the CloudBees Core project

    oc project mynamespace
  2. Deploy an app in the OpenShift cluster, using the sleep app as an example.

    # cat <<EOF | oc create -f -
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: sleep
        spec:
          containers:
          - name: sleep
            image: tutum/curl
            command: ["/bin/sleep","infinity"]
    EOF
  3. Verify injection has happened

    # oc get pods -o 'go-template={{range .items}}{{.metadata.name}}{{"\n"}}{{range $key,$value := .metadata.annotations}}* {{$key}}: {{$value}}{{"\n"}}{{end}}{{"\n"}}{{end}}'
    sleep-d5bf9d8c9-bfglq
    * com.cloudbees.sidecar-injector/status: injected

Conclusion

You are now all set to use your custom CA across your OpenShift cluster.

To pick up the new certificate bundle, restart Operations Center and any running Managed Masters. Newly scheduled build agents also pick up the certificate bundle, allowing connections to remote endpoints that use your certificates.
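
One way to restart Operations Center on OpenShift is to delete its pod and let the StatefulSet recreate it (a sketch; Managed Masters can be restarted from the Operations Center UI):

    oc delete pod cjoc-0
    oc rollout status sts cjoc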