Table of contents

Installing CloudBees Jenkins Enterprise


EKS cluster

Caution

This guide describes an older version of the EKS cluster instructions and is superseded by CloudBees Core EKS Cluster.

Please refer to CloudBees Core EKS Cluster for updated content.

To create an Amazon Elastic Kubernetes Service (EKS) cluster, refer to the official Amazon documentation at EKS Documentation.

More information on Kubernetes concepts is available from the Kubernetes site.

Important
The Kubernetes cluster requirements must be satisfied before CloudBees Jenkins Enterprise 2.x can be installed.

CloudBees Jenkins Enterprise Prerequisites

The CloudBees Jenkins Enterprise 2.x installer requires:

  • A running AWS EKS cluster 1.9+

  • With nodes that have at least 1 full CPU and 1 GB of memory available, which means nodes with at least 2 CPUs and 4 GB of memory

  • An NGINX Ingress Controller installed in the cluster (v0.9.0 minimum)

    • A DNS record pointing to your ingress controller load balancer

    • SSL certificates installed on the load balancer or the ingress controller

  • A namespace in the cluster (provided by your admin) with permissions to create Role and RoleBinding objects

  • A Kubernetes cluster storage class defined and ready to use

Refer to CloudBees Jenkins Enterprise 2.x architecture for an architectural overview.
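
Most of these prerequisites can be sanity-checked from a workstation where kubectl is configured against the cluster. A minimal verification sketch, assuming the namespace provided by your admin is called 'cje':

$ kubectl version --short                          # client and server (EKS) versions
$ kubectl get nodes                                # nodes are Ready with enough CPU/memory
$ kubectl get sc                                   # at least one storage class is defined
$ kubectl -n ingress-nginx get svc ingress-nginx   # NGINX Ingress Controller service exists
$ kubectl auth can-i create roles -n cje           # permission to create Role objects
$ kubectl auth can-i create rolebindings -n cje    # permission to create RoleBinding objects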

NGINX Ingress Controller

CloudBees Jenkins Enterprise requires an NGINX Ingress Controller. See the NGINX controller install guide for installation instructions. Follow the instructions to install NGINX with RBAC for AWS.

Kubectl Install

In summary, the following commands install the NGINX Ingress Controller for AWS with an L4 load balancer:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/provider/aws/service-l4.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/provider/aws/patch-configmap-l4.yaml

kubectl patch service ingress-nginx -p '{"spec":{"externalTrafficPolicy":"Local"}}' -n ingress-nginx
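
Once the controller pods are running, the ingress-nginx service is exposed through an AWS ELB. The ELB hostname, which is needed for the DNS record below, can be retrieved with:

$ kubectl -n ingress-nginx get svc ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"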

HTTPS Setup

To set up the NGINX ingress controller to support SSL termination, see the TLS Termination at Ingress chapter of the EKS Reference Architecture.

HTTPS Load Balancer

As an alternative, SSL termination can be set up at the AWS Load Balancer level.

To configure the AWS Load Balancer to perform SSL termination, you need to change the listener for port 443:

  • Set the 'Load Balancer Protocol' to 'HTTPS(Secure HTTP)'

  • Set the 'Instance Protocol' to 'HTTP'

  • Set the 'Instance Port' to match the 'Instance Port' of the listener for port 80

  • Click the 'Change' link next to 'SSL Certificate' and set up your certificate

[Figure: AWS ELB HTTPS listener]
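
The same change can also be scripted with the AWS CLI, assuming a Classic ELB (as created by the service-l4 manifest above); the load balancer name, instance port, and ACM certificate ARN below are placeholders for your own values:

$ aws elb delete-load-balancer-listeners --load-balancer-name <ELB_NAME> --load-balancer-ports 443
$ aws elb create-load-balancer-listeners --load-balancer-name <ELB_NAME> \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=<HTTP_INSTANCE_PORT>,SSLCertificateId=<ACM_CERTIFICATE_ARN>"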

DNS Record

Create a DNS record for the domain you want to use for CloudBees Jenkins Enterprise, pointing to your NGINX ingress controller load balancer. Since the AWS ELB has a redundant setup with multiple IPs, a DNS CNAME to the ELB DNS name is recommended.
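
If the domain is hosted in Route 53, for example, the CNAME can be created with the AWS CLI; the hosted zone ID, record name, and ELB DNS name below are placeholders:

$ aws route53 change-resource-record-sets --hosted-zone-id <HOSTED_ZONE_ID> --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "cloudbees-core.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "<ELB_DNS_NAME>"}]
      }
    }]
  }'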

Persistent volumes storage class

By default, EKS does not define a default storage class.

To use EBS disks for persistent storage, it is recommended to create at least a storage class of type 'gp2'.

See Kubernetes AWS storage class documentation for more information on all supported parameters.

Create a 'gp2-storage.yaml' file with the following content for 'gp2' type with encryption enabled:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
# Uncomment the following for multi zone clusters
# volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp2
  encrypted: "true"

Create the storage class:

$ kubectl create -f gp2-storage.yaml

To make the new storage class the default storage class:

$ kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

To list the storage classes:

$ kubectl get sc
NAME            PROVISIONER             AGE
gp2 (default)   kubernetes.io/aws-ebs   1d
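
To verify that dynamic provisioning works, a throwaway claim can be created against the new class; the claim name below is arbitrary:

echo "apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-gp2-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 1Gi" > test-gp2-claim.yaml

$ kubectl create -f test-gp2-claim.yaml
$ kubectl get pvc test-gp2-claim    # STATUS should become Bound (with the default immediate binding mode)
$ kubectl delete pvc test-gp2-claim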

Elastic Block Storage (EBS) restrictions

Kubernetes Persistent Volume life cycle

Kubernetes does not yet support fail-over to a different zone for persistent disks. This is partly due to the Kubernetes scheduling behavior where the persistent volume (PV) is created first and dictates which zone the pod will be deployed into, effectively tying the pod to the PV zone. This PV allocation does not take into consideration (as of Kubernetes 1.10) the pod constraints. It is also due to volumes being zone specific and the lack of a dynamic provisioner supporting a snapshot mechanism that would allow re-creating a volume from a snapshot in a different zone.

Managed Masters fail-over considerations

  • If the cluster has nodes in multiple zones, Managed Masters will be spread across all zones that have healthy nodes. This also means that all zones defined for nodes should have healthy nodes in order to provision Managed Masters; otherwise a Managed Master provisioning action might not succeed if its volume is assigned to a zone with no healthy nodes.

  • If a node running a Managed Master fails, the Managed Master will be re-started on a node in the same zone. This is because persistent disk volumes are tied to a specific zone.

  • If there are no healthy nodes in the zone, the Managed Master will NOT be able to restart in a different zone. The Managed Master will only be restarted once a healthy node is available in the original zone the volume was created in.
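
The zone spread of nodes and of existing persistent volumes can be inspected through the zone labels that Kubernetes applies automatically (label names as of Kubernetes 1.10):

$ kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
$ kubectl get pv -L failure-domain.beta.kubernetes.io/zone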

CloudBees Jenkins Enterprise Namespace

By default, a Kubernetes cluster instantiates a default namespace when it is provisioned, to hold the default set of Pods, Services, and Deployments used by the cluster.

Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:

$ kubectl get namespaces
NAME            STATUS    AGE
default         Active    13m
ingress-nginx   Active    8m
kube-public     Active    13m
kube-system     Active    13m

It is recommended to use a CloudBees Jenkins Enterprise specific namespace in the cluster, with permissions to create Role and RoleBinding objects. For example, to create a 'cje' namespace:

echo "apiVersion: v1
kind: Namespace
metadata:
  labels:
    name: cje
  name: cje" > cje-namespace.yaml

Create the 'cje' namespace using kubectl:

$ kubectl create -f cje-namespace.yaml

Switch to the newly created namespace.

$ kubectl config set-context $(kubectl config current-context) --namespace=cje

Run installer

CloudBees Jenkins Enterprise 2.x runs on a Kubernetes cluster. Kubernetes cluster installations are configured with YAML files. The CloudBees Jenkins Enterprise 2.x installer provides a cloudbees-core.yml file that is modified for each installation.

  • Download installer

  • Unpack installer

    $ export INSTALLER=cje2_2.107.2.1_kubernetes.tgz
    $ sha256sum -c $INSTALLER.sha256
    $ tar xzvf $INSTALLER
  • Prepare shell variables for your installation. Replace cloudbees-core.example.com with your domain name.

    $ DOMAIN_NAME=<YOUR_DOMAIN_NAME_FOR_CJE>

    If you do not have an available domain, you can use xip.io combined with the IP of the ingress controller load balancer. On AWS the load balancer is exposed as a hostname rather than an IP address, so resolve the hostname to an IP first:

    $ CLOUDBEES_CORE_HOSTNAME=$(kubectl -n ingress-nginx get svc ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
    $ CLOUDBEES_CORE_IP=$(dig +short "$CLOUDBEES_CORE_HOSTNAME" | head -n 1)
    $ DOMAIN_NAME="jenkins.$CLOUDBEES_CORE_IP.xip.io"
  • Edit the cloudbees-core.yml file for your installation

    $ cd cje2-kubernetes
    $ sed -e s,cloudbees-core.example.com,$DOMAIN_NAME,g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
  • Disable SSL redirection if you do not have SSL certificates.

    $ sed -e s,https://$DOMAIN_NAME,http://$DOMAIN_NAME,g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
    $ sed -e s,ssl-redirect:\ \"true\",ssl-redirect:\ \"false\",g < cloudbees-core.yml > tmp && mv tmp cloudbees-core.yml
  • Run the installer

    $ kubectl apply -f cloudbees-core.yml
    serviceaccount "cjoc" created
    role "master-management" created
    rolebinding "cjoc" created
    configmap "cjoc-config" created
    configmap "cjoc-configure-jenkins-groovy" created
    statefulset "cjoc" created
    service "cjoc" created
    ingress "cjoc" created
    ingress "default" created
    serviceaccount "jenkins" created
    role "pods-all" created
    rolebinding "jenkins" created
    configmap "jenkins-agent" created
  • Wait until CJOC is rolled out

    $ kubectl rollout status sts cjoc
  • Read the admin password

    $ kubectl exec cjoc-0 -- cat /var/jenkins_home/secrets/initialAdminPassword
    h95pSNDaaMJzz7r2GxxCjrGQ3t

Open Operations Center

CloudBees Jenkins Enterprise is now installed, configured, and ready to run. Open the CloudBees Jenkins Enterprise URL and log in with the initial admin password. Install the CloudBees Jenkins Enterprise license and the recommended plugins.

See Administering CloudBees Jenkins Enterprise for further information.

Adding Client Masters

Occasionally administrators need to connect existing masters to a CloudBees Jenkins Enterprise cluster. Existing masters connected to a CloudBees Jenkins Enterprise cluster are called "Client Masters" to distinguish them from Managed Masters. A master running on Windows is one example that requires a Client Master.

Note: Operations Center must accept JNLP requests.

Configure ports

  • Confirm Operations Center is ready to answer internal JNLP requests

    $ kubectl exec -ti cjoc-0 curl localhost:50000
    Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, MultiMaster, OperationsCenter2, Ping
    Jenkins-Version: 2.107.1.2
    Jenkins-Session: 21b9da08
    Client: 0:0:0:0:0:0:0:1
    Server: 0:0:0:0:0:0:0:1
  • Open the JNLP port (50000) in the Kubernetes cluster

    $ kubectl create -f cjoc-external-masters.yml

Configure load balancer

The AWS load balancer routes traffic from the public internet into the Kubernetes cluster. The standard installation opens the http port (80) and the https port (443). TCP port 50000 must also be opened, with traffic routed to the corresponding Kubernetes node port.

  • Find the instance port for JNLP

    $ kubectl get svc/cjoc-jnlp -o jsonpath='{.spec.ports[0].nodePort}'
    30748
  • Map JNLP TCP port 50000 to instance port with AWS load balancer

[Figure: Configure AWS load balancer]
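
Scripted with the AWS CLI against the Classic ELB created for ingress-nginx, and using the instance port from the previous step, the listener addition could look like the following sketch (the load balancer name is a placeholder):

$ aws elb create-load-balancer-listeners --load-balancer-name <ELB_NAME> \
    --listeners "Protocol=TCP,LoadBalancerPort=50000,InstanceProtocol=TCP,InstancePort=30748"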

Configure security group

The AWS security group acts as a firewall to control traffic from the public internet into the Kubernetes cluster. The standard installation opens the http port (80) and the https port (443). Port 50000 must be opened on the node security group. Incoming data may be restricted to specific addresses or address ranges using security group rules as needed.

  • Allow JNLP port 50000 through the node security group

[Figure: Configure AWS security group]
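
With the AWS CLI, an equivalent rule on the node security group could be added as follows; the security group ID is a placeholder and the CIDR should be narrowed as appropriate:

$ aws ec2 authorize-security-group-ingress --group-id <NODE_SECURITY_GROUP_ID> \
    --protocol tcp --port 50000 --cidr 0.0.0.0/0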

Test Connection

You can confirm that Operations Center is ready to receive external JNLP requests with the following command:

$ curl $DOMAIN_NAME:50000
Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, MultiMaster, OperationsCenter2, Ping
Jenkins-Version: 2.107.1.2
Jenkins-Session: b02dc475
Client: 10.20.4.12
Server: 10.20.5.10
Remoting-Minimum-Version: 2.60

Continue installation

Once ports and security are correctly configured in your cloud and on your Client Master, continue the instructions in Adding Client Masters.

Adding JNLP Agents

To provide connectivity for JNLP agents, the master must be configured to "Allow external agents". If the master is not configured as such, edit the master configuration, enable "Allow external agents", and then restart the master.

The "Allow external agents" will create a Kubernetes Service of type NodePort for the jnlp port. The NodePort exposed port can be retrieve by looking at the master service.

For example, if the master name is 'master-1', the NodePort service will be called 'master-1-jnlp'. In the example below, the exposed JNLP port (JNLP_NODE_PORT) for 'master-1' is 32075 and the master JNLP port (JNLP_MASTER_PORT) is 50004.

$ kubectl get svc master-1-jnlp
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
master-1-jnlp   NodePort   10.23.248.32   <none>        50004:32075/TCP   2h

Configure Load Balancer

The load balancer routes traffic from the public internet into the Kubernetes cluster. In order for the JNLP agent to connect to the master, the master JNLP port must be opened on the load balancer and traffic routed to it.

  • Map <JNLP_MASTER_PORT> TCP port to <JNLP_NODE_PORT> port under the AWS load balancer listeners (50004 to 32075 in the example above)
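
Using the AWS CLI and the example ports above, this could look like the following sketch (the load balancer name is a placeholder):

$ JNLP_NODE_PORT=$(kubectl get svc master-1-jnlp -o jsonpath='{.spec.ports[0].nodePort}')
$ aws elb create-load-balancer-listeners --load-balancer-name <ELB_NAME> \
    --listeners "Protocol=TCP,LoadBalancerPort=50004,InstanceProtocol=TCP,InstancePort=$JNLP_NODE_PORT"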

Configure node security group

The AWS security group acts as a firewall to control traffic from the public internet into the Kubernetes cluster. The standard installation opens the http port (80) and the https port (443). Port <JNLP_NODE_PORT> must be opened on the node security group. Incoming data may be restricted to specific addresses or address ranges using security group rules as needed.

  • Allow <JNLP_NODE_PORT> port through the node security group

Test Connection

You can confirm that the master is ready to receive external JNLP requests with the following command:

$ curl $DOMAIN_NAME:<JNLP_NODE_PORT>
Jenkins-Agent-Protocols: Diagnostic-Ping, JNLP4-connect, OperationsCenter2, Ping
Jenkins-Version: 2.107.1.2
Jenkins-Session: b02dc475
Client: 10.20.4.12
Server: 10.20.5.10
Remoting-Minimum-Version: 2.60

Continue installation

Once the JNLP port is correctly configured in your cloud, you can create a new 'node' in your master under 'Manage Jenkins → Manage Nodes'.

Note: the node should be configured with the Launch method 'Launch agent via Web Start'.

Auto-scaling nodes

Auto-scaling of nodes can be achieved by installing the Kubernetes cluster autoscaler.

IAM policy

The worker node running the cluster autoscaler will need access to certain Auto Scaling resources and actions.

A minimum IAM policy would look like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}

If the current NodeInstanceRole defined for the EKS cluster nodes does not have the policy actions required for the autoscaler, create a new 'eks-auto-scaling' policy as outlined above and then attach this policy to the NodeInstanceRole.
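
Assuming the policy document above is saved as 'eks-auto-scaling.json', creating the policy and attaching it to the node instance role could look like this; the account ID and role name are placeholders:

$ aws iam create-policy --policy-name eks-auto-scaling \
    --policy-document file://eks-auto-scaling.json
$ aws iam attach-role-policy --role-name <NODE_INSTANCE_ROLE_NAME> \
    --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/eks-auto-scaling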

Install cluster autoscaler

Examples for deploying the cluster autoscaler on AWS can be found here: AWS cluster autoscaler

As an example, let's use the single auto-scaling group deployment.

Download the 'cluster-autoscaler-one-asg.yaml' file

curl -O https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-one-asg.yaml

A few things will need to be modified to match your EKS cluster setup. Here is a sample extract of the autoscaler deployment section:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/cluster-autoscaler:v1.2.2
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --nodes=1:10:acme-eks-worker-nodes-NodeGroup-FD1OD4CZ0J77
          env:
            - name: AWS_REGION
              value: us-east-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-bundle.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-bundle.crt"
  1. If the EKS cluster is using Kubernetes 1.10 or above, use version 1.2.x of the autoscaler.

  2. Update the '--nodes=' command parameter. The syntax is 'ASG_MIN_SIZE:ASG_MAX_SIZE:ASG_NAME'. Multiple '--nodes' parameters can be defined to have the autoscaler scale multiple AWS auto-scaling groups.

  3. Update the AWS_REGION environment variable to match the EKS cluster region.

  4. If using Amazon Linux 2 AMIs for the nodes, set the SSL certificate path to '/etc/ssl/certs/ca-bundle.crt'.

To install the autoscaler:

$ kubectl create -f cluster-autoscaler-one-asg.yaml
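
To confirm the autoscaler is running and watching the auto-scaling group, check the deployment and its logs in the kube-system namespace:

$ kubectl -n kube-system get pods -l app=cluster-autoscaler
$ kubectl -n kube-system logs -f deployment/cluster-autoscaler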

Removing CloudBees Jenkins Enterprise

If you need to remove CloudBees Jenkins Enterprise from Kubernetes, use the following steps:

  • Delete all masters from Operations Center

  • Stop Operations Center

    kubectl scale statefulsets/cjoc --replicas=0
  • Delete CloudBees Jenkins Enterprise

    kubectl delete -f cloudbees-core.yml
  • Delete remaining pods and data

    kubectl delete pod,statefulset,pvc,ingress,service -l com.cloudbees.cje.tenant
  • Delete services, pods, persistent volume claims, etc.

    kubectl delete svc --all
    kubectl delete statefulset --all
    kubectl delete pod --all
    kubectl delete ingress --all
    kubectl delete pvc --all