Administering CloudBees Jenkins Enterprise 2.x


Managing agents

Caution

This guide is an old version of Managing Agents, and is superseded by the current documentation for CloudBees Core Managing Agents.

Please refer to CloudBees Core Managing Agents for updated content.

CloudBees Jenkins Enterprise enables Continuous Delivery as a Service.

CloudBees Jenkins Enterprise allows administrators to standardize their build environments and share those environments with their teams. Standardized build environments are easier to manage, easier to upgrade, and provide consistency for users. Refer to building pipelines on CloudBees Jenkins Enterprise for more details.

There may be times when a CloudBees Jenkins Enterprise standardized build environment can’t represent the required agent. Some examples include:

  • Operating systems such as Microsoft Windows, Oracle Solaris, FreeBSD, or OpenBSD

  • Tool licenses which only allow the tool to run on a dedicated agent

  • Security configurations which require a specific agent (code signing certificates, hardware security devices, etc.)

CloudBees Jenkins Enterprise provides advanced capabilities to manage platform-specific and purpose-built agents and to schedule work on agents.

Adding SSH agents

The CloudBees SSH Build Agents plugin is an alternative SSH agent connector that uses a Non-Blocking I/O architecture. This alternative connector shows different scalability properties from the standard SSH agent connector.

There are three main differences in scalability characteristics:

  • The Non-blocking I/O connector limits the number of threads that are used to maintain the SSH channels. When there are a large number of channels (i.e. many SSH agents), the Non-blocking connector uses fewer threads, so the Jenkins UI remains more responsive than with the standard SSH agent connector.

  • When the Non-blocking I/O connector requires more CPU resources than are available, it responds by applying back-pressure to the channels generating the load. This allows the system to remain responsive, at the cost of increased build times. It is important to note that under this type of load the traditional SSH agent connector typically loses the connection entirely, failing the corresponding builds.

  • The Non-blocking I/O connector is optimized for reduced connection time. For example, it avoids copying the agent JAR file unless necessary; and by default it suppresses the logging of the agent environment.

There are two important technical notes regarding SSH keys:

  • The SSH client library used in the Non-Blocking I/O connector currently only supports RSA, DSA, and ECDSA keys (ECDSA support was added in 2.0)

  • The maximum key size is determined by the Java Cryptography Extension (JCE) policy of the Jenkins Master’s JVM. Without installation of the unrestricted policy the RSA key size will be limited to 2048 bits. Refer to "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files" on the Oracle Java SE downloads page for more JCE information.
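You can query the effective limit directly on the master's JVM. The following one-liner is a quick check; it assumes a JDK that still ships the jrunscript tool, such as JDK 8:

```shell
# Prints the maximum RSA key size permitted by the JVM's JCE policy.
# 2147483647 (Integer.MAX_VALUE) means the unlimited policy is active;
# a restricted policy typically prints 2048 instead.
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("RSA"))'
```

Run this with the same JVM that runs the Jenkins master, since the limit is a property of that JVM's installed policy files.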

Versions of the plugin prior to 2.0 did not support connecting to Cygwin or other Microsoft Windows-based SSH servers.

Using the SSH Build Agents plugin

The plugin adds an additional launch method: Launch build agents via SSH (Non-blocking I/O). The configuration options are almost identical to those in the traditional SSH agents connector.

configure
Figure 1. Configuring the NIO SSH Agents launcher

The differences between the two plugins' configuration parameters are as follows:

  • If specified, the Prefix Start Agent Command has a space appended automatically before it is prepended to the agent launch command. The traditional SSH Agents connector requires that the user remember to end the prefix with a space or a semicolon in order to avoid breaking the agent launch command.

  • If specified, the Suffix Start Agent Command has a space prepended automatically before it is appended to the agent launch command. The traditional SSH Agents connector requires that the user remember to start the suffix with a space or a semicolon in order to avoid breaking the agent launch command.

  • The traditional SSH Agents connector always copies the agent environment variables to the agent launch log. The NIO SSH Agents connector provides this as an option that defaults to off. Logging the agent environment is only of use when debugging initial connection issues and has a performance impact on startup time. Once the agent channel has been established, the agent environment variables are accessible via the Node’s System Information screen.
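To illustrate the first two points, this sketch shows how the NIO connector effectively assembles the final launch command; the prefix and suffix values below are hypothetical examples, and remoting.jar stands in for the agent JAR:

```shell
PREFIX="nice -n 10"          # hypothetical Prefix Start Agent Command
SUFFIX="2>>agent-err.log"    # hypothetical Suffix Start Agent Command
AGENT_CMD="java -jar remoting.jar"

# The connector inserts the separating spaces itself, so neither the prefix
# nor the suffix needs a trailing/leading space or semicolon:
echo "${PREFIX:+$PREFIX }${AGENT_CMD}${SUFFIX:+ $SUFFIX}"
# prints: nice -n 10 java -jar remoting.jar 2>>agent-err.log
```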

Microsoft Windows

The CloudBees SSH Build Agents Plugin versions 2.0 and above support agents running Microsoft Windows. There are some restrictions when connecting Microsoft Windows agents over SSH:

  • SSH server implementations on Microsoft Windows do not all behave the same when it comes to launching processes and managing the I/O streams from those processes. As a result, not every SSH server implementation will work for launching Jenkins agents. At the time of writing, the recommended SSH server implementation is the Cygwin project’s port of OpenSSH.

    Note

    The Microsoft port of OpenSSH does not work correctly for connecting Jenkins agents. The way I/O streams are forwarded causes the connection to be terminated early. The plugin will log a warning if the Microsoft port of OpenSSH is detected, but will attempt to connect regardless.

  • The CloudBees recommended SSH server is Cygwin’s port of OpenSSH. Other SSH implementations on Windows may require special configuration and handling in order to function properly.

  • Microsoft Windows by default does not provide command line tools to verify the integrity of the agent JAR file. As a result, when launching agents over SSH the agent JAR file will always be transferred.

Configuring Windows agents

We have included agent installation instructions as a reference for Microsoft Windows administrators. These instructions have been validated by CloudBees to confirm that they are a minimal setup for Microsoft Windows Server 2012 using a GUI. If you cannot get any Microsoft Windows agents to connect over SSH, it may be useful to set up an agent using this procedure so that you can determine whether the problem is a network issue or an issue with the configuration of your Microsoft Windows operating system images.

Note

These instructions are provided as a working example. Additional steps and configuration settings may be required in your environment. The exact steps required depend on your own network risk profile and domain settings.

This procedure assumes the following prerequisites:

  • A server or virtual machine with at least 2GB of RAM and 60GB of disk space.

  • Installation of Microsoft Windows Server 2012 R2 x86_64 Standard (Server with a GUI).

  • All security updates from Microsoft have been applied to the operating system.

  • The server has not joined a domain, nor have any group policy files been applied to the server.

The first step is to install the Oracle Java Virtual Machine:

  1. Go to https://java.com/en/download

  2. Download and run the Java virtual machine installer (at the time of writing this was Version 8 Update 151).

    win java install
    Figure 2. Downloading and running the Oracle Java Runtime Environment installer
  3. Start a Command Prompt and verify that Java is correctly on the system PATH by running java -version. The output from the command should report the version of Java that has been installed.

The second step is to install Cygwin’s port of OpenSSH:

  1. Go to http://cygwin.com/

  2. Download and run the setup-x86_64.exe installer

    cygwin download setup
    Figure 3. Downloading and running the Cygwin Net Release Setup Program
  3. When the installer starts you should see a screen similar to the following:

    cygwin setup 1
    Figure 4. The Cygwin Net Release Setup Program welcome screen
  4. Click the Next button and when prompted select the Install from Internet option

    cygwin setup 2
    Figure 5. Selecting installation from internet
  5. The default options should be sufficient, but it may be necessary to configure an HTTP proxy if the server cannot connect directly, and you will need to select a local mirror to download from

    cygwin setup 3
    Figure 6. Choosing installation directory
    cygwin setup 4
    Figure 7. Selecting connection type
    cygwin setup mirror
    Figure 8. Selecting the download site
  6. The Select Packages screen is where configuration changes are required:

    cygwin setup 5
    Figure 9. The Select Packages screen of the Cygwin setup program
  7. In the search box type openssh

    cygwin setup 6
    Figure 10. Filtering packages to those containing openssh
  8. Expand the Net category

    cygwin setup 7
    Figure 11. The Net category expanded to show the OpenSSH server and client package.
  9. Click on the OpenSSH package to select it for installation

    cygwin setup 8
    Figure 12. The OpenSSH server and client package selected for installation.
  10. Proceed to the next screen of the installer. Ensure that the Select required packages (RECOMMENDED) option is enabled.

    cygwin setup 9
    Figure 13. The resolving dependencies screen.
  11. After selecting Next the installation should proceed.

    cygwin installing
    Figure 14. The installation in progress.
  12. When the installation is completed you will be asked if you want to create an icon on the desktop and a start menu item. It is recommended that you select both options.

    cygwin post install
    Figure 15. The post-installation status.

At this stage it is recommended to validate that Cygwin and Java are correctly installed:

  1. Launch a Cygwin64 Terminal

  2. Run java -version

  3. Verify that the output looks similar to the following (version numbers and user names may differ):

    cygwin validation
    Figure 16. Verifying that Java is installed in the Cygwin PATH.
  4. Verify that the cygrunsrv program is correctly installed by running cygrunsrv -h. The output should be a help screen similar to the following:

    cygwin validation 2
    Figure 17. Verifying that cygrunsrv is installed

The third step is to configure Cygwin’s port of OpenSSH:

  1. From a Cygwin64 Terminal launched as an Administrator, run the ssh-host-config script:

    ssh host config
    Figure 18. Running the ssh-host-config program
  2. Provide the following answers:

    1. Should StrictModes be used: yes

    2. Should privilege separation be used: yes

    3. new local account 'sshd': yes

    4. Do you want to install sshd as a service: yes

    5. Enter the value of CYGWIN for the daemon:

      Note
      A significant number of tutorials on the internet will incorrectly advise entering binmode ntsec at this step. Those instructions have been incorrect since the Cygwin 1.7 release (2009).
    6. Do you want to use a different name: no

    7. Create new privileged user account 'WIN-…​\cyg_server' (Cygwin name: 'cyg_server'): yes

    8. Please enter the password: (you will need to provide a password)

The ssh-host-config installation
*** Info: Generating missing SSH host keys
ssh-keygen: generating new host keys: RSA DSA ECDSA ED25519
*** Info: Creating default /etc/ssh_config file
*** Info: Creating default /etc/sshd_config file

*** Info: StrictModes is set to 'yes' by default.
*** Info: This is the recommended setting, but it requires that the POSIX
*** Info: permissions of the user's home directory, the user's .ssh
*** Info: directory, and the user's ssh key files are tight so that
*** Info: only the user has write permissions.
*** Info: On the other hand, StrictModes don't work well with default
*** Info: Windows permissions of a home directory mounted with the
*** Info: 'noacl' option, and they don't work at all if the home
*** Info: directory is on a FAT or FAT32 partition.
*** Query: Should StrictModes be used? (yes/no) yes

*** Info: Privilege separation is set to 'sandbox' by default since
*** Info: OpenSSH 6.1.  This is unsupported by Cygwin and has to be set
*** Info: to 'yes' or 'no'.
*** Info: However, using privilege separation requires a non-privileged account
*** Info: called 'sshd'.
*** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep.
*** Query: Should privilege separation be used? (yes/no) yes
*** Info: Note that creating a new user requires that the current account have
*** Info: Administrator privileges.  Should this script attempt to create a
*** Query: new local account 'sshd'? (yes/no) yes
*** Info: Updating /etc/sshd_config file

*** Query: Do you want to install sshd as a service?
*** Query: (Say "no" if it is already installed as a service) (yes/no) yes
*** Query: Enter the value of CYGWIN for the daemon: []
*** Info: On Windows Server 2003, Windows Vista, and above, the
*** Info: SYSTEM account cannot setuid to other users -- a capability
*** Info: sshd requires.  You need to have or to create a privileged
*** Info: account.  This script will help you do so.

*** Info: It's not possible to use the LocalSystem account for services
*** Info: that can change the user id without an explicit password
*** Info: (such as passwordless logins [e.g. public key authentication]
*** Info: via sshd) when having to create the user token from scratch.
*** Info: For more information on this requirement, see
*** Info: https://cygwin.com/cygwin-ug-net/ntsec.html#ntsec-nopasswd1

*** Info: If you want to enable that functionality, it's required to create
*** Info: a new account with special privileges (unless such an account
*** Info: already exists). This account is then used to run these special
*** Info: servers.

*** Info: Note that creating a new user requires that the current account
*** Info: have Administrator privileges itself.

*** Info: No privileged account could be found.

*** Info: This script plans to use 'cyg_server'.
*** Info: 'cyg_server' will only be used by registered services.
*** Query: Do you want to use a different name? (yes/no) no
*** Query: Create new privileged user account 'WIN-...\cyg_server' (Cygwin name: 'cyg_server')? (yes/no) yes
*** Info: Please enter a password for new user cyg_server.  Please be sure
*** Info: that this password matches the password rules given on your system.
*** Info: Entering no password will exit the configuration.
*** Query: Please enter the password:
*** Query: Reenter:

*** Info: User 'cyg_server' has been created with password 'phish!998'.
*** Info: If you change the password, please remember also to change the
*** Info: password for the installed services which use (or will soon use)
*** Info: the 'cyg_server' account.


*** Info: The sshd service has been installed under the 'cyg_server'
*** Info: account.  To start the service now, call `net start sshd' or
*** Info: `cygrunsrv -S sshd'.  Otherwise, it will start automatically
*** Info: after the next reboot.

*** Info: Host configuration finished. Have fun!
  1. Start the CYGWIN sshd service from the Services management panel

    openssh start
    Figure 19. The CYGWIN sshd service in the Services management panel
    openssh starting
    Figure 20. Starting the CYGWIN sshd service
  2. Open Windows Firewall

    firewall 1
    Figure 21. The Windows Firewall with Advanced Security management panel
  3. Select Inbound rules in the left hand pane

    firewall 2
    Figure 22. The inbound firewall rules
  4. Select New Rule in the right hand pane. Select a Port rule as the type of rule to create

    firewall 3
    Figure 23. Creating a Port rule
  5. For the Protocol and Ports, select TCP and port 22

    firewall 4
    Figure 24. Configuring the protocol and ports
  6. For the Action, select Allow the connection

    firewall 5
    Figure 25. Configuring the action.
  7. For the Profile, select all networks

    firewall 6
    Figure 26. Configuring the profile
  8. Give the rule the name SSH

    firewall 7
    Figure 27. Configuring the rule name
  9. Verify that the inbound rule is created and enabled

    firewall 8
    Figure 28. The newly created SSH inbound rule
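Equivalently, the same inbound rule can be created from an elevated Command Prompt rather than the GUI; this creates the identical allow rule for TCP port 22:

```
netsh advfirewall firewall add rule name="SSH" dir=in action=allow protocol=TCP localport=22
```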

The final step is to configure the user account that Jenkins will log in with to use SSH key-based authentication. There is no requirement to use key-based authentication, but it presents significant security advantages and is the recommended authentication mechanism when connecting agents using SSH.

  1. Login to the Microsoft Windows server with the account that Jenkins will connect as.

  2. Open a Cygwin64 Terminal and run the command ssh-keygen -t ecdsa (other key types are supported)

    Note

    A 256-bit ECDSA key is stronger than a 2048-bit RSA key. This means that without the JCE Unlimited Strength Jurisdiction Policy Files applied, ECDSA keys will be more secure than RSA keys, as the JVM will be limited to 2048-bit RSA keys.

    Generating an ECDSA key for the Jenkins user.
    $ ssh-keygen -t ecdsa
    Generating public/private ecdsa key pair.
    Enter file in which to save the key (/home/Jenkins/.ssh/id_ecdsa):
    Created directory '/home/Jenkins/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/Jenkins/.ssh/id_ecdsa.
    Your public key has been saved in /home/Jenkins/.ssh/id_ecdsa.pub.
    The key fingerprint is:
    SHA256:JfjsfW6pbmW/ENOe400/MIrmB1jxCE8Cy68/cUMoEik Jenkins@WIN-62JQBI8M63K
    The key's randomart image is:
    +---[ECDSA 256]---+
    |    ...          |
    | E o. .+ o       |
    |  . .o. B =      |
    |   . ..+ B . .   |
    |    . ..S   o .  |
    |      .+ =  o* . |
    |     .  + =o+o* .|
    |      .. o.=oo.=.|
    |       .+++o. o.+|
    +----[SHA256]-----+
    generating key
    Figure 29. Generating the SSH keypair
  3. Next change the working directory to ~/.ssh by running cd ~/.ssh

  4. Now create the authorized_keys file by copying the public key that we have just generated: cp id_ecdsa.pub authorized_keys

  5. Finally, you need to extract the private key from the server. If the server is running as a virtual machine, this is typically just a matter of displaying the private key on the screen and using copy & paste to transfer it to the host operating system. With a physical server it may be easier to use sftp to copy the file out.

    extract key
    Figure 30. Extracting the SSH private key from a virtual machine
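Steps 2–4 above can be run as a single sequence in the Cygwin64 Terminal. The sketch below performs the same commands in a throwaway home directory so it is safe to copy and experiment with; on the real agent, drop the first two lines and work in the actual home directory:

```shell
HOME=$(mktemp -d)                              # throwaway home, for demonstration only
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"

ssh-keygen -t ecdsa -N '' -f "$HOME/.ssh/id_ecdsa" -q   # step 2: generate the keypair
cd "$HOME/.ssh"                                          # step 3: change to ~/.ssh
cp id_ecdsa.pub authorized_keys                          # step 4: create authorized_keys
chmod 600 authorized_keys                                # StrictModes requires tight permissions
```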

Now we are ready to connect the agent to Jenkins.

  1. Create the SSH key in Jenkins

    add private key
    Figure 31. Adding the SSH private key into Jenkins
  2. Configure the agent to use the SSH key credential.

    configured agent
    Figure 32. Configuring the agent

    If you select Manually trusted key Verification Strategy and leave Require manual verification of initial connection checked then Jenkins will refuse to connect until you approve the SSH Host key from the SSH Host Verification action. The SSH Host Verification action is only available after the first connection attempt.

    verify host
    Figure 33. Verifying the SSH host key to enable connection.
  3. The agent should connect.

    connection established
    Figure 34. The agent connection established

Agent scheduling

CloudBees Jenkins Enterprise includes powerful scaling capabilities for growing and changing organizations. For more on scaling CloudBees Jenkins Enterprise, refer to CD as a Service. In addition to worker scaling, CloudBees Jenkins Enterprise also includes the agent scheduling facilities described below.

Even scheduler

When using many agent nodes, Jenkins needs to decide where to run any given build. This is called "scheduling", and there are several aspects to consider.

First, it can be necessary to restrict which subset of nodes can execute a given build. For example, if your job is to build a Windows installer, chances are it depends on some tools that are only available on Windows, and therefore you can only execute this build on Windows nodes. This portion of the scheduling decision is controlled by Jenkins core.

The second part is determining which one of all the qualifying nodes carries out the build, and that is what this plugin deals with.

Default Behavior

To better understand what this plugin does, let us examine the default scheduling algorithm of Jenkins. How does it choose one node out of all the qualifying nodes?

By default, Jenkins employs the algorithm known as consistent hashing to make this decision. More specifically, it hashes the name of the node, in numbers proportional to the number of available executors, then hashes the job name to create a probe point for the consistent hash. More intuitively speaking, Jenkins creates a priority list for each job that lists all the agents in their "preferred" order, then picks the most preferred available node. This priority list is different from one job to another, and it is stable, in that adding or removing nodes generally only causes limited changes to these priority lists.

As a result, from the user’s point of view, it looks as if Jenkins tries to always use the same node for the same job, unless it’s not available, in which case it’ll build elsewhere. But as soon as the preferred node is available, the build comes back to it.
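The priority-list behaviour can be sketched in a few lines of shell. This is a toy illustration only, not the actual Jenkins implementation (which also weights nodes by executor count): for a given job, every node name is hashed together with the job name, and the node with the lowest hash is the preferred one.

```shell
# Rank nodes for a job by hashing "node/job"; the lowest hash wins.
# The ranking is stable: adding or removing a node only changes the
# preferred node for jobs whose hashes neighbor it, not the whole mapping.
prefer() {
  job=$1; shift
  for node in "$@"; do
    printf '%s %s\n' "$(printf '%s/%s' "$node" "$job" | md5sum | cut -c1-8)" "$node"
  done | sort | head -n 1 | cut -d' ' -f2
}

prefer release-build agent-a agent-b agent-c   # always the same node for this job
```

Running `prefer` repeatedly for the same job always returns the same node, while different jobs generally land on different nodes, which is the locality behaviour described above.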

This behaviour is based on the assumption that it is preferable to use the same workspace as much as possible, because SCM updates are more efficient than SCM checkouts. In a typical continuous integration situation, each build only contains a limited number of changes, so indeed updates (which only fetch updated files) run substantially faster than checkouts (which refetch all files from scratch.)

This locality is also useful for a number of other reasons. For example, on a large Jenkins instance with many jobs, this tends to keep the number of workspaces on each node small. Some build tools (such as Maven and RVM) use local caches, and they work faster if Jenkins keeps building a job on the same node.

However, the notable downside of this strategy is that when each agent is configured with multiple executors, it doesn’t try to actively create balanced load on nodes. Say you have two agents X and Y, each with 4 executors. If at one point X is building FOO #1 and Y is completely idle, then on average, the upcoming BAR #1 still gets assigned to X with 3/7 chance (because X has 3 idle executors and Y has 4 idle executors.)

Even Loading Strategy

What this plugin offers is a different scheduling algorithm, which we refer to as "even loading strategy".

Under this strategy, the scheduler prefers idle nodes absolutely over nodes that are doing something. Thus in the above example, BAR #1 will always run on Y, because X is currently building FOO #1. Even though it still has 3 idle executors, it will not be considered so long as there are other qualifying agents that are completely idle.

However, idle executors are still idle, so if more builds are queued up, they will eventually occupy the other 3 executors of X, thereby using this agent to its fullest capacity. In other words, with this algorithm, the total capacity of the system does not change—only the order in which the available capacity is filled.

The strength of this algorithm is that you are more likely to get a fully idle node. Quite simply, executing a build on a fully idle system is faster than executing the same thing on a partially loaded system, all else being equal.

However, the catch is that all else may not be equal. If the node does not have a workspace already initialized for this job, you’ll pay the price of fully checking out a new copy, which can cancel other performance gains. In a nutshell, even loading is most useful for jobs which are slow to build, but for which the very first build on a node is not notably slower than any other.

Using even scheduler

This plugin adds two ways to control which algorithm to use for a given job:

Global preference

Picks the scheduling algorithm globally for all jobs, unless specifically overridden by individual jobs.

Per-job preference

Picks the scheduling algorithm for a specific job, regardless of the global preference.

Selecting Global Preference

Set global preferences from Manage Jenkins → Configure System → Default Scheduler Preference. Selecting "Prefer execution on idle nodes over previous used nodes" makes the even loading strategy the default strategy.

Selecting Per-Job Preference

Go to the configuration screen of a job, then select "Override the default scheduler preference". This will activate the per-job override. If "Prefer execution on idle nodes over previous used nodes" is selected, Jenkins will use the even loading strategy for this job. Otherwise Jenkins will use the default scheduling algorithm.

Label Throttling

The Label Throttle Build plugin brings hypervisor-aware scheduling to Jenkins. The plugin allows users to limit the number of builds on over-subscribed VM guests on a particular host.

When agents are hosted on virtual machines and share the same underlying physical resources, Jenkins may think that there is more capacity available for builds than there really is.

For example, in such an environment, Jenkins might think that there are 10 agents with 2 executors each, but in reality the physical machine cannot execute 20 concurrent builds without thrashing. The number is usually much lower; say, 4.[1] This is particularly the case when you have a single-system hypervisor, such as VMWare ESXi, VirtualBox, etc.

Every time a new build is to start, Jenkins schedules it to one of the available virtual agents. However, in this particular case the underlying physical infrastructure cannot support all the virtual agents running their respective builds concurrently.

CloudBees Jenkins Enterprise allows you to define an actual limit to the number of concurrent builds that can be run on the system. One can group agents together, then assign a limit that specifies how many concurrent builds can happen on all the agents that belong to that group. In this way, CloudBees Jenkins Enterprise avoids overloading your hypervisor host machine.[2]

The benefit of using this plugin is that builds run much faster as the underlying physical machines are not overloaded anymore.

Installing label throttling

Enable the CloudBees Label Throttle Build plugin in the plugin manager as shown in Install from the plugin manager. Restart CloudBees Jenkins Enterprise to enable the plugin.

throttle label
Figure 35. Install from the plugin manager
Configuring a label throttle

First, decide on a label and assign it to all the agents that you’d like to group together. For example, you can use the hypervisor host name as a label, and put that label on all the agents that are the virtual machines on that host. This can be done from the agent configuration page as shown in Set appropriate label on the agent configuration page.

throttle enter label
Figure 36. Set appropriate label on the agent configuration page

Then click the newly entered label to jump to the label page as shown in Go to the labels page.

throttle click label
Figure 37. Go to the labels page

Then configure this label and enter the limit as shown in Set limit on the hypervisor.

throttle set limit
Figure 38. Set limit on the hypervisor

With this setting, the total number of concurrent builds on hypervisor1 is limited to 2, and CloudBees Jenkins Enterprise enforces this, as shown by the executor state in Label Throttle Build plugin in action. Two builds are already running, so the third job sits in the queue.

throttle usage
Figure 39. Label Throttle Build plugin in action
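Conceptually, the label limit behaves like a counting semaphore shared by every agent carrying the label. The sketch below illustrates the idea only; Jenkins enforces the limit inside its build queue, not with lock files:

```shell
SLOTS=$(mktemp -d)   # stand-in for the label's shared counter
LIMIT=2              # the limit configured on the hypervisor1 label

acquire_slot() {     # prints a slot number, or fails when all slots are busy
  i=1
  while [ "$i" -le "$LIMIT" ]; do
    if mkdir "$SLOTS/slot$i" 2>/dev/null; then echo "$i"; return 0; fi
    i=$((i + 1))
  done
  return 1           # no free slot: the build waits in the queue
}
release_slot() { rmdir "$SLOTS/slot$1"; }
```

Two builds acquire the two slots; a third attempt fails, so that build stays queued until one of the first two releases its slot — exactly the behaviour shown in the figure.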

VMWare Pool Auto-Scaling

The VMware plugin connects to one or more VMware vSphere installations and uses virtual machines on those installations for better resource utilization when building jobs. Virtual machines on a vSphere installation are grouped into named pools. A virtual machine may be acquired from a pool when a job needs to be built on that machine, or when that machine is assigned to a build of a job for additional build resources. When such a build has completed, the acquired machine(s) are released back to the pool for use by the same or other jobs. When a machine is acquired it may be powered on, and when a machine is released it may be powered off.

This plugin requires VMware vSphere 4.0 or later.

Configuration

Before jobs can utilize virtual machines, it is necessary to configure one or more machine centers that connect to vSphere installations, and one or more machine pools within those machine centers from which builds may acquire machines.

From the main page click on "Pooled Virtual Machines", then click on "Configure" to go to the plugin configuration page.

After configuration, the "Pooled Virtual Machines" page will display the current list of configured machine centers. From this page, click on a machine center to see its list of machine pools, and from that page click on a machine pool to see its list of machines.

Machine Centers

To configure a machine center, enter: a name for the center, which will be referred to later when configuring clouds or jobs; the host name of the vCenter service that manages the vSphere installation; and the user name and password of a user who can authenticate to the vCenter and is authorized to perform the appropriate actions on machines in pools.

One or more machine centers may be configured to the same or different vCenter. For example, different pools may require different users that have authorized capabilities to the same vCenter, or there may be multiple vCenters that can be used. Details can be verified by clicking on the "Test Connection" button.

Machine Pools

One or more machine pools may be added to a machine center. There are currently two types of machine pool that can be added: a static pool and a folder pool. In either case, power-cycling and power-on wait conditions may be configured for all machines in such pools.

It is guaranteed that a virtual machine will be acquired at most once, even if that machine is a member of two or more pools of two or more centers.

Static Pools

To configure a static machine pool, enter a name for the pool, which will be referred to later when configuring clouds or jobs. Then, add one or more static machines to the pool. The name of a static machine is the name of the virtual machine as presented in the vCenter.

Note that there can be two or more virtual machines present in the vCenter with the same name, for example if those machines are located in separate folders or vApps. In such cases it is undetermined which machine will be associated with the configuration.

If the machine is assigned to a build then a set of optional properties, "Injected Env Vars", can be declared that will be injected into the build as environment variables.

Folder Pools

To configure a folder machine pool, enter a name for the pool, which will be referred to later when configuring clouds or jobs. Then, declare the path to a folder or vApp; the machines contained in that folder or vApp comprise the pool of machines. If the "recurse" option is selected, machines in sub-folders or sub-vApps will also be included in the pool.

If a machine is assigned to a build, then the IP address of the virtual machine will be declared in the environment variable "VMIP", which is injected into the build.
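A build step can then address the acquired machine through that variable, for example as below; the default on the first line exists only so the snippet runs standalone, since in a real build the plugin sets VMIP:

```shell
VMIP=${VMIP:-192.0.2.10}   # injected by the plugin; placeholder default for standalone runs
echo "Running integration tests against $VMIP"
```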

Virtual machines may be added to and removed from the folder or vApp without requiring changes to the configuration, so such pools are more dynamic than static pools.

Power Cycling

Power-on actions can be specified after a machine has been acquired from a pool, but before the machine has been assigned to a build. Power-off actions can be specified after a build has finished with the machine, but before that machine has been released back to the pool.

The set of power-on actions are as follows:

Power up

Powers up the machine. If the machine is already powered up this action does nothing.

Revert to last snapshot and power up

Revert the machine to the last known snapshot, and then power up. This can be useful to power up the machine in a known state.

Do nothing

No actions are performed on the machine and it is assumed the machine is already powered on.

The set of power-off actions are as follows:

Power off

Powers off the machine. This can be useful to save resources. Note that it can take some time for a machine to be powered on and fully booted, hence builds may take longer if the power-cycling represents a significant portion of the overall build time.

Suspend

Suspend the machine. This can be useful to save resources while being able to power on the machine more quickly.

Take snapshot after power off

Power off the machine, and then take a snapshot.

Take snapshot after suspend

Suspend the machine, and then take a snapshot.

Do nothing

No actions are performed on the machine, and it will be left powered-on.

======= Power-on Wait Conditions

If power-on actions are configured, wait conditions may also be configured to ensure that the machine is in an appropriate state after it has been powered on and before it is assigned to a build.

The set of power-on wait conditions are:

Wait for a TCP port to start listening

A timeout, 5 minutes by default, can be specified. If waiting for the TCP port to become active takes longer than the timeout, then the machine will be released back to the pool and an error will result.

Wait for VMware Tools to come up

Optionally, also wait for the machine to obtain an IP address. A timeout, 5 minutes by default, can be specified. If waiting for VMware Tools takes longer than the timeout, then the machine will be released back to the pool and an error will result.
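The TCP wait condition behaves roughly like the following sketch: poll the port until it accepts connections, and fail (so the machine is released back to the pool) once the timeout elapses. This is an illustrative reimplementation, not the plugin's actual code; the polling interval is an assumption:

```python
import socket
import time

def wait_for_tcp(host, port, timeout=300, interval=5):
    """Poll until host:port accepts TCP connections.

    Raises TimeoutError after `timeout` seconds, mirroring the documented
    behavior of failing the acquisition when the port never starts listening.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the service is up; close immediately.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)  # port not listening yet, retry
    raise TimeoutError(f"{host}:{port} did not start listening within {timeout}s")
```

With the default 5-minute timeout this matches the documented outcome: success as soon as the port listens, an error otherwise.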

Building Jobs on Virtual Machines

To declare a machine pool as an agent pool, where machines in the pool are agents used to build jobs, it is necessary to configure a VMware cloud. From the main page of Jenkins click "Manage Jenkins", click "Configure", go to the "Cloud" section, click "Add a new cloud", and select "Pooled VMware Virtual Machines". Select the machine center and a pool from that machine center that shall be used as the pool of agents. Then, configure the other parameters as appropriate. For example, if all machines are Unix machines configured with SSH and sharing the same SSH credentials, then select the "Launch slave agents on Unix via SSH" option.

When a build whose label matches the configured VMware cloud is placed on the build queue, a machine will be acquired from the pool, powered on (if configured), and added as a node assigned to build the job once the power-on wait conditions have been met.

If you have selected the Only one build per VM checkbox in the cloud configuration, then after the build completes the machine will be powered off (if configured) and released back to the pool. (In this case it only makes sense to configure one executor.) Otherwise, the agent may accept multiple concurrent builds, according to the executor count, and will remain online for a while after the initial build in case other builds waiting in the queue can use it; Jenkins will release the agent back to the pool only after it has been idle for a while and no longer appears to be needed.

Note that if there are no machines free in the machine pool then the build will wait in the queue until a machine becomes available.

Reserving a Virtual Machine for a build

To reserve a machine for a build go to the configuration page of an appropriate job, go to the "Build Environment" section, select "Reserve a VMWare machine for this build", and select the machine center and a pool from that machine center that shall be used as the pool of reservable machines.

When a build of such a configured job is taken from the build queue, the build will start, a machine will be acquired from the pool and powered on (if configured), and the build will wait until the power-on wait conditions have been met. After the build completes, the machine will be powered off (if configured) and released back to the pool.

The build itself is not run on this machine; it is merely made available. To actually use the VM from your build, you would need to somehow connect to it, typically using the VMIP environment variable.
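For example, a build step might first verify that the reserved machine is reachable before driving it remotely. A sketch, assuming the VM exposes SSH on port 22 (the port is an assumption; only the VMIP variable comes from the plugin):

```python
import os
import socket

def check_reserved_vm(port=22, timeout=30):
    """Confirm the VM reserved for this build is reachable.

    The plugin injects the reserved machine's address as the VMIP
    environment variable; the port to probe (22 here) is our own assumption.
    """
    vm_ip = os.environ["VMIP"]
    with socket.create_connection((vm_ip, port), timeout=timeout):
        return vm_ip
```

A shell step could equally run `ssh "$VMIP" ...`; the point is that the build script, not Jenkins, is responsible for talking to the reserved machine.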

Note that if there are no machines free in the machine pool then the build will wait until a machine becomes available.

You may also select multiple machines. In this case the build will proceed only once all of them are available. (The build may tentatively acquire some machines, release them, and later reacquire them, in order to avoid deadlocks.)
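This tentative acquire-and-release behavior is a standard deadlock-avoidance technique. A simplified sketch of the idea (not the plugin's actual implementation), modeling each machine as a lock:

```python
import threading
import time

def acquire_all(locks, retry_delay=0.05):
    """Tentatively acquire every lock; if any is busy, release what we
    hold and retry later.

    This all-or-nothing approach avoids deadlock when two builds contend
    for overlapping sets of machines: neither ends up holding one machine
    forever while waiting for the other.
    """
    while True:
        held = []
        for lock in locks:
            if lock.acquire(blocking=False):
                held.append(lock)
            else:
                # One machine was unavailable: back off, release everything,
                # and retry the whole set later.
                for h in held:
                    h.release()
                break
        else:
            return  # all locks acquired
        time.sleep(retry_delay)
```

If two builds each hold one of two machines the other needs, the all-or-nothing retry ensures at least one of them backs off, so neither waits forever.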

Taking virtual machines offline

From time to time you may need to perform maintenance work on particular VMs. During this time you would prefer for these VMs not to be used by Jenkins.

Click Pooled Virtual Machines, click a (static or folder) pool, and click a machine in that pool. Press the Take Offline button to prevent Jenkins from reserving it. (You can enter a description of what you are doing for the benefit of other users.) Later, when the machine is ready, return to the same page and press Take Online.

Offline status is remembered across Jenkins restarts.


1. This number of course depends on the machine’s specifications and configuration.
2. This is very handy in combination with the CloudBees Jenkins Enterprise VMWare Autoscaling plugin.