Value Streams Concepts

A CloudBees DevOptics Value Stream models a complex continuous delivery process. It can be assembled from multiple pipelines. A Value Stream shows a series of interconnected gates or steps that deliver value to a customer. Those steps are presented in the "Value Stream view". The Value Stream view allows you to:

  • Track changes

  • Detect stalled and delayed changes

  • Identify failures and blockages

  • Find components ready for testing

  • View contributing components

00 delivery view overview

Value Streams

A CloudBees DevOptics Value Stream is a visual model of the software delivery process. Value Streams are defined using phases and gates. A gate shows one or more tickets. Value Stream progress is tracked based on changes to tickets, commits, and artifacts.

Phases

Phases represent the flow of changes from initial implementation to delivery (promotion/deployment). The names of the phases and the number of phases can be customized for each Value Stream.

In simpler applications, phase definitions might reference software lifecycle terms like "Development", "Testing", "Deployment", and "Maintenance". In more complex applications, phase definitions might reference architectural elements like "Libraries", "Services", "Integration", and "Deploy".

00 delivery view phases

Gates

Gates are the Jenkins projects (Jobs) that create and package software artifacts. The condition of the gate is indicated by:

  • Number of tickets currently in that gate

  • Solid green for success

  • Pulsing blue for running

  • Red for failed

The circle that represents the gate is drawn with colored segments that show the status of the tickets in the gate. Tickets are assigned to a gate when they are mentioned in a commit processed by the Jenkins project of that gate. If a project has built a ticket successfully, a green segment is shown. If a later Jenkins project builds the same ticket and fails, a red segment is added to show the failure. When the build of a project succeeds, all tickets for that project appear as green segments of the circle.

Gate status and ticket status are key parts of the Value Stream view.

00 delivery view gates

Tickets

Tickets are the Jira® issues that describe work to deliver the software. Tickets are assigned to a gate when they are mentioned in a commit.

Tickets move from gate to gate as development progresses. Tickets:

  • Move from one gate to another as artifacts created in that gate are delivered to later gates.

  • Remain in a gate until the artifact created in that gate is used in a later gate.

00 delivery view tickets

Commits

Commits are the changes applied to the software and committed to your SCM. Commits are the basic "unit of value" tracked by CloudBees DevOptics. Commits may reference one or more tickets by mentioning the ticket in the commit message. When a commit references a ticket, the ticket is assigned to the gate that processes the commit. CloudBees DevOptics supports only Git commits. Other source code control systems (Mercurial, Subversion, etc.) are not supported.

Artifacts

Artifacts are reusable or shareable components generated by Jenkins Jobs (gates), such as binary files (JAR files, script files, etc.), NPM packages, Docker images, and RPMs.

Artifacts are related back to SCM commits because artifacts are built from source checked out from the SCM. For that reason, artifacts are also considered a unit of "value" that can be tracked.

To enable the tracking of artifacts produced and consumed by CI Jobs, DevOptics provides Value Stream Artifact Tracking.

Warning
Use of Jenkins fingerprinting for artifact tracking in DevOptics is deprecated. Support for it will soon be withdrawn.

Define Sample Projects

To help you explore CloudBees DevOptics, define a sample Value Stream using two projects. The two projects are:

Application Build

In this example, a Jenkins Pipeline builds the application. Freestyle projects are also fully supported by CloudBees DevOptics. See Tracking the production/consumption of artifacts for how to track artifacts in Freestyle Jobs.

When this example Jenkins Pipeline runs, a file 'application.sh' is created. The Job then tells CloudBees DevOptics that it has "produced" this artifact by using the gateProducesArtifact pipeline step. See Tracking the production/consumption of artifacts.

  • Create a new GitHub repository for the sample application

  • Create a Jenkinsfile in the GitHub repository

The Jenkinsfile in the GitHub repository should contain:

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        // Create the application.sh artifact; its contents change with every build.
        writeFile file: "application.sh", text: "echo Built ${BUILD_ID} of ${JOB_NAME}"
        // Archive it so the deploy project can fetch it with the Copy Artifact plugin.
        archiveArtifacts artifacts: 'application.sh'
        // Tell CloudBees DevOptics that this run produced the artifact.
        gateProducesArtifact file: 'application.sh', label: "application.sh:${BUILD_ID}"
      }
    }
  }
}

Create a Jenkins project with the GitHub sample application repository. From Jenkins, click Jenkins > New Item.

24 new item

Create a Pipeline

25 new pipeline

Add the GitHub repository definition and save the Pipeline by pressing Save.

26 add scm to pipeline

Run the Pipeline by pressing Build Now.

27 build now

Build results show the Pipeline completed successfully and generated an artifact, application.sh. The contents of application.sh change each time a build is performed.

28 build results

Application Deploy

A Jenkins Pipeline deploys the application. The project uses the Copy Artifact plugin to copy the results from the build project.

Install the Copy Artifact plugin by opening Jenkins > Manage Jenkins > Manage Plugins.

04 manage plugins

  • Click the "Available" tab and enter a "Filter" value of "copy artifact".

  • Select the checkbox for "Copy Artifact Plugin".

  • Press Download now and install after restart.

31 install copy artifact plugin

Restart Jenkins when the install is complete. The "Copy Artifact plugin" is ready to use.

32 copy artifact installed

Create a Jenkins Pipeline to copy the artifact from the Application Build. From Jenkins, click Jenkins > New Item.

33 new item

Create a Pipeline

34 new pipeline

Add the Pipeline definition by pressing Save after inserting the Pipeline script into the editor. The Pipeline script should be:

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        // Copy application.sh from the most recent successful 'application-build' run.
        step([$class: 'CopyArtifact', projectName: 'application-build'])
        // Tell CloudBees DevOptics that this run consumed the artifact.
        gateConsumesArtifact file: 'application.sh'
      }
    }
  }
}

The Pipeline copies the 'application.sh' artifact from the most recent successful Application Build run. The build tells CloudBees DevOptics that it consumed the 'application.sh' file by calling the gateConsumesArtifact pipeline step. See Tracking the production/consumption of artifacts.

Run the Pipeline by pressing Build Now.

36 copy artifact build now

Build results show the Pipeline completed successfully and copied the artifact, application.sh, from the earlier Pipeline. Each run of the Application Deploy Pipeline copies the most recent successfully built application.sh from the Application Build Pipeline.

37 copy artifact complete

Create a Value Stream

Open the CloudBees DevOptics editor by clicking Create a Value Stream. This opens a new value stream in editor mode with default phases and gates in place.

Phases

the milestones or stages used to deliver the software

Gates

the Jenkins projects that create and package artifacts

By default, CloudBees DevOptics creates three commonly used phases, "build", "test", and "release", with one unconfigured gate within each phase. All gates and phases can be changed in the visual editor or through a JSON representation of your value stream.

01 visual create

The "Untitled Gates" represent placeholder gates that can be instrumented to see tickets and commits as part of these gates and get devops metrics for a gate or the whole value stream.

Don’t forget to name the Value Stream in the top left corner so you can easily find it again.

What’s next?

Model your actual value stream with the visual editor or start instrumenting your value stream by tracking artifacts within your existing gates.

Model a Value Stream

In order to model a value stream, you need to configure the phases and gates so that value can be realized at the end. Think of your whole software delivery system, not just individual jobs or pipelines.

Value Stream Visual Editor

To start editing a value stream, click the three dots icon in the top right of the screen.

Click Edit Value Stream to use the Visual Editor.

00 visual editor

The DevOptics visual editor lets you model the different phases and gates of your value stream.

Phases

the milestones or stages used to deliver the software

Gates

the Jenkins pipeline that is run to move the code change forward (e.g. build, test, deploy)

Once modeled, you can instrument the Jenkins pipelines so DevOptics can track tickets and commits flowing through the value stream end to end.

Manage Phases

Phases represent the large milestones or stages needed to deliver the software.

Create new phases

You can create as many phases as you need. To do so, hover your mouse between two existing phases, or at the outer edge of the first or last phase; a blue dot appears and turns into a '+' sign. Click the '+' sign to add a new phase at that position.

05 visual editor add new phase

Make sure to give the newly created phase a meaningful name.

Edit phases

To edit the name of a phase, click the phase header and adjust the name. Make sure to hit save on your value stream in order to persist the changes.

05 visual editor edit phase name

Delete phases

You can only delete phases that do not have any gates within them. If you want to delete an empty phase, click on the phase header, and then click the delete icon on the right. Click save on the value stream to save your changes.

05 visual editor delete phase

Manage Gates

Gates represent the processes that are part of your software delivery system. They define the Jenkins pipelines that create and package artifacts, and they surface important metrics on the efficiency of each gate.

A gate requires an existing phase. Within a single connected stream there can be only one gate per phase; parallel sub-streams can add further gates to the same phase.

Create new connected gates

New gates can be created as a new connection from an existing gate. Hovering over an existing gate shows possible new connections that can be made.

From there, click a connection endpoint and drag it into an empty phase, onto the drop target (dashed circle), until you see "Click to add gate" below the plus icon. Clicking the drop target in the empty phase creates a new gate at that position.

Note
Keep in mind that only a downstream gate (gate to the right) can create a connected gate to the left. Only the last gate can create new gates downstream.

05 visual editor create new gate

Make sure to name the gate and save the changes. In order to see work flow through them, newly created gates need to be configured and the connected Jenkins pipelines instrumented; see below.

Create unconnected gates or sub-streams

To create gates that are not connected to existing gates, click the plus sign (+) below the name of the phase in which you want to add an unconnected gate. From there you can now create new gates downstream and upstream of this initial gate and model independent gates or sub-streams.

Add unconnected gate

This is important when modeling a microservices system where you want to see all your services in one place, to understand where tickets and features are across all the services that deliver software to the end user. See the microservice value stream template below.

Note
This can also be easily modeled with the JSON editor.
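
For example, a minimal sketch of a single phase containing two unconnected gates, each starting its own sub-stream (the names and IDs here are illustrative, not prescribed; see the JSON editor section below for the full schema):

{
  "id": "build",
  "name": "Build",
  "gates": [
    {
      "id": "service_a_build",
      "name": "Service A - Build",
      "master": "",
      "job": ""
    },
    {
      "id": "service_b_build",
      "name": "Service B - Build",
      "master": "",
      "job": ""
    }
  ]
}
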
Configure gates

Gates can be created to model the current software delivery system. In order to see work flow through the value stream, these gates need to be configured and connected to Jenkins jobs. To configure a gate, click the gate, then click the settings item that appears next to it.

05 visual editor gate settings

Once the gate configuration screen opens, establish the connection to the corresponding Jenkins job. DevOptics auto-completes existing connected masters and the jobs on those masters.

Note
If the master does not appear in the list, make sure the plugin is installed on that master and that it is connected properly.

Including a gate in the deployment frequency of a value stream:

If the gate is a deployment gate, meaning you want it to be part of the deployment frequency computation of the value stream, make sure to check 'This is a deployment job'.

05 visual editor gate configuration

Note
Additionally, the Jenkins pipeline associated with that job needs to properly track the artifacts it consumes or produces, so DevOptics can track the tickets and commits. See the artifact tracking section on how to instrument Jenkins pipelines.
Delete gates

To delete a gate, ensure you are in editing mode and click the gate that you want to delete. This shows the settings and deletion icons.

A gate can be deleted directly from the deletion icon or from within the gate configuration; it is deleted after confirmation.

Note
Make sure to save the changes on the value stream.

Manage connections between gates

Create new connection

A connection is usually created when creating a new gate. However, connections can also be created between existing downstream and upstream gates.

To connect existing gates, click and drag a connection between the two gates.

05 visual editor new connection

Delete existing connection

In editor mode, click an existing connection. It is shown as a solid line with red scissors. Clicking the red scissors removes the connection between the gates.

05 visual editor cut connection

Persist changes to value stream

Changes in the Visual Editor are not saved until you press Save in the top right of the screen.

Pressing Cancel will undo any changes you have made and revert the Value Stream to its previously saved state.

06 visual editor

Value Stream JSON Editor

Value streams can also be defined as JSON documents. To enter JSON editing mode, click the three dots menu on the top right of your value stream and select 'Edit JSON'.

The JSON representation makes it easy to share templates and scaffolds, or to insert generated value streams based on your software delivery system.

A value stream definition requires a list of phases. Each phase can have multiple gates.

{
  "phases": [
    {
      "id": "<custom_id_of_phase>",
      "name": "<name_of_phase>",
      "gates": [
        {
          ...
        }
      ]
    }
  ]
}

Defining a phase:

{
  "id": "<custom_id_of_phase>",
  "name": "<name_of_phase>",
  "gates": [<gate>]
}
id

Identifier for that phase.

name

(Optional) The name of the phase.

gates

(Optional) List of gates within that phase.

Defining a gate:

{
  "id": "<custom_id_of_gate>",
  "name": "<name_of_gate>",
  "master": "<master_connected_to_gate>",
  "job": "<job_connected_to_gate>",
  "feeds": "<id_of_gate_this_gate_feeds_into>",
  "type": "<deployment_gate>"
}
id

Identifier for that gate.

name

(Optional) The name of the gate.

master

(Optional) Master that connects to this gate. (required to see tickets and commits within the gate)

job

(Optional) Job within master that connects to this gate. (required to see tickets and commits within the gate)

feeds

(Optional) ID of the gate this gate feeds into. (Not needed for the right-most gate.)

type

(Optional) Set type to deployment if this gate represents a deployment job.

See below for a simple example:

{
  "phases": [
    {
      "id": "phase1",
      "name": "Build",
      "gates": [
        {
          "id": "gate1",
          "name": "Untitled Gate",
          "master": "",
          "job": "",
          "feeds": "gate2"
        }
      ]
    },
    {
      "id": "phase2",
      "name": "Test",
      "gates": [
        {
          "id": "gate2",
          "name": "Integration Tests",
          "master": "",
          "job": "",
          "feeds": "gate3"
        }
      ]
    },
    {
      "id": "phase3",
      "name": "Release",
      "gates": [
        {
          "id": "gate3",
          "name": "Untitled Gate",
          "master": "",
          "job": "",
          "type": "deployment"
        }
      ]
    }
  ]
}

Value Stream Templates

Template: Large monolithic system

The software delivery system of a large and complex application usually contains many different components that need to go through rigorous testing and security checks before the release can be built and deployed. Value stream modeling visualizes the dependencies in these processes and surfaces the tickets and commits within the software delivery pipeline. That enables you to see bottlenecks and blockers early and act quickly to remove them and improve the overall system.

DevOptics lets you map all the dependencies of your software delivery processes from build to production.

05 visual editor use case large app

Here is a JSON representation of the above value stream template. Copy and paste it into the JSON editor of your value stream to get started.

{
  "phases": [
    {
      "id": "dev",
      "name": "Dev (Build/Test)",
      "gates": [
        {
          "id": "component_a",
          "name": "Component A",
          "master": "",
          "job": "",
          "feeds": "component_test_a"
        },
        {
          "id": "component_b",
          "name": "Component B",
          "master": "",
          "job": "",
          "feeds": "component_test_b"
        },
        {
          "id": "component_c",
          "name": "Component C",
          "master": " ",
          "job": "",
          "feeds": "component_test_c"
        },
        {
          "id": "component_d",
          "name": "Component D",
          "master": "",
          "job": "",
          "feeds": "component_test_d"
        }
      ]
    },
    {
      "id": "component_tests",
      "name": "Component Tests",
      "gates": [
        {
          "id": "component_test_a",
          "name": "Component A",
          "master": "",
          "job": "",
          "feeds": "integration"
        },
        {
          "id": "component_test_b",
          "name": "Component B",
          "master": "",
          "job": "",
          "feeds": "integration"
        },
        {
          "id": "component_test_c",
          "name": "Component C",
          "master": "",
          "job": "",
          "feeds": "integration"
        },
        {
          "id": "component_test_d",
          "name": "Component D",
          "master": "",
          "job": "",
          "feeds": "integration"
        }
      ]
    },
    {
      "id": "system_integration",
      "name": "system Integration",
      "gates": [
        {
          "id": "integration",
          "name": "Integration",
          "master": "",
          "job": "",
          "feeds": "integration_tests"
        }
      ]
    },
    {
      "id": "system_tests",
      "name": "System Tests",
      "gates": [
        {
          "id": "integration_tests",
          "name": "Integration Tests",
          "master": "",
          "job": "",
          "feeds": "staging_deploy"
        }
      ]
    },
    {
      "id": "staging",
      "name": "Staging",
      "gates": [
        {
          "id": "staging_deploy",
          "name": "Staging",
          "master": "",
          "job": "",
          "feeds": "production_deploy"
        }
      ]
    },
    {
      "id": "release-promotion",
      "name": "Production",
      "gates": [
        {
          "id": "production_deploy",
          "name": "Release",
          "master": "",
          "job": "",
          "type": "deployment"
        }
      ]
    }
  ]
}

Template: Microservice system

When delivering your application through multiple loosely coupled microservices, the delivery process of each service becomes simpler, but the overall system becomes more complex. It is important to understand how and where these services deliver features, and whether there are blockers and bottlenecks.

DevOptics lets you map sub-streams of your overall value streams with multiple endpoints and visualize everything in one value stream.

05 visual editor use case microservices

Here is a JSON representation of the above value stream template. Copy and paste it into the JSON editor of your value stream to get started.

{
  "phases": [
    {
      "name": "Build Services",
      "id": "build_services",
      "gates": [
        {
          "id": "service_a_build",
          "name": "Service A - Build",
          "master": "",
          "job": "",
          "feeds": "service_a_test"
        },
        {
          "id": "service_b_build",
          "name": "Service B - Build",
          "master": "",
          "job": "",
          "feeds": "service_b_test"
        },
        {
          "id": "service_c_build",
          "name": "Service C - Build",
          "master": "",
          "job": "",
          "feeds": "service_c_test"
        }
      ]
    },
    {
      "id": "tests",
      "name": "Tests",
      "gates": [
        {
          "id": "service_a_test",
          "name": "Service A - Test",
          "master": "",
          "job": "",
          "feeds": "service_a_staging"
        },
        {
          "id": "service_b_test",
          "name": "Service B - Test",
          "master": "",
          "job": "",
          "feeds": "service_b_staging"
        },
        {
          "id": "service_c_test",
          "name": "Service C - Test",
          "master": "",
          "job": "",
          "feeds": "service_c_staging"
        }
      ]
    },
    {
      "id": "staging_deploy",
      "name": "Staging Deploy",
      "gates": [
        {
          "id": "service_a_staging",
          "name": "Service A - Staging Deploy",
          "master": "",
          "job": "",
          "feeds": "service_a_verification"
        },
        {
          "id": "service_b_staging",
          "name": "Service B - Staging Deploy",
          "master": "",
          "job": "",
          "feeds": "service_b_verification"
        },
        {
          "id": "service_c_staging",
          "name": "Service C - Staging Deploy",
          "master": "",
          "job": "",
          "feeds": "service_c_verification"
        }
      ]
    },
    {
      "id": "verification",
      "name": "Verification",
      "gates": [
        {
          "id": "service_a_verification",
          "name": "Service A - Staging Verification",
          "master": "",
          "job": "",
          "feeds": "service_a_prod"
        },
        {
          "id": "service_b_verification",
          "name": "Service B - Staging Verification",
          "master": "",
          "job": "",
          "feeds": "service_b_prod"
        },
        {
          "id": "service_c_verification",
          "name": "Service C - Staging Verification",
          "master": "",
          "job": "",
          "feeds": "service_c_prod"
        }
      ]
    },
    {
      "name": "Production Deploy",
      "id": "production_deploy",
      "gates": [
        {
          "id": "service_a_prod",
          "name": "Service A - Production Deploy",
          "master": "",
          "job": "",
          "feeds": null,
          "type": "deployment"
        },
        {
          "id": "service_b_prod",
          "name": "Service B - Production Deploy",
          "master": "",
          "job": "",
          "feeds": null,
          "type": "deployment"
        },
        {
          "id": "service_c_prod",
          "name": "Service C - Production Deploy",
          "master": "",
          "job": "",
          "feeds": null,
          "type": "deployment"
        }
      ]
    }
  ]
}

View tickets and commits

CloudBees DevOptics combines Git commit messages, Jira® tickets, and project results in the Value Stream view. Git commit message text refers to Jira® tickets by ticket ID. For example, a commit message referring to Jira® ticket EXAMPLE-12345 must include EXAMPLE-12345 in the text. Multiple Jira® tickets may be referenced in a single Git commit.

Using the Git repository created earlier, write a commit message that references a Jira® ticket (for example, EXAMPLE-12345), then commit and push the change so that the ticket reference is detected in the commit message.
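
For example, assuming the ticket ID EXAMPLE-12345 and a branch named master (both placeholders for your own ticket and branch):

git commit -m "EXAMPLE-12345 Handle login timeout errors"
git push origin master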

Return to the gate job page and click Build Now to start a build.

47 Build Now

The Value Stream view is updated to show the ticket progress. Click on the Build Gate to open the ticket panel on the right. The ticket panel shows a summary of ticket status and gate job status. Click the ticket in the ticket panel to see more details.

48 see completed gate and ticket

The ticket panel detail view shows:

  • Description of the ticket (expandable if the description is large)

  • Gate status (success or failure)

  • Number of commits referencing this ticket

  • Summary of each commit

The Find commits field in the ticket panel detail view filters commits based on the text you enter.

49 ticket details and commit

Unticketed commits

CloudBees DevOptics surfaces Jira® tickets as they move through the value stream. Tickets are the main work artifact in CloudBees DevOptics. However, not all organizations follow strict processes to associate commits with tickets. To help you understand where all the work that moves through the software development lifecycle is located, CloudBees DevOptics also surfaces commits that are not associated with tickets in gates and in value streams.

Commits that are not associated with a ticket are aggregated and shown within a run so that you don’t lose track of work that was processed.

Unticketed commits overview

Unticketed commits details

View the Value Stream

CloudBees DevOptics shows progress of changes through the software delivery process. When a commit is built that references a Jira® ticket, that ticket is included in the Value Stream view for the gate associated with that project. When a build succeeds that includes an artifact from a preceding gate, tickets from the preceding gate move to the successful gate.

Gate Status

The condition of the gate is indicated by:

  • Number of tickets currently in that gate

  • Solid green for success

  • Pulsing blue for running

  • Red for failed

How to identify waste and low performance in a value stream

Gate waste insights provide a way for you to visualize waste and low performance in a value stream. This lets you compare performance metrics and identify where to focus your attention to improve your software delivery process.

When you select a value stream, the Gate waste insights section of the Value Stream Overview side panel lets you toggle between failing gates and slow gates. The Failing gates view shows the five gates that have the highest total failure time during the selected time period. The total failure time shows the total time spent queueing and processing runs that failed. The Slow gates view shows the five gates that have the highest total lead times during the selected time period. Available time periods include 24 hours, 48 hours, and 7, 14, 30, and 90 days. To view the details for a gate in the list, click the name of the gate.

failing gates

slow gates

Gate Job

The job associated with a gate can be opened by clicking the "View gate job" drop-down in the gate status pane.

Each gate is associated with a Jenkins project.

  • View the associated Jenkins project by clicking the gate in the Value Stream view.

  • When the "Build Gate" panel appears, click the three dots in the right hand pane.

  • Click View gate job to open the project associated with this gate in a new web browser tab.

42 view gate job

The gate project is visible with its artifacts and build history.

43 gate job view

Ticket Status

Clicking a gate shows the tickets for that gate. Tickets can be opened by clicking the ticket in the gate status pane.

The Value Stream view time period can be adjusted from the drop-down menu in the top right of the page. Multiple time periods are available, including:

  • 24 hours

  • 48 hours

  • 7 days

  • 14 days

  • 30 days

  • 90 days

00 delivery view overview

Instrument a Value Stream to track tickets and commits

DevOptics Deliver tracks “value” moving between upstream and downstream gates in a Value Stream. DevOptics tracks “value” between gates in two ways:

  1. By explicitly tracking and matching SCM commits checked out by the upstream and downstream gates. For example, both gates check out different branches of the same SCM repository and see the same commits as the code is merged across branches. An example of this is GitFlow.

  2. By tracking artifacts produced upstream and consumed downstream (by unique ID). We call this “Artifact Tracking”.

Artifact Tracking is the subject of this section and sub-sections.

What is an artifact?

An artifact is typically anything "produced" or "consumed" by a run of a CI/CD job, for example:

  • File-based artifacts, such as .jar, .exe, .sh, etc.

  • Docker images.

  • Amazon Machine Images (AMI).

  • VMWare images.

  • Installable packages, such as RPM and Debian packages.

  • NPM packages.

  • A shared "fact" instead of a concrete artifact such as a file or an image. A shared fact does not fall into any of the categories listed above; it is any unique identifier/key that both the upstream and downstream gates can easily create/resolve, allowing CloudBees DevOptics to link the appropriate runs of each job and promote value between the two gates.

These fall into two general categories from a produces/consumes perspective:

  1. Artifacts that can be referenced using a file path.

  2. Artifacts that can’t be referenced using a file path, for example a Docker image.

Tracking the production/consumption of artifacts

Warning
Use of Jenkins fingerprinting for artifact tracking in DevOptics is deprecated. Support for it will soon be withdrawn. Instead, please use the Value Stream Artifact Tracking features described in this section.

In order to do artifact tracking, the upstream Jenkins job run needs to “tell” Deliver that it has produced a specific artifact (by unique ID), while the downstream Jenkins job run needs to tell Deliver that it has consumed a specific artifact (by unique ID). Once Deliver has the unique ID produced/consumed, it can determine what “value” (SCM commits/tickets delivered to the upstream gate by the upstream run that produced the artifact) can be “promoted” to the downstream gate via the Job run that consumed the artifact.

So as you can see, the important part of artifact tracking is the unique ID:

  1. The run that produces the artifact needs to derive a unique ID for the artifact and use that ID when “telling” Deliver that it produced the artifact.

  2. The run that consumes the artifact needs to derive a unique ID for the artifact and use that ID when “telling” Deliver that it consumed the artifact.

The most important point here is that the runs producing and consuming the artifacts need to have a scheme/mechanism that generates the same unique ID. The possibilities here depend largely on whether the artifacts can be referenced by file paths, or not (see What is an artifact?):

  • File-based artifacts: Generating a unique ID for a file-based artifact should be a simple process of computing a checksum of the file contents. For example, the CloudBees DevOptics Job hooks for Jenkins Pipeline and Freestyle job types use SHA-512 (see the sketch after this list). More on this in later sections.

  • Non-file based artifacts: Generating a unique ID for a non-file based artifact requires more consideration. See Unique Artifact ID construction.
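
For illustration, here is a minimal sketch of how such a checksum-based ID could be derived manually in a scripted pipeline, assuming a Unix agent with sha512sum on the PATH. The file: forms of gateProducesArtifact/gateConsumesArtifact compute the hash for you, so this is rarely needed in practice:

// Minimal sketch: derive a file artifact ID by hashing the file contents.
// The built-in 'file:' form of gateProducesArtifact does this automatically.
node {
    def id = sh(script: "sha512sum application.sh | cut -d' ' -f1", returnStdout: true).trim()
    gateProducesArtifact id: id, type: 'file', label: "application.sh:${env.BUILD_ID}"
}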

DevOptics pipeline steps

The CloudBees DevOptics pipeline steps are an extension to the Jenkins Pipeline. They allow a Jenkins pipeline to explicitly declare that it produces artifacts, or consumes the changes made by other Jenkins jobs.

DevOptics gateProducesArtifact step

The CloudBees DevOptics gateProducesArtifact step is a pipeline step that allows a Jenkins pipeline to explicitly declare that it "produces" an artifact that can be "consumed" by a downstream Job (gate) e.g. via the gateConsumesArtifact pipeline step (for pipeline Jobs), or via the Freestyle Job build step.

This step allows your pipeline to explicitly define the artifacts it produces that you are interested in for CloudBees DevOptics. Explicitly defining the artifacts you want CloudBees DevOptics to track allows you to more accurately follow work as it moves across your Value Streams.

Produce a specific artifact with a known ID

Use the step in this form as follows:

gateProducesArtifact id: '<id>', type: '<type>', label: '<label>'
id

The ID you have assigned to the artifact you want to produce. This ID should match the ID used in a gateConsumesArtifact step call. This ID can be whatever identification scheme you use for artifacts. The only requirement is that the ID is unique within the context of your CloudBees DevOptics Organization. See Unique Artifact ID construction.

type

The type of artifact you are producing. Common values are file, docker, rpm. This type value should match the type value in a gateConsumesArtifact step call. This type can be whatever name you use for classifying artifact types.

label

(Optional) A readable label, providing contextual information about the artifact produced. This label should be human readable as it will be used in the DevOptics UI.
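
For example, a run that builds and pushes a Docker image might declare it as follows, inside its pipeline script. The ID scheme here (borrowed from the Unique Artifact ID construction section below) is illustrative, not prescribed:

// Declare a produced Docker image, identified by a project-specific ID scheme.
gateProducesArtifact id: "acme-app-${env.BUILD_ID}", type: 'docker', label: "acme-app build ${env.BUILD_ID}"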

Produce a specific file artifact

In order to notify CloudBees DevOptics that this run produces a file:

gateProducesArtifact file: '<file>', type: '<type>', label: '<label>'
file

The file within the workspace that you want to notify CloudBees DevOptics about. This will hash the file to produce an ID.

type

(Optional) The type of artifact you are producing. Common values are file, docker, rpm. This type value should match the type value in a gateConsumesArtifact step call. This type can be whatever name you use for classifying artifact types. If not defined, it defaults to file.

label

(Optional) A readable label, providing contextual information about the artifact produced. This label should be human readable as it will be used in the DevOptics UI.

Example

Here is an example Jenkinsfile scripted pipeline that produces a plugin-a.txt and notifies CloudBees DevOptics about it.

// Jenkinsfile scripted pipeline
node {
    stage ('checkout') {
        checkout scm
    }
    stage ('build') {
        // Creates a file called plugin-a.txt. Using git rev-parse HEAD
        // here because it will generate a new artifact when the HEAD ref
        // commit changes. You could also just echo a timestamp, or something else.
        sh "git rev-parse HEAD > plugin-a.txt"

        // Records plugin-a.txt as a produced artifact.
        archiveArtifacts artifacts: 'plugin-a.txt'
    }
    stage ('produce') {
        // Notify DevOptics that this run produced plugin-a.txt.
        gateProducesArtifact file: 'plugin-a.txt'
    }
}
DevOptics gateConsumesArtifact step

The CloudBees DevOptics gateConsumesArtifact step is a pipeline extension that allows a Jenkins pipeline to explicitly declare that it consumes artifacts that have been marked as produced by the gateProducesArtifact step.

This step allows your pipeline to explicitly define the artifacts it consumes that you are interested in for CloudBees DevOptics. Explicitly defining the artifacts you want CloudBees DevOptics to track allows you to more accurately follow work as it moves across your Value Streams.

Consume a specific artifact with a known ID

Use the step in this form as follows:

gateConsumesArtifact id: '<id>', type: '<type>'
id

The ID you have assigned to the artifact you want to consume. This ID should match the ID used in a gateProducesArtifact step. This ID can be whatever identification scheme you use for artifacts. The only requirement is that the ID is unique within the context of your CloudBees DevOptics Organization. See Unique Artifact ID construction.

type

The type of artifact you are consuming. Common values are file, docker, rpm. This type value should match the type value in a gateProducesArtifact step call. This type can be whatever name you use for classifying artifact types.
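
For example, a downstream deploy job could declare that it consumed the Docker image produced in the earlier gateProducesArtifact example. Here the upstream build ID is assumed to arrive as a build parameter; UPSTREAM_BUILD_ID is a hypothetical parameter name:

// Reconstruct the same ID the upstream producer used, then declare consumption.
// UPSTREAM_BUILD_ID is a hypothetical build parameter passed by the upstream job.
gateConsumesArtifact id: "acme-app-${params.UPSTREAM_BUILD_ID}", type: 'docker'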

Consume a specific file artifact

In order to consume a file within the workspace:

gateConsumesArtifact file: '<file>', type: '<type>'
file

The file within the workspace you want to consume. This will hash the file to produce an ID.

type

(Optional) The type of artifact you are consuming. Common values are file, docker, rpm. This type value should match the type value in a gateProducesArtifact step call. This type can be whatever name you use for classifying artifact types. If not defined, it defaults to file.

Example

Here is an example Jenkinsfile scripted pipeline that consumes a plugin-a.txt artifact, notifies CloudBees DevOptics about it, then produces a plugin-b.txt artifact and notifies CloudBees DevOptics about it.

// Jenkinsfile scripted pipeline
node {
    stage ('checkout') {
        checkout scm
    }
    stage ('build') {
        // Copies the artifacts of plugin-a/master (plugin-a.txt) in to this workspace.
        copyArtifacts projectName: 'plugin-a/master'

        // Notify DevOptics that this run consumed plugin-a.txt.
        gateConsumesArtifact file: 'plugin-a.txt'

        // Creates a file called plugin-b.txt. Using git rev-parse HEAD
        // here because it will generate a new artifact when the HEAD ref
        // commit changes. You could also just echo a timestamp, or something else.
        sh "git rev-parse HEAD > plugin-b.txt"

        // Records plugin-b.txt as a produced artifact.
        archiveArtifacts artifacts: 'plugin-b.txt'
    }
    stage ('produce') {
        // Notify DevOptics that this run produced plugin-b.txt.
        gateProducesArtifact file: 'plugin-b.txt'
    }
}
DevOptics gateConsumesRun step

This step is a CloudBees DevOptics pipeline extension that allows a Jenkins pipeline to explicitly declare that it consumes the changes (commits and Issue Tracker tickets) made by another Jenkins Job upstream from it in a CD pipeline process.

Before using this step, consider using the gateConsumesArtifact and gateProducesArtifact steps to track artifacts instead. This step is intended for use in those edge cases where artifact tracking is not easy/possible.

This step allows your pipeline to explicitly define a run of an upstream Jenkins job via the job name, run ID and master URL.

Consume a specific upstream job run

Use the step in this form as follows:

gateConsumesRun masterUrl: '<master-url>', jobName: '<job-name>', runId: '<run-id>'
masterUrl

(Optional) The exact URL of the Jenkins master hosting the upstream job. The same URL used on the upstream gate configuration in the CloudBees DevOptics Application. If not defined, the URL will default to the URL of the master running the pipeline i.e. it assumes the upstream job is on the same Jenkins master.

jobName

The exact name of the upstream job you want to consume from.

runId

(Optional) The ID of the upstream job run to be consumed. This can come from a job parameter or from a job trigger. If not defined, it defaults to the runId of the last successful run of the upstream job.
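
For example, a minimal sketch that consumes run 42 of the application-build job on the same master (so masterUrl is omitted); the job name and run ID are illustrative:

// Declare that this run consumes a specific upstream run on the same master.
gateConsumesRun jobName: 'application-build', runId: '42'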

withMaven pipeline step

The CloudBees DevOptics plugin includes an integration with the Pipeline Maven plugin. This integration allows the CloudBees DevOptics plugin to automatically notify CloudBees DevOptics about the dependencies used and the artifacts produced by a Maven build, which is executed from within a withMaven pipeline step.

Important
To use this feature, Jenkins will need to have both the CloudBees DevOptics plugin and the Pipeline Maven plugin installed.
Example

Consider the following pom files. There is a plugin-a and a plugin-b. plugin-b uses plugin-a as a dependency:

<project xmlns="http://maven.apache.org/POM/4.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

	<modelVersion>4.0.0</modelVersion>
	<groupId>com.cloudbees.devoptics</groupId>
	<artifactId>plugin-a</artifactId>
	<packaging>jar</packaging>
	<version>1.0-SNAPSHOT</version>
	<name>plugin-a</name>

</project>
<project xmlns="http://maven.apache.org/POM/4.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

	<modelVersion>4.0.0</modelVersion>
	<groupId>com.cloudbees.devoptics</groupId>
	<artifactId>plugin-b</artifactId>
	<packaging>jar</packaging>
	<version>1.0-SNAPSHOT</version>
	<name>plugin-b</name>

	<dependencies>
		<dependency>
			<groupId>com.cloudbees.devoptics</groupId>
			<artifactId>plugin-a</artifactId>
			<version>1.0-SNAPSHOT</version>
		</dependency>
	</dependencies>

</project>

Plugin A can have a Jenkinsfile scripted pipeline like the following. Notice that no explicit calls to CloudBees DevOptics are needed:

// Plugin A Jenkinsfile scripted pipeline
node {
    stage ('checkout') {
        checkout scm
    }
    stage ('build') {
        withMaven() {
            sh "mvn clean install"
        }
    }
}

Running this will result in CloudBees DevOptics being notified about 2 events:

  1. plugin-a pom file as a produced artifact.

  2. plugin-a jar file as a produced artifact.

Plugin B can also have a Jenkinsfile scripted pipeline like the following. Again, notice that no explicit calls to CloudBees DevOptics are needed:

// Plugin B Jenkinsfile scripted pipeline
node {
    stage ('checkout') {
        checkout scm
    }
    stage ('build') {
        withMaven() {
            sh "mvn clean install"
        }
    }
}

This will result in CloudBees DevOptics being notified about 3 events:

  1. plugin-a jar file as a consumed artifact.

  2. plugin-b pom file as a produced artifact.

  3. plugin-b jar file as a produced artifact.

Disabling the integration

The integration with the Pipeline Maven plugin can be disabled in 2 ways:

  1. Disable the integration just for a specific withMaven pipeline step:

    // Pipeline Maven plugin integration disabled
    node {
        stage ('checkout') {
            checkout scm
        }
        stage ('build') {
            withMaven(options: [ gateArtifactPublisher(disabled: true) ]) {
                sh "mvn clean install"
            }
        }
    }
  2. Disable the integration globally by going to Jenkins > Manage Jenkins > Global Tool Configuration > Pipeline Maven Configuration > Options.

    If the CloudBees DevOptics Gate Artifact Publisher is already listed, tick the Disabled tickbox. If it is not already listed, first add it using the Add Publisher Options dropdown, then tick the Disabled tickbox.

    67 cloudbees publisher disable

Freestyle job build steps

These CloudBees DevOptics build steps allow a Freestyle job to explicitly declare that it produces artifacts, or consumes the changes made by other Jenkins jobs.

Freestyle job build step

The CloudBees DevOptics build step for Freestyle jobs is called Inform DevOptics of consumed artifact.

This step is a CloudBees DevOptics extension that allows a Jenkins job to explicitly declare that it consumes artifacts that have been marked as produced by the gateProducesArtifact step.

This step allows your job to explicitly define the artifacts it consumes that you are interested in for CloudBees DevOptics. Explicitly defining the artifacts you want CloudBees DevOptics to track allows you to more accurately follow work as it moves across your Value Streams.

68 consumed artifact build step

69 consumed artifact build step

Consume a specific artifact with a known ID

Using the step in this form requires filling out the following fields:

id

The ID you have assigned to the artifact you want to consume. This ID should match the ID used in a gateProducesArtifact step call. This ID can be whatever identification scheme you use for artifacts. The only requirement is that the ID is unique within the context of your CloudBees DevOptics Organization.

type

The type of artifact you are consuming. Common values are file, docker, rpm. This type value should match the type value in a gateProducesArtifact step call. This type can be whatever name you use for classifying artifact types.

Consume a specific file artifact

In order to consume a file within the workspace:

file

The file within the workspace you want to consume. This will hash the file to produce an ID. Note that the artifact must be in the workspace for this to work, i.e. the job may need to "get" the artifact first, e.g. by copying it from another job run or pulling it from an artifact repository.

type

(Optional) The type of artifact you are consuming. Common values are file, docker, rpm. This type value should match the type value in a gateProducesArtifact step call. This type can be whatever name you use for classifying artifact types. If not defined, it defaults to file.

Freestyle job post-build action

The CloudBees DevOptics post-build action for Freestyle jobs is called Inform DevOptics of produced artifact.

This step is a CloudBees DevOptics extension that allows a Jenkins job to explicitly declare that it produces artifacts that can be consumed by the gateConsumesArtifact step.

This step allows your job to explicitly define what artifacts it is producing that you are interested in for CloudBees DevOptics. Explicitly defining the artifacts you want CloudBees DevOptics to track allows you to more accurately follow work as it moves across your Value Streams.

70 produced artifact post build action

71 produced artifact post build action

Produce a specific artifact with a known ID

Using the step in this form requires filling out the following fields:

id

The ID you have assigned to the artifact you are producing. This ID should match the ID used in a gateConsumesArtifact step call. This ID can be whatever identification scheme you use for artifacts. The only requirement is that the ID is unique within the context of your CloudBees DevOptics Organization.

type

The type of artifact you are producing. Common values are file, docker, rpm. This type value should match the type value in a gateConsumesArtifact step call. This type can be whatever name you use for classifying artifact types.

label

(Optional) A readable label, providing contextual information about the artifact produced. This label should be human readable as it will be used in the DevOptics UI.

Produce a specific file artifact

In order to notify CloudBees DevOptics that this run produces a file:

file

The file within the workspace that you want to notify CloudBees DevOptics about. This will hash the file to produce an ID.

type

(Optional) The type of artifact you are producing. Common values are file, docker, rpm. This type value should match the type value in a gateConsumesArtifact step call. This type can be whatever name you use for classifying artifact types. If not defined, it defaults to file.

label

(Optional) A readable label, providing contextual information about the artifact produced. This label should be human readable as it will be used in the DevOptics UI.

Unique Artifact ID construction

When producing/consuming an artifact that can't be "referenced" using a file path (see What is an artifact?), you need to supply an id.

gateProducesArtifact type: '<type>', id: '<id>'

You can use the image ID, for example a Docker image ID, as the unique ID. Or, you can construct an ID based on a run environment variable or a build parameter, for example:

gateProducesArtifact type: 'docker', id: "acme-app-${env.BUILD_ID}"

In the purest sense, using the image ID is the most "correct" thing to do because of the lower risk of creating an id that clashes with an earlier ID.

However, using image IDs can also be troublesome/error-prone if you don’t use them consistently between the producer and the consumer. Getting the image ID can involve adding obscure code to your Jenkinsfile to execute commands to resolve the image ID, for example:

docker images --no-trunc --format='{{.ID}}' acme-image:latest

This code can very easily be executed inconsistently (different switches, etc.) on the produces and consumes sides. This results in inconsistent IDs, and therefore the inability of DevOptics to track value.

If possible, use a scheme whereby you only execute "cryptic" id resolution commands in the upstream producer and then pass the id to the downstream consumer via a mechanism that allows the consumer to get that id at low risk (no cryptic commands that can be executed inconsistently), for example by passing it as a build parameter.

For example, in the upstream producer Jenkinsfile:

// Get the Docker image ID for "acme-image:latest" from the local Docker daemon.
def digest = sh(script: "docker images --no-trunc --format='{{.ID}}' acme-image:latest", returnStdout: true).trim()
def imageId = digest.split(":")[1]

// Tell Deliver that this job produced the image ...
gateProducesArtifact type: 'docker', id: imageId, label: "acme-image:${imageId}"

// Trigger the deploy job (downstream consumer), passing the imageId as a build parameter.
build job: 'acme-deploy-job', parameters: [string(name: 'imageId', value: imageId)]

And then, in the downstream consumer Jenkinsfile (acme-deploy-job):

// Tell Deliver that this job consumed the image. The imageId arrives
// as a build parameter passed by the upstream producer.
gateConsumesArtifact type: 'docker', id: params.imageId

The key point to note here is that all obscurity is in the upstream producer Jenkinsfile and none in the downstream consumer, reducing the risk of using inconsistent IDs upstream versus downstream.

You also have the option of constructing an ID based on a run environment variable or build parameter, for example:

def imageId = "acme-app-${env.BUILD_ID}"

// Tell Deliver that this job produced the image ...
gateProducesArtifact type: 'docker', id: imageId

// Trigger the deploy job (downstream consumer), passing the imageId as a build parameter.
build job: 'acme-deploy-job', parameters: [string(name: 'imageId', value: imageId)]

If you can guarantee that the IDs produced in such an environment will be unique for every build (Jenkins build numbers not reset), then this can be an easier and more practical solution.

In conclusion, you have two choices, and there are trade-offs with each approach: ease of use versus perceived purity.

DevOps Performance metrics

CloudBees DevOptics calculates and displays a set of key metrics, as popularized in the Annual State of DevOps Report. The State of DevOps report is the industry guide on CD and DevOps adoption and its correlation to improved organizational performance.

These DevOps performance metrics allow you to objectively and reliably measure and monitor improvements to your software delivery capability.

The availability of these metrics within CloudBees DevOptics brings several important benefits:

  • Continuously updated data to drive improvement across teams

  • Trustworthy dashboards to guide informed decisions for better business outcomes

  • Data-driven discovery and use of best practices across teams

The following metrics are available for both value streams and gates: Deployment frequency (DF), Mean lead time (MLT), Mean time to recovery (MTTR), and Change failure rate (CFR). Metrics are calculated on a continual basis for value streams defined in CloudBees DevOptics.

Additionally, the following metrics are available for gates only: Mean queue time, Mean processing time, and Mean idle time.

You can select one of the following time periods to view metrics:

  • last 24 hours - this is the default

  • last 48 hours

  • last 7 days

  • last 14 days

  • last 30 days

  • last 90 days

Deployment frequency (DF)

This metric shows the frequency of successful runs of any gates identified as deploy gates in the value stream definition. Where multiple deploy gates exist in a value stream, the value stream metric is an aggregation.

Note that high performers deploy more often.

A gate has to be marked as a Deploy Gate before the deployment frequency can be calculated for that gate. You can achieve this by editing the gate and checking the option This is a deployment job.

Deploy Gate

Deployment frequency is computed as follows:

Deployment frequency (DF) of a deploy gate = Count of successful deploys / number of days.

Deployment frequency (DF) of a value stream = Count of successful deploys of all deploy gates / number of days.
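
For example (an illustrative calculation): a value stream whose deploy gates record 12 successful deploys over a 30-day period has a deployment frequency of 12 / 30 = 0.4 deploys per day.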

Mean lead time (MLT)

This metric shows the mean time for a ticket and associated commits to successfully flow through a gate. At the value stream level it is the mean time for a ticket and associated commits to successfully flow from their entry point in the value stream to their final gates.

Note that high performers have lower mean lead times.

Mean lead time is computed as follows:

For an individual gate, CloudBees DevOptics computes the lead time (LT) of commits in that gate. Lead time is computed as follows:

Lead time (LT) = Time when a commit exited the gate - Time when a commit entered the gate.

Mean lead time (MLT) of a value stream = Mean of all lead times in a value stream.
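
For example (an illustrative calculation): a commit that enters a gate at 09:00 and exits at 15:00 contributes a lead time of 6 hours; the mean lead time averages such values over the selected time period.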

Mean time to recovery (MTTR)

This metric shows the mean time it takes for a gate to return to a successful state from when it enters an unsuccessful state. It is also aggregated to the value stream level.

Note that high performers recover from failure faster.

Mean time to recovery is computed as follows:

For an individual gate, CloudBees DevOptics computes the time to recovery (TTR) of failures in that gate. TTR is computed as follows:

Time to recovery (TTR) = End time of the most recent successful build - End time of the first of a consecutive sequence of recent failures.
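
For example (an illustrative calculation): if a gate's first failing run ends at 10:00, a second failing run ends at 11:00, and the next successful run ends at 12:30, the TTR is 12:30 - 10:00 = 2.5 hours.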

A gate is considered in the failed state if the underlying job is in one of the following states:

  • Failure

  • Unstable

Mean time to recovery (MTTR) of a gate = Mean of all TTR’s of the gate.

Mean time to recovery (MTTR) of a value stream = Mean of MTTR’s of all gates in a value stream.

Change failure rate (CFR)

This metric shows the percentage of unsuccessful runs in a gate that are caused by new changes. It is also aggregated to the value stream level.

Note that high performers are less likely to introduce a failure with any change.

The change failure rate is computed as follows:

For an individual gate, CloudBees DevOptics computes the change failure rate as follows:

Change failure rate (CFR) = Total number of unsuccessful runs of a gate caused by new changes, as a percentage of the total number of runs of the gate.

Change failure rate (CFR) of a value stream = The total number of unsuccessful runs of the value stream as a percentage of the total number of runs of the value stream.
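
For example (an illustrative calculation): a gate with 50 runs in the selected time period, 5 of which were unsuccessful due to new changes, has a change failure rate of 5 / 50 = 10%.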

Mean queue time

This metric shows the amount of time commits spend in the queue before the job starts running.

Note that high performers have lower mean queue times.

This metric is available only at the gate level. It is not available at the value stream level.

The mean queue time is computed as follows:

For an individual gate, CloudBees DevOptics computes the queue time of commits in that gate. Queue time is computed as follows:

Queue time (QT) = Time when the commit started processing - Time when the commit entered the gate.

Mean queue time (MQT) = Mean of all queue times.

Mean processing time

This metric shows the amount of time it takes from when a job starts to when it completes successfully. The processing time can span multiple runs that fail until the job is part of a successful run.

Note that high performers have lower mean processing times.

This metric is available only at the gate level. It is not available at the value stream level.

The mean processing time is computed as follows:

For an individual gate, CloudBees DevOptics computes the process time of commits in that gate. Process time is computed as follows:

Process time (PT) = Time when the commit was processed successfully - Time when the commit started being processed.

Mean process time (MPT) = Mean of all process times.

Mean idle time

This metric shows the amount of time a successful commit spends in a gate after it is processed and before it is picked up by a downstream gate. If the gate is the last gate in a value stream, the idle time is zero.

Note that high performers have lower mean idle times.

This metric is available only at the gate level. It is not available at the value stream level.

The mean idle time is computed as follows:

For an individual gate, CloudBees DevOptics computes the idle time of commits in that gate. Idle time is computed as follows:

Idle time (IT) = Time when a commit passed to the next gate - Time when the commit was processed successfully.

Mean idle time (MIT) = Mean of all idle times.

Viewing metrics for a value stream

You can view metrics for all value streams from the Value Streams screen. Or, you can select a specific value stream to see metrics, including sparkline graphics for each metric. By default, CloudBees DevOptics shows metrics for the last 24 hours. You can select a different time range from the drop-down list above the metrics.

  • To see an overview of the metrics for all of your value streams, go to the Value Streams screen. Click a column name to sort. Select the drop-down menu next to Aggregate metrics over to select a different time range.

  • To see an overview of a specific value stream, on the Value Streams screen, click the name of the value stream. Select the drop-down menu next to Metrics for this value stream over to select a different time frame.

Viewing metrics for a gate

You can view the individual metrics for any gate in a value stream. By default, CloudBees DevOptics shows metrics for the last 24 hours. You can select a different time frame from the drop-down list above the metrics.

  1. From the Value Streams screen, select the value stream that contains the gate for which you want to view metrics.

  2. Click the gate for which you want to view metrics. Select the drop-down menu next to Metrics for this value stream over to select a different time frame.

Downloading metrics to a CSV file

DevOptics lets you download value stream and gate level metrics to a .csv file for additional analysis and reporting. The metrics are grouped per day so you can analyze how these metrics changed over time.

  1. From the Value Streams screen, select the value stream for which you want to download metrics.

  2. Do one of the following:

    • To download metrics for the value stream, click Download Metrics (CSV) on the Value Stream Overview window.

    • To download metrics for a gate, select the gate, and then click Download Metrics on the Gate metrics window.

  3. Select the metrics and the date range for the metrics, and then click Export Metrics (CSV).

  4. Go to your Downloads folder to locate the file. The file name begins with devoptics-metrics.

Viewing metrics trends

CloudBees DevOptics surfaces how metrics change over time. This allows you to analyze the impact of changes and initiatives that you made to your processes. You can view metrics trends at the value stream level and at the gate level.

CloudBees DevOptics computes the metrics trends based on the timeframe that you select.

  • 1 day, 2 days, and 7 days - The metrics trends are rolled up by the hour, so each data point is the value of the metric for that hour.

  • 14 days, 30 days, and 90 days - The metrics trends are rolled up by the day, so each data point is the value of the metric for that day.

sparklines