
Table of contents

Introduction
Virtualisation:
Vagrant
Containerisation:
Kubernetes:
Definition of columns
Ansible:
Puppet:
Installation
Hello world
Terraform:
Docker Provider
Example Usage
Registry Credentials
Certificate information
Argument Reference
CI/CD Tools:
CI/CD defined
How continuous integration improves collaboration and quality
Continuous testing goes beyond test automation
The CD pipeline automates changes to multiple environments
CI/CD enables more frequent code deployments
Jenkins vs Travis - Comparing Two Popular CI Tools
What Is Continuous Integration?
How Do CI Servers Work?
What Is Jenkins?
What Is Travis CI?
A Side-by-Side Comparison
Jenkins vs Travis CI - In Summary
Get started with Bitbucket Pipelines
In this section
Related content
Still need help?
Configure bitbucket-pipelines.yml
On this page
In this section
Related content
Still need help?
Key concepts
Keywords
pipelines
default
branches
tags
bookmarks
custom
pull-requests
parallel
step
name
image
Examples
trigger
deployment
size
Overriding the size of a single step
Increasing the resources for an entire pipeline
script
pipes
after-script
artifacts
options
max-time
clone
lfs
depth
definitions
services
caches
Glob patterns cheat sheet
Step 1: Choose a language template
Step 2: Configure your pipelines
Drone.io
Practical: Setting up Drone
Set up Infrastructure
Install Prerequisites
Generate Gitlab oAuth credentials
Create a repo on GitLab
Configure Drone
Recap
Practical: Giving Drone access to Google Cloud
Practical: Finally, a pipeline!!
Conclusion
PS - Cleanup (IMPORTANT!)
Build matrix
Create a Pipeline in Blue Ocean
Prerequisites
Run Jenkins in Docker
On macOS and Linux
On Windows
Accessing the Jenkins/Blue Ocean Docker container
Setup wizard
Unlocking Jenkins
Customizing Jenkins with plugins
Creating the first administrator user
Stopping and restarting Jenkins
Fork the sample repository on GitHub
Create your Pipeline project in Blue Ocean
Create your initial Pipeline
Add a test stage to your Pipeline
Add a final deliver stage to your Pipeline
Follow up (optional)
Wrapping up
Application-release automation
Contents
Relationship with DevOps
Relationship with Deployment
ARA Solutions
Deploy a Spring Boot application to Cloud Foundry with GitLab CI/CD
Introduction
Requirements
Create your project
Configure the deployment to Cloud Foundry
Configure GitLab CI/CD to deploy your application
Semaphore CI Publishing Docker images on DockerHub
Creating The Secret
Configuring the Pipeline
Managing GoCD pipelines
Creating a new pipeline
Add a new material to an existing pipeline
Blacklist
Add a new stage to an existing pipeline
Add a new job to an existing stage
Add a new task to an existing Job
Clone an existing pipeline
Delete an existing pipeline
Pipeline Templates
Creating Pipeline Templates
Example
Using Administration UI
Using XML
Editing Pipeline Templates
Viewing Pipeline Templates
See also...
Stage approvals in action
Managing pipeline groups

AI serving Continuous Integration and Continuous Delivery

Introduction

Since 1936, technology has continued to advance thanks to the Turing machine, invented by the great master of the same name, Alan Turing.

We must not forget that a large part of the scientific art that shapes our everyday life was initiated by the algorithms introduced into science by the Andalusian scholars.

All of this was later developed and adapted to the science of information processing by well-known scientists such as the great Dijkstra, to name only one.

Today we increasingly see artificial intelligence (AI) taking the lead over humans; the catch is that it is made by humans themselves, and therein lies all the beauty of science, illustrated by the trust we place in the works we realize at the heart of the scientific universe to best serve future generations.

Through Boston Dynamics robots, industrial robots and autonomous cars, AI is proving itself more and more in our everyday life.

However, like every other technological phenomenon, it does not lack detractors: mass destruction of employment, road accidents.

AI is not new, but it still has a long way to go and much progress to make. This makes it the most popular tool and technology area of the moment.

Technically speaking, any algorithm that automates actions previously performed by humans can be defined as artificial intelligence, as the name suggests.

This phenomenon, which keeps spreading from our hotels to the factories where it replaces inventory agents, holds a multitude of secrets, and this document focuses on them, especially on the conceptual and technical sides.

Virtualisation:

Vagrant

Today we can deploy virtual compute machines at scale thanks to Docker Swarm or Kubernetes, and we can even install everything via Vagrant. Explanation:

Vagrant virtual machine deployment:

Vagrant.configure("2") do |config|

config.vm.provision "shell", inline: "echo Hello"

config.vm.define "web" do |web|

web.vm.box = "apache"

end

config.vm.define "db" do |db|

db.vm.box = "mysql"

end

end

Here we define two virtual machines to be deployed, "web" based on the apache box and "db" based on the mysql box, after running the shell provisioning script.

Containerisation:
A container is a standard unit of software that packages up code and all its dependencies so the
application runs quickly and reliably from one computing environment to another. A Docker
container image is a lightweight, standalone, executable package of software that includes
everything needed to run an application: code, runtime, system tools, system libraries and settings.

Container images become containers at runtime and in the case of Docker containers - images
become containers when they run on Docker Engine. Available for both Linux and Windows-based
applications, containerized software will always run the same, regardless of the infrastructure.
Containers isolate software from its environment and ensure that it works uniformly despite
differences for instance between development and staging.
Docker containers that run on Docker Engine are:

 Standard: Docker created the industry standard for containers, so they can be portable anywhere.

 Lightweight: Containers share the machine's OS kernel and therefore do not require an OS per application, driving higher server efficiencies and reducing server and licensing costs.

 Secure: Applications are safer in containers, and Docker provides the strongest default isolation capabilities in the industry.

Kubernetes :

Subsequently, on these same virtual machines, we can launch and install microservices through Docker, the container and image management tool, which can itself be managed at scale via Kubernetes.

Example (docker-compose):

version: '2'

services:
  nginx:
    build: nginx
    restart: always
    ports:
      - 8080:80
    volumes_from:
      - wordpress

  wordpress:
    image: wordpress:php7.1-fpm-alpine
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_PASSWORD: example

  mysql:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./demo-db:/var/lib/mysql

Here, for example, we see how we could define microservices with their environments, to be deployed later in the VMs created earlier by our Vagrant tool.

We can of course do without Docker and rely only on the Vagrant tool; however, this process seems more complex and is less commonly used.

Another methodology is to use the Kubernetes solution to manage all processes.

Example: nginx server deployment via Kubernetes

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

After creating this file, we call the command kubectl create -f file_name.yaml on a VM or server on which Kubernetes has previously been installed.
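To make these nginx replicas reachable, a Service can be added on top of the ReplicationController above. The manifest below is a minimal sketch: the selector is assumed to match the app: nginx label used above, and NodePort is only one possible service type.

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort        # expose the pods on a port of every node (one option among several)
  selector:
    app: nginx          # must match the label set in the ReplicationController template above
  ports:
  - port: 80            # port exposed by the Service inside the cluster
    targetPort: 80      # containerPort of the nginx pods

It would be created the same way, with kubectl create -f service.yaml.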

Here is a list of the different solutions and methodologies for deploying and running Kubernetes:

IaaS Provider | Config. Mgmt. | OS | Networking | Docs | Support Level
any | any | multi-support | any CNI | docs | Project (SIG-cluster-lifecycle)
Google Kubernetes Engine | | | GCE | docs | Commercial
Docker Enterprise | custom | multi-support | multi-support | docs | Commercial
IBM Cloud Private | Ansible | multi-support | multi-support | docs | Commercial and Community
Red Hat OpenShift | Ansible & CoreOS | RHEL & CoreOS | multi-support | docs | Commercial
Stackpoint.io | | multi-support | multi-support | docs | Commercial
AppsCode.com | Saltstack | Debian | multi-support | docs | Commercial
Madcore.Ai | Jenkins DSL | Ubuntu | flannel | docs | Community (@madcore-ai)
Platform9 | | multi-support | multi-support | docs | Commercial
Kublr | custom | multi-support | multi-support | docs | Commercial
Kubermatic | | multi-support | multi-support | docs | Commercial
IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | docs | Commercial
Giant Swarm | | CoreOS | flannel and/or Calico | docs | Commercial
GCE | Saltstack | Debian | GCE | docs | Project
Azure Kubernetes Service | | Ubuntu | Azure | docs | Commercial
Azure (IaaS) | | Ubuntu | Azure | docs | Community (Microsoft)
Bare-metal | custom | Fedora | none | docs | Project
Bare-metal | custom | Fedora | flannel | docs | Community (@aveshagarwal)
libvirt | custom | Fedora | flannel | docs | Community (@aveshagarwal)
KVM | custom | Fedora | flannel | docs | Community (@aveshagarwal)
DCOS | Marathon | CoreOS/Alpine | custom | docs | Community (Kubernetes-Mesos Authors)
AWS | CoreOS | CoreOS | flannel | docs | Community
GCE | CoreOS | CoreOS | flannel | docs | Community (@pires)
Vagrant | CoreOS | CoreOS | flannel | docs | Community (@pires, @AntonioMeireles)
CloudStack | Ansible | CoreOS | flannel | docs | Community (@sebgoa)
VMware vSphere | any | multi-support | multi-support | docs | Community
Bare-metal | custom | CentOS | flannel | docs | Community (@coolsvap)
lxd | Juju | Ubuntu | flannel/canal | docs | Commercial and Community
AWS | Juju | Ubuntu | flannel/calico/canal | docs | Commercial and Community
Azure | Juju | Ubuntu | flannel/calico/canal | docs | Commercial and Community
GCE | Juju | Ubuntu | flannel/calico/canal | docs | Commercial and Community
Oracle Cloud | Juju | Ubuntu | flannel/calico/canal | docs | Commercial and Community
Rackspace | custom | CoreOS | flannel/calico/canal | docs | Commercial
VMware vSphere | Juju | Ubuntu | flannel/calico/canal | docs | Commercial and Community
Bare Metal | Juju | Ubuntu | flannel/calico/canal | docs | Commercial and Community
AWS | Saltstack | Debian | AWS | docs | Community (@justinsb)
AWS | kops | Debian | AWS | docs | Community (@justinsb)
Bare-metal | custom | Ubuntu | flannel | docs | Community (@resouer, @WIZARD-CXY)
oVirt | | | | docs | Community (@simon3z)
any | any | any | any | docs | Community (@erictune)
any | any | any | any | docs | Commercial and Community
any | RKE | multi-support | flannel or canal | docs | Commercial and Community
any | Gardener Cluster-Operator | multi-support | multi-support | docs | Project/Community and Commercial
Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | docs | Commercial
Agile Stacks | Terraform | CoreOS | multi-support | docs | Commercial
IBM Cloud Kubernetes Service | | Ubuntu | calico | docs | Commercial
Digital Rebar | kubeadm | any | metal | docs | Community (@digitalrebar)
VMware Cloud PKS | | Photon OS | Canal | docs | Commercial
Mirantis Cloud Platform | Salt | Ubuntu | multi-support | docs | Commercial

Definition of columns

 IaaS Provider is the product or organization which provides the virtual or physical machines
(nodes) that Kubernetes runs on.

 OS is the base operating system of the nodes.

 Config. Mgmt. is the configuration management system that helps install and maintain
Kubernetes on the nodes.

 Networking is what implements the networking model. Those with networking type none may not support more than a single node, or may support multiple VM nodes in a single physical node.

 Conformance indicates whether a cluster created with this configuration has passed the project's conformance tests for supporting the API and base features of Kubernetes v1.0.0.

 Support Levels

Project: Kubernetes committers regularly use this configuration, so it usually works with the latest release of Kubernetes.

Commercial: A commercial offering with its own support arrangements.

Community: Actively supported by community contributions. May not work with recent releases of Kubernetes.

Inactive: Not actively maintained. Not recommended for first-time Kubernetes users, and may be removed.

 Notes has other relevant information, such as the version of Kubernetes used.

There is also the Vagrant methodology, which makes it possible to deploy containers directly via vagrant up, implicitly based on Docker.

Example:

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--memory", "2048"]
    v.customize ["modifyvm", :id, "--cpus", "2"]
  end

  # Configure Docker images on your machine: pull the images we need...
  config.vm.provision "docker" do |docker|
    docker.pull_images "progrium/consul"
    docker.pull_images "progrium/registrator"
  end

  # ...then run a container from one of them.
  config.vm.provision "docker" do |docker|
    docker.run "progrium/consul",
      args: "-p 8500:8500",
      cmd: "-server -bootstrap"
  end
end
Here we provision Docker via Vagrant, then ask it to pull specific images and run a container, without leaving the Vagrantfile.

The different tools that allow configuration of containers and VMs include:

Ansible:

# Containers are matched either by name (if provided) or by an exact match of
# the image they were launched with and the command they're running. The module
# can accept either a name to target a container uniquely, or a count to operate
# on multiple containers at once when it makes sense to do so.

# Ensure that a data container with the name "mydata" exists. If no container
# by this name exists, it will be created, but not started.
- name: data container
  docker:
    name: mydata
    image: busybox
    state: present
    volumes:
    - /data

# Ensure that a Redis server is running, using the volume from the data
# container. Expose the default Redis port.
- name: redis container
  docker:
    name: myredis
    image: redis
    command: redis-server --appendonly yes
    state: started
    expose:
    - 6379
    volumes_from:
    - mydata

# Ensure that a container of your application server is running. This will:
# - pull the latest version of your application image from DockerHub.
# - ensure that a container is running with the specified name and exact image.
#   If any configuration options have changed, the existing container will be
#   stopped and removed, and a new one will be launched in its place.
# - link this container to the existing redis container launched above with
#   an alias.
# - grant the container read write permissions for the host's /dev/sda device
#   through a node named /dev/xvda
# - bind TCP port 9000 within the container to port 8080 on all interfaces
#   on the host.
# - bind UDP port 9001 within the container to port 8081 on the host, only
#   listening on localhost.
# - specify 2 ip resolutions.
# - set the environment variable SECRET_KEY to "ssssh".
- name: application container
  docker:
    name: myapplication
    image: someuser/appimage
    state: reloaded
    pull: always
    links:
    - "myredis:aliasedredis"
    devices:
    - "/dev/sda:/dev/xvda:rwm"
    ports:
    - "8080:9000"
    - "127.0.0.1:8081:9001/udp"
    extra_hosts:
      # illustrative host/IP values; the original values were lost in extraction
      host1: "192.168.0.1"
      host2: "192.168.0.2"
    env:
      SECRET_KEY: ssssh

# Ensure that exactly five containers of another server are running with this
# exact image and command. If fewer than five are running, more will be launched;
# if more are running, the excess will be stopped.
- name: load-balanced containers
  docker:
    state: reloaded
    count: 5
    image: someuser/anotherappimage
    command: sleep 1d

# Unconditionally restart a service container. This may be useful within a
# handler, for example.
- name: application service
  docker:
    name: myservice
    image: someuser/serviceimage
    state: restarted

# Stop all containers running the specified image.
- name: obsolete container
  docker:
    image: someuser/oldandbusted
    state: stopped

# Stop and remove a container with the specified name.
- name: obsolete container
  docker:
    name: ohno
    image: someuser/oldandbusted
    state: absent

# Example Syslogging Output
- name: myservice container
  docker:
    name: myservice
    image: someservice/someimage
    state: reloaded
    log_driver: syslog
    log_opt:
      syslog-address: tcp://my-syslog-server:514
      syslog-facility: daemon
      syslog-tag: myservice

Puppet:

Installation
Packaged as a Puppet module, image_build is available on the Forge. You can install it with the usual
tools, including the puppet module command:

puppet module install puppetlabs/image_build


You also need to install Docker, for which I'd recommend the excellent Docker for Mac or Docker for
Windows. Or you can just install Docker from packages if you're on Linux.

After installing the module, you can use some new Puppet commands, including puppet docker,
which in turn has two subcommands: one triggers a build of an image, while the other outputs the
intermediary Dockerfile. The examples directory contains a set of examples for experimenting with.
Let’s look at one of those now.

Hello world
We'll create a Docker image running Nginx and serving a simple text file. This is a realistic but
obviously simplistic example; it could be any application, such as a custom Java or Ruby or other
application.

First, let’s use a few Puppet modules from the Forge. We'll use the existing Nginx module and specify
its dependencies. We'll also use the dummy_service module to ignore service resources in the Nginx
module. We do this by creating a standard Puppetfile.

$ cat Puppetfile

forge 'https://forgeapi.puppetlabs.com'

mod 'puppet/nginx'

mod 'puppetlabs/stdlib'

mod 'puppetlabs/concat'

mod 'puppetlabs/apt'

mod 'puppetlabs/dummy_service'

Next, let’s write a simple manifest. Disabling nginx daemon mode isn't supported by the module just
yet (but the folks maintaining the module have just merged this capability), so let’s drop a file in
place with an exec. Have a look at manifests/init.pp:

include 'dummy_service'

class { 'nginx': }

nginx::resource::vhost { 'default':
  www_root => '/var/www/html',
}

file { '/var/www/html/index.html':
  ensure  => present,
  content => 'Hello Puppet and Docker',
}

exec { 'Disable Nginx daemon mode':
  path    => '/bin',
  command => 'echo "daemon off;" >> /etc/nginx/nginx.conf',
  unless  => 'grep "daemon off" /etc/nginx/nginx.conf',
}

Let’s also provide some metadata for the image we intend to build. Take a look at metadata.yaml.
It’s worth noting that this is the only new bit so far.

cmd: nginx

expose: 80

image_name: puppet/nginx

Finally, we can build a Docker image with the following command:

puppet docker build

That’s it. You should see the build output and a new image being saved locally. We’ve aimed for a
user experience that’s at least as simple as running docker build.

Let’s run the resulting image and confirm it’s serving the content we added. We expose the
webserver on port 8080 to the local host to make that easier.

$ docker run -d -p 8080:80 puppet/nginx

83d5fbe370e84d424c71c1c038ad1f5892fec579d28b9905cd1e379f9b89e36d

$ curl http://0.0.0.0:8080

Hello Puppet and Docker%

The image could be run via the Puppet Docker module, or atop any of the container schedulers.

Chef and Habitat:


Continuous Delivery is a software development discipline where you build software in such a way that the software can be released to production at any time. The primary goal of the process is to be production ready anytime and anywhere.

Martin Fowler

In this simple and amazing piece of article we are going to discuss and explore some new, amazing and rather interesting pieces of technology. One is Habitat, an automation tool that automates your process to build and publish Docker images; the second is Automate, which is a new Chef CI/CD tool with a cool new dashboard and better features. As an added bonus I am also going to share some nice tips that I use to make my life easier while handling CI/CD pipelines. So let's get started.

Habitat
Introduction to Habitat

Habitat is an amazing new tool introduced by Chef. It basically tries to serve one motive: to automate the process of making a container image as easily as possible. You can think of it as a Dockerfile for Docker, except that it has some new features for building images and a process to publish them from a CI/CD perspective. The tool was introduced in 2016 and is still in its development phase. It is written in Rust and reactive by nature. Now let's do some installation:

First, visit https://github.com/habitat-sh/habitat#install :

$ curl https://raw.githubusercontent.com/habitat-sh/habitat/master/components/hab/install.sh | sudo bash

After the installation, try running it on the command line using the below command:

$ hab

hab 0.51.0/20171219021329

Authors: The Habitat Maintainers <humans@habitat.sh>

"A Habitat is the natural environment for your services" - Alan


Turing

USAGE:

hab [SUBCOMMAND]
FLAGS:

-h, --help Prints help information

-V, --version Prints version information

SUBCOMMANDS:

bldr Commands relating to Habitat Builder

cli Commands relating to Habitat runtime config

config Commands relating to Habitat runtime config

file Commands relating to Habitat files

help Prints this message or the help of the given


subcommand(s)

origin Commands relating to Habitat origin keys

pkg Commands relating to Habitat packages

plan Commands relating to plans and other app-specific


configuration.

ring Commands relating to Habitat rings

studio Commands relating to Habitat Studios

sup Commands relating to the Habitat Supervisor

svc Commands relating to Habitat services

user Commands relating to Habitat users

ALIASES:

apply Alias for: 'config apply'

install Alias for: 'pkg install'

run Alias for: 'sup run'

setup Alias for: 'cli setup'

start Alias for: 'svc start'

stop Alias for: 'svc stop'


term Alias for: 'sup term'

If you receive the above output, then you have successfully installed habitat.

Habitat Architecture

Now let us look more closely at its architecture and how to write it. You can clearly observe the various files one has to write in order to bring up the container/image. The main file in this section is the plan.sh file, which is responsible for the deployment strategy, dependencies and package name of the Habitat image. It is mandatory to create this file and configure it properly in order to achieve the best results.

Next is the default.toml file. This file contains information about the ports and external configuration of your application. It is similar to having nginx.conf for Nginx or apache2.conf for Apache, which I believe is an interesting and good idea.

For the hooks part, I observed its usage while exploring some of the samples provided by the Habitat team in their docs. In simple terms, it is basically breaking down your requirements, as per your application, into multiple stages, each with its own priority and running order, like the ENTRYPOINT in a Dockerfile. For example, here the Init file in scripts contains your initialization commands.
Some sample examples:

# Default.toml

port = 8090

important_message = 'RUN NOW!!'


# Hooks

## Init

cd {{pkg.svc_path}}

if(Test-Path var) { Remove-Item var -Recurse -Force }

New-Item -Name var -ItemType Junction -target "{{pkg.path}}/www" | Out-Null

## Run

cd "{{pkg.svc_var_path}}"

$env:HAB_CONFIG_PATH="{{pkg.svc_config_path}}"

# Plan.sh

pkg_origin=ramitsurana
pkg_name=hadoop
pkg_version="0.0.1"
pkg_license=('MIT')
pkg_maintainer="Ramit Surana <ramitsurana@gmail.com>"
pkg_upstream_url=https://github.com/ramitsurana/chef-automate-habitat
pkg_deps=(core/vim core/jre8)
pkg_binds=()

do_build() {
  return 0
}

do_install() {
  return 0
}

Habitat Builder

The Habitat Builder is a place similar to Docker Hub or Quay.io. It is a place where you can automatically check in your code with Habitat and build a variety of different container images. It also enables you to publish your Docker images on Docker Hub by connecting your Docker Hub account. To get started, sign up at Habitat Builder.

The term origin here can be defined as a namespace which is created by the user/organization to build one's own packages. It is similar to defining your name in the Docker Hub account.

As you can observe from above, Habitat asks you to connect your GitHub account and specify the path where your plan.sh file is placed. By default it searches for your plan.sh file under the habitat folder. You can specify your own path and use the Docker Hub integration if you wish to publish your images to Docker Hub.

Similar to DockerHub, you can also connect your ECR Registry on your AWS account by visiting the
Integrations section.

After creating a package/build you can observe the dependencies by scrolling down the page. Here you can observe that it consists of 2 sections, labelled Transitive dependencies and Dependencies. In simple terms, the transitive dependencies are the basic set of packages that are required by every application that you wish to build using Docker. These are provisioned and managed by the Habitat team. You can also treat them as similar to the FROM section when writing a Dockerfile.

On the other hand, the Dependencies label is used to signify the extra packages mentioned in your plan.sh file that are used by your application.

Habitat Studio

Habitat Studio is another important feature of Habitat that allows you to test and run your application in a simulation of a real environment before you publish it. If you are familiar with Python, you can think of it as similar to virtualenv. So let's try out hab studio.

$ hab studio setup


In the setup default origin, choose your name for the origin. In my case I am taking it as ramitsurana. In order to achieve our objective we are going to use this GitHub feature.

In case you are wondering how to create a new GitHub access token, please open the following URL. Copy the generated token into the hab CLI tool and you are good to go.

Do make sure to save this token. We will use it in the next part of the article.

Docker Vs Habitat

Chef Automate
Introduction to Chef Automate

Chef Automate is a CI/CD-based solution provided by Chef to cover your end-to-end delivery requirements. It provides you with the necessary tools to make your life easier and simpler. It has built-in integration for features and tools like InSpec for compliance, LDAP/SAML support, Slack integration for notifications, etc.
Trying Out on Local System:
Chef Automate can be easily tried on your local system by downloading Chef Automate from here

For the cli, Download the package from here .

In order to check, try running:

ramit@ramit-Inspiron-3542:~$ automate-ctl

I don't know that command.

omnibus-ctl: command (subcommand)

create-enterprise

Create a new enterprise

create-user

Create a new user

create-users

Create new users from a tsv file

delete-enterprise

Deletes an existing enterprise

delete-project

Deletes an existing project

delete-runner

.....
Check if everything is good or not:

ramit@ramit-Inspiron-3542:~$ sudo automate-ctl preflight-check

[sudo] password for ramit:

Running Preflight Checks:

Checking for required resources...

✔ [passed] CPU at least 4 cores
✖ [failed] memory at least 16GB

Checking for required directories...

✔ [passed] /var
✔ [passed] /var has at least 80GB free
✔ [passed] /etc

Checking for required umask...

✔ [passed] 0022

....

For Authenticating License:

(Grab your free License from here)

// Setup License & organization

$ automate-ctl setup --license /$PATH/automate.license --server-url https://localhost/organizations/$org_name --enterprise default --configure --no-build-node

$ automate-ctl reconfigure

//Create a default user and password

$ automate-ctl create-user default $user --password admin --roles "admin"

Try opening http://127.0.0.1:80 to interact with the Web UI.

Chef Automate Setup on AWS EC2


Prerequisites
In order for the Chef Automate setup to work, we will use a minimal setup. Here are the configuration details:

Category | Inbound Security Ports Access | Operating System & Instance Size
Chef Server | 22 (SSH), 80 (HTTP), 443 (HTTPS), 10000-10003 (push jobs) | Ubuntu 16.04 (ami-21766642) & t2.micro
Chef Automate Server | 22 (SSH), 80 (HTTP), 443 (HTTPS), 8989 (Git) | Ubuntu 16.04 (ami-21766642) & t2.large

Do make sure to install the license file required for running Chef Automate from here. It's a 30 day free trial. As per its current pricing page, the fee for Chef Automate on AWS is $0.0155 per node/hour.

Also, we will be using fully-qualified domain names (FQDNs) as recommended by Chef.

Please make sure to note the FQDN for both your Chef server and Chef Automate server.

$CHEF_SERVER_FQDN="Public DNS NAME of Chef Server EC2 Instance"

$CHEF_AUTOMATE_FQDN="Public DNS NAME of Chef Automate EC2 Instance"

Setup
Let’s get started:

Using the AWS console, we can start 2 EC2 instances with the (t2.large) instance type. Make sure to configure your security groups as shown below:

Make sure to add port 8989 for Git on the Chef Automate server.

After bringing up the chef-server machine, please log into the machine and use git to clone the
following repo:

$ git clone https://github.com/ramitsurana/chef-automate-habitat

Run scripts/install-chef-server.sh :

// Make sure to set the variable with proper DNS Name
$ CHEF_AUTOMATE_FQDN="Public DNS NAME of Chef Automate EC2 Instance"

// Add permissions to execute
$ chmod +x $HOME/chef-automate-habitat/scripts/install-chef-server.sh

// Run the script to install chef
$ sudo $HOME/chef-automate-habitat/scripts/install-chef-server.sh $CHEF_AUTOMATE_FQDN ramit

Copy files to the Chef server from your local machine using scp:

// License File
$ scp -i ~/.ssh/private_key -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ~/Downloads/automate.license ubuntu@ec2-52-91-162-254.compute-1.amazonaws.com:/tmp

Copy files to Chef Automate from the Chef server using scp:

//Copy New PEM File from Chef Server to your local machine
$ scp -i <YOUR-EC2>.pem -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@<YOUR-CHEF-SERVER-DNS>:/drop/delivery.pem /tmp

//Upload the delivery.pem file to Chef Automate instance
$ scp -i <YOUR-CHEF-AUTOMATE>.pem -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null /tmp/delivery.pem ubuntu@<YOUR-CHEF-AUTOMATE-DNS>:/home/ubuntu

//Upload your license file to Chef Automate instance
$ scp -i <YOUR-CHEF-AUTOMATE>.pem -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null /<PATH-TO-AUTOMATE-LICENSE>.license ubuntu@<YOUR-CHEF-AUTOMATE-DNS>:/home/ubuntu

Now your Chef server is fully up and ready. We now move on to the Chef Automate server, after getting into it using ssh. Follow the steps below.

Use git to clone the following repo:

$ git clone https://github.com/ramitsurana/chef-automate-habitat

// Make sure to set the variable with proper DNS Name
$ CHEF_SERVER_FQDN="Public DNS NAME of Chef Server EC2 Instance"

// Add permissions to execute
$ chmod +x $HOME/chef-automate-habitat/scripts/install-chef-automate.sh

// Run the script to install chef
$ sudo $HOME/chef-automate-habitat/scripts/install-chef-automate.sh $CHEF_SERVER_FQDN ramit

After completing the above steps, you can proceed to open the DNS/IP of the Chef Automate server.

Hooray! You have successfully configured Chef Automate and now you are ready to log in.

Let's start exploring some new features of the Chef Automate dashboard:

With your user name and the password admin, try to log in. You will observe the following screen:
For shutting down chef automate:

$ sudo automate-ctl stop

Chef Automate Internals

Some of the chef automate internals that I observed while exploring this tool are as follows:
 Elasticsearch

 Logstash

 Nginx

 Postgresql

 RabbitMq

We will be discussing more on this in the next article :)

Tips & Tricks on CI/CD


As a bonus, sharing some tips on building & managing CI/CD in a better way:

 Automate Liveness Agent

You can also use chef automate liveness agent for sending keepalive messages to Chef Automate,
which prevents nodes that are up but not frequently running Chef Client from appearing as “missing”
in the Automate UI. At the time of writing, it is currently in development.

 Using Syntax Checker


One of the primary starting points in CI/CD is writing the file that describes how our jobs are handled in the pipeline. For various tools there are many syntax validators, and I found them really useful. Some of these tools are:

1. Jenkins

2. Gitlab

3. Travis

 Use CI Web Pages for better output in Web Development Related Projects

You can use this script in Gitlab (.gitlab-ci.yml) to obtain the output at http://<-USERNAME-OF-
GITLAB->.gitlab.io/<-PROJECT-NAME->/

pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master

For GitHub use the below script in (_config.yml) to obtain the output at http://<-USERNAME-OF-
GITHUB->.github.io/<-PROJECT-NAME->/

theme: jekyll-theme-cayman

 Avoid Polling; use GitHub Hooks

As correctly said by Kohsuke, it is important that we adopt new methods to trigger the pipelines.

 Use proper checkout strategy

One of the mistakes one can make while checking out multiple repositories in a pipeline is that an unintended commit on another repo might trigger the pipeline. The best command to check out the repo is this:

checkout(
  poll: false,
  scm: [
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    userRemoteConfigs: [[
      url: MY_URL.git,
      credentialsId: CREDENTIALS_ID]],
    extensions: [
      [$class: 'DisableRemotePoll'],
      [$class: 'PathRestriction', excludedRegions: '', includedRegions: '*']]
  ])

 Try Using Python for writing automation scripts

Python is a super amazing and fun language to work with. One of the reasons I recommend it is its awesome library support for things like dictionaries, JSON, CSV, etc.

Terraform:

Docker Provider
The Docker provider is used to interact with Docker containers and images. It uses the Docker API to
manage the lifecycle of Docker containers. Because the Docker provider uses the Docker API, it is
immediately compatible not only with single server Docker but Swarm and any additional Docker-
compatible API hosts.

Use the navigation to the left to read about the available resources.

Example Usage

# Configure the Docker provider
provider "docker" {
  host = "tcp://127.0.0.1:2376/"
}

# Create a container
resource "docker_container" "foo" {
  image = "${docker_image.ubuntu.latest}"
  name  = "foo"
}

resource "docker_image" "ubuntu" {
  name = "ubuntu:latest"
}
Registry Credentials

Registry credentials can be provided on a per-registry basis with the registry_auth field, passing either a config file or the username/password directly.

Note: the location of the config file is on the machine Terraform runs on, even if the specified Docker host is on another machine.

provider "docker" {
  host = "tcp://localhost:2376"

  registry_auth {
    address     = "registry.hub.docker.com"
    config_file = "~/.docker/config.json"
  }

  registry_auth {
    address  = "quay.io:8181"
    username = "someuser"
    password = "somepass"
  }
}

data "docker_registry_image" "quay" {
  name = "myorg/privateimage"
}

data "docker_registry_image" "quay" {
  name = "quay.io:8181/myorg/privateimage"
}

Note: when passing in a config file, make sure every repo in the auths object has a corresponding auth string.

In this case, either use username and password directly, set the environment variables DOCKER_REGISTRY_USER and DOCKER_REGISTRY_PASS, or add the string manually via

echo -n "user:pass" | base64

# dXNlcjpwYXNz=

and paste it into ~/.docker/config.json:

{
  "auths": {
    "repo.mycompany:8181": {
      "auth": "dXNlcjpwYXNz="
    }
  }
}

Certificate information

Specify certificate information either with a directory or directly with the content of the files for connecting to the Docker host via TLS.

provider "docker" {
  host = "tcp://your-host-ip:2376/"

  # -> specify either
  cert_path = "${pathexpand("~/.docker")}"

  # -> or the following
  ca_material   = "${file(pathexpand("~/.docker/ca.pem"))}" # this can be omitted
  cert_material = "${file(pathexpand("~/.docker/cert.pem"))}"
  key_material  = "${file(pathexpand("~/.docker/key.pem"))}"
}

Argument Reference

The following arguments are supported:

 host - (Required) This is the address to the Docker host. If this is blank, the DOCKER_HOST environment variable will also be read.

 cert_path - (Optional) Path to a directory with certificate information for connecting to the Docker host via TLS. It is expected that the 3 files {ca, cert, key}.pem are present in the path. If the path is blank, the DOCKER_CERT_PATH will also be checked.

 ca_material, cert_material, key_material - (Optional) Content of ca.pem, cert.pem, and key.pem files for TLS authentication. Cannot be used together with cert_path. If ca_material is omitted the client does not check the server's certificate chain and host name.

 registry_auth - (Optional) A block specifying the credentials for a target v2 Docker registry.

o address - (Required) The address of the registry.

o username - (Optional) The username to use for authenticating to the registry. Cannot be used with the config_file option. If this is blank, the DOCKER_REGISTRY_USER will also be checked.

o password - (Optional) The password to use for authenticating to the registry. Cannot be used with the config_file option. If this is blank, the DOCKER_REGISTRY_PASS will also be checked.

o config_file - (Optional) The path to a config file containing credentials for authenticating to the registry. Cannot be used with the username/password options. If this is blank, the DOCKER_CONFIG will also be checked.

NOTE on Certificates and docker-machine: As per Docker Remote API documentation, in any
docker-machine environment, the Docker daemon uses an encrypted TCP socket (TLS) and
requires cert_path for a successful connection. As an alternative, if using docker-machine,
run eval $(docker-machine env <machine-name>) prior to running Terraform, and the host
and certificate path will be extracted from the environment.

CI/CD Tools:
Continuous integration (CI) and continuous delivery (CD) embody a culture, set of operating
principles, and collection of practices that enable application development teams to deliver code
changes more frequently and reliably. The implementation is also known as the CI/CD pipeline and is
one of the best practices for devops teams to implement.

Table of Contents

 CI/CD defined

 How continuous integration improves collaboration and quality

 Continuous testing goes beyond test automation

 The CD pipeline automates changes to multiple environments

 CI/CD enables more frequent code deployments

CI/CD defined
Continuous integration is a coding philosophy and set of practices that drive development teams to
implement small changes and check in code to version control repositories frequently. Because most
modern applications require developing code in different platforms and tools, the team needs a
mechanism to integrate and validate its changes.

The technical goal of CI is to establish a consistent and automated way to build, package, and test
applications. With consistency in the integration process in place, teams are more likely to commit
code changes more frequently, which leads to better collaboration and software quality.

Continuous delivery picks up where continuous integration ends. CD automates the delivery of
applications to selected infrastructure environments. Most teams work with multiple environments
other than the production, such as development and testing environments, and CD ensures there is
an automated way to push code changes to them. CD automation then performs any necessary
service calls to web servers, databases, and other services that may need to be restarted or follow
other procedures when applications are deployed.

Continuous integration and delivery requires continuous testing because the objective is to deliver
quality applications and code to users. Continuous testing is often implemented as a set of
automated regression, performance, and other tests that are executed in the CI/CD pipeline.

A mature CI/CD practice has the option of implementing continuous deployment, where application changes run through the CI/CD pipeline and passing builds are deployed directly to production environments. Teams practicing continuous delivery elect to deploy to production on a daily or even hourly schedule, though continuous delivery isn't always optimal for every business application.
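To make the build/test/deliver stages concrete, here is a minimal pipeline configuration sketch written in GitLab CI/CD syntax (one of the tools covered later in this document). The job names, container image, and commands are illustrative assumptions rather than a reference to any specific project:

# .gitlab-ci.yml - minimal CI/CD sketch (illustrative job names and commands)
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  image: maven:3-jdk-8         # assumed build image
  script:
    - mvn package              # build and package the application

test-job:
  stage: test
  image: maven:3-jdk-8
  script:
    - mvn test                 # run the automated test suite

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh staging      # hypothetical deployment script
  only:
    - master                   # deliver only from the main branch

Each commit pushed to the repository would then run the three stages in order, and a failing test stage stops the change before it is delivered.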

How continuous integration improves collaboration and


quality
Continuous integration is a development philosophy backed by process mechanics and some
automation. When practicing CI, developers commit their code into the version control repository
frequently and most teams have a minimal standard of committing code at least daily. The rationale
behind this is that it’s easier to identify defects and other software quality issues on smaller code
differentials rather than larger ones developed over extensive period of times. In addition, when
developers work on shorter commit cycles, it is less likely for multiple developers to be editing the
same code and requiring a merge when committing.

Teams implementing continuous integration often start with version control configuration and
practice definitions. Even though checking in code is done frequently, features and fixes are
implemented on both short and longer time frames. Development teams practicing continuous
integration use different techniques to control what features and code is ready for production.

One technique is to use version-control branching. A branching strategy such as Gitflow is selected to
define protocols over how new code is merged into standard branches for development, testing and
production. Additional feature branches are created for ones that will take longer development
cycles. When the feature is complete, the developers can then merge the changes from feature
branches into the primary development branch. This approach works well, but it can become difficult
to manage if there are many features being developed concurrently.

There are other techniques for managing features. Some teams also use feature flags, a
configuration mechanism to turn on or off features and code at run time. Features that are still under
development are wrapped with feature flags in the code, deployed with the master branch to
production, and turned off until they are ready to be used.

The build process itself is then automated by packaging all the software, database, and other
components. For example, if you were developing a Java application, CI would package all the static
web server files such as HTML, CSS, and JavaScript along with the Java application and any database
scripts.

CI not only packages all the software and database components, but the automation will also execute
unit tests and other testing. This testing provides feedback to developers that their code changes
didn’t break any existing unit tests.

Most CI/CD tools let developers kick off builds on demand, triggered by code commits in the version
control repository, or on a defined schedule. Teams need to discuss the build schedule that works
best for the size of the team, the number of daily commits expected, and other application
considerations. A best practice is to ensure that commits and builds are fast; otherwise, they may impede the progress of teams trying to code fast and commit frequently.
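As an illustration of these trigger options, a bitbucket-pipelines.yml file (the format detailed later in this document) can declare per-branch pipelines as well as a custom pipeline that is run on demand or attached to a schedule in the Bitbucket UI. The step contents below are assumptions for illustration only:

pipelines:
  default:                    # runs on every commit to branches with no specific mapping
    - step:
        script:
          - ./build.sh        # hypothetical build script
  branches:
    master:                   # runs only on commits to master
      - step:
          script:
            - ./build.sh
            - ./run-tests.sh  # hypothetical test script
  custom:
    nightly-full-build:       # run manually, or attach a schedule in the Bitbucket UI
      - step:
          script:
            - ./build.sh --full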

Continuous testing goes beyond test automation


Automated testing frameworks help quality assurance engineers define, execute, and automate
various types of tests that can help development teams know whether a software build passes or
fails. They include functionality tests that are developed at the end of every sprint and aggregated
into a regression test for the entire application. These regression tests then inform the team whether
a code change failed one or more of the tests developed across all functional areas of the application
where there is test coverage.

A best practice is to enable and require developers to run all or a subset of regressions tests in their
local environments. This step ensures that developers only commit code to version control after
regression tests pass on the code changes.

Regression tests are just the start. Performance testing, API testing, static code analysis, security
testing, and other testing forms can also be automated. The key is to be able to trigger these tests
either through command line, webhook, or web service and that they respond with success or fail
status codes.

Once testing is automated, continuous testing implies that the automation is integrated into the CI/CD pipeline. Some unit and functionality tests can be integrated into CI, flagging issues before or during the integration process. Tests that require a full delivery environment, such as performance and security testing, are often integrated into CD and performed after builds are delivered to target environments.
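One common way to wire several automated test types into the pipeline is to run them as parallel steps. Below is a sketch in Bitbucket Pipelines syntax (step names and scripts are illustrative assumptions):

pipelines:
  default:
    - step:
        name: Build
        script:
          - ./build.sh                    # hypothetical build script
    - parallel:                           # these steps run at the same time
        - step:
            name: Unit tests
            script:
              - ./run-unit-tests.sh
        - step:
            name: Static analysis and security scan
            script:
              - ./run-static-analysis.sh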

The CD pipeline automates changes to multiple


environments
Continuous delivery is the automation that pushes applications to delivery environments. Most
development teams typically have one or more development and testing environments where
application changes are staged for testing and review. A CI/CD tool such as Jenkins or Travis CI is used
to automate the steps and provide reporting.
A typical CD pipeline includes many of these steps:

 Pulling code from version control and executing a build.

 Executing any required infrastructure steps that are automated as code to stand up or tear
down cloud infrastructure..

 Moving code to the target compute environment.

 Managing the environment variables and configuring them for the target environment.

 Pushing application components to their appropriate services, such as web servers, API
services, and database services.

 Executing any steps required to restarts services or call service endpoints that are needed for
new code pushes.

 Executing continuous tests and rollback environments if tests fail.

 Providing log data and alerts on the state of the delivery.

More sophisticated CD may have other steps such as performing data synchronizations, archiving
information resources, or performing some application and library patching.

Once a CI/CD tool is selected, development teams must make sure that all environment variables are
configured outside the application. CI/CD tools allow setting these variables, masking variables such
as passwords and account keys, and configuring them at time of deployment for the target
environment.
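As a sketch of this practice, a deployment job can reference variables that are defined and masked in the CI/CD tool's settings rather than hard-coded in the repository. The variable names and commands below (GitLab CI/CD syntax) are illustrative assumptions:

deploy-production:
  stage: deploy
  environment: production
  script:
    # $PRODUCTION_HOST and $PRODUCTION_API_KEY are defined (and masked) in the
    # CI/CD tool's settings, not stored in the repository
    - ./deploy.sh --host "$PRODUCTION_HOST" --api-key "$PRODUCTION_API_KEY"
  only:
    - master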

Many teams implementing CI/CD pipelines on cloud environments also use containers such as
Docker and Kubernetes. Containers allow packaging and shipping applications in standard, portable
ways. The containers can then be used to scale up or tear down environments that have variable
workloads.

CD tools also provide dashboard and reporting functions. If builds or deliveries fail, they alert
developers with information on the failed builds. They integrate with version control and agile tools,
so they can be used to look up what code changes and user stories made up a build.

CI/CD enables more frequent code deployments


To recap, CI packages and tests software builds and alerts developers if their changes failed any unit
tests. CD is the automation that delivers changes to infrastructure and executes additional tests.

CI/CD pipelines are designed for businesses that want to improve applications frequently and require
a reliable delivery process. The added effort to standardize builds, develop tests, and automate
deployments is the manufacturing process for deploying code changes. Once in place, it enables
teams to focus on the process of enhancing applications and less on the system details of delivering it
to computing environments.
CI/CD is a devops best practice because it addresses the misalignment between developers who want to push changes frequently and operations teams that want stable applications. With automation in place,
developers can push changes more frequently. Operations teams see greater stability because
environments have standard configurations, there is continuous testing in the delivery process,
environment variables are separated from the application, and rollback procedures are automated.

Getting started with CI/CD requires development teams and operational teams to collaborate on
technologies, practices, and priorities. Teams need to develop consensus on the right approaches for
their business and technologies so that once CI/CD is in place the team is onboard with following
practices consistently.

CI/CD TOOLS

Jenkins vs Travis - Comparing Two Popular CI Tools


Continuous integration is now common practice, but not all CI tools are created equally. To be more
precise, certain CI tools are better suited for certain types of projects. This guide will compare two of
the most popular CI tools, Jenkins and Travis CI, to help you decide which one is preferable in specific
situations.

What Is Continuous Integration?


Continuous integration, or CI, is the practice of integrating individual developers’ code and making
builds several times each day. The concept was introduced over two decades ago to avoid
“integration hell,” or the inevitable onslaught of problems that occur if integration is put off until the
end of a project. Besides saving programmers from headaches, CI also saves developers time and
money. Addressing individual problems as they arise is far more efficient than putting everything off
until the last minute and trying to figure out what went wrong.

Programmers used to be solely responsible for integrating their own code, but now CI tools running
on a server handle integration automatically. Such tools can be set up to build at scheduled intervals
or as new code enters the repository. Test scripts are then automatically run to make sure everything
behaves as it should. After that, builds can be easily deployed on a testing server and made available
for demo or release. Some continuous integration tools even automatically generate documentation
to assist with quality control and release management.

How Do CI Servers Work?


Continuous integration servers can be configured to integrate every ten minutes, once a day or every
time a developer interacts with the server. Whenever changes are committed to the repository, the
CI server runs tests against the master code. If the CI tool identifies problems during integration, it
can stop the build before everything breaks and generate a report pointing out specific problems for
developers to fix. Ongoing integration and testing helps teams better gauge their progress on
meeting deadlines and thus avoid last-minute delays. The goal of continuous integration is to ensure
that all of the code is stable and ready to be deployed at all times.

Of course, different development teams have different needs, which is why there are dozens of CI
tools to choose from today. There is rarely a one-size-fits-all solution in software development. The
best CI tool for open source projects might not be ideal for enterprise software. For example, let’s
compare two CI tools intended for different types of jobs: Jenkins vs Travis CI.

What Is Jenkins?

Jenkins is a self-contained, Java-based CI tool. Jenkins CI is open source, and the software offers a lot
of flexibility in terms of when and how frequently integration occurs. Developers can also specify
conditions for customized builds. Jenkins is supported by a massive plugin archive, so developers can
alter how the software looks and operates to their liking. For example, the Jenkins Pipeline suite of
plugins comes with tools that let developers model simple-to-complex delivery pipelines as code
using the Pipeline DSL. There are also plugins that extend functionality for things like authentication
and alerts. If you want to run Jenkins in collaboration with Kubernetes and Docker, there are also
plugins for that.

While the software’s high level of customizability is seen as a benefit to many, Jenkins can take a
while to configure to your liking. Unlike tools like Travis CI that are ready to use out-of-the-box,
Jenkins can require hours or even days of set up time depending on your needs.

Jenkins Features:

 Available for Windows, Mac OS X, and other Unix-like systems

 Supported by hundreds of plugins available via the Jenkins Update Center

 Plugin architecture allows developers to add their own extensions

 Integrates with most tools in the continuous integration and delivery toolchain

 Includes various job modes

 Lets you launch builds with various conditions

 Compatible with Libvirt, Kubernetes, Docker and many other programs

 Boasts a RESTful API that is ideal for cloud-based web services

Jenkins Pros:
 Free to download and use

 Practically endless options for customization

 An ever-growing collection of plugins

Jenkins Cons:

 Requires a dedicated server, which may entail an extra expense

 Can take a while to configure and customize

What Is Travis CI?

Travis CI is another CI tool that’s free to download, but unlike Jenkins, it also comes with free
hosting. Therefore, developers don’t need to provide their own dedicated server. While Travis CI can
be used for open source projects at no cost, developers must purchase an enterprise plan for private
projects.

Since the Travis CI server resides in the cloud, it's easy to test projects in any environment, device or operating system. Such testing can be performed simultaneously on Linux and macOS machines.
Another benefit of the hosted environment is that the Travis CI community handles all server
maintenance and updates. With Jenkins, those responsibilities are left to the development team.

Of course, teams working on highly-sensitive projects may be wary of sharing everything with a third-
party, so many large corporations and government agencies would rather run continuous integration
on their own servers so that they have complete control.

Travis CI supports Docker and dozens of languages, but it pales in comparison to Jenkins when it
comes to options for customization. Travis CI also lacks the immense archive of plugins that Jenkins
boasts. Consequently, Travis CI offers less functionality, but it’s also much easier to configure; you
can have Travis CI set up and running within minutes rather than hours or days.

Another selling point of Travis CI is the build matrix feature, which allows you to accelerate the testing process by breaking it into parts. For example, you can split unit tests and integration tests into separate build jobs that run in parallel to take advantage of your account's full build capacity. For more information about the build matrix option, see the official Travis CI docs.
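As a rough sketch of the build matrix idea, a .travis.yml like the following expands into one job per combination of entries; the language versions, environment variable and test command are assumptions chosen for illustration:

language: python
python:
  - "3.6"
  - "3.7"
env:
  - TEST_SUITE=unit           # hypothetical variable selecting the test suite
  - TEST_SUITE=integration
script:
  - pytest tests/$TEST_SUITE  # 2 Python versions x 2 env entries = 4 parallel jobs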

Travis CI Features:

 Comes with free cloud-based hosting that requires no maintenance or administration

 Capable of running tests on Linux and Mac OS X simultaneously

 Supports the following languages: Android, C, C#, C++, Clojure, Crystal, D, Dart, Erlang, Elixir,
F#, Go, Groovy, Haskell, Haxe, Java, JavaScript (with Node.js), Julia, Objective-C, Perl, Perl6,
PHP, Python, R, Ruby, Rust, Scala, Smalltalk and Visual Basic
Travis CI Pros:

 Lightweight and easy to set up

 Free for open source projects

 No dedicated server needed

 Build matrix feature

Travis CI Cons:

 Enterprise plans come with a cost

 Limited options for customization

A Side-by-Side Comparison
From a cost perspective, both Jenkins and Travis CI are free to download and use for open source
projects, but Jenkins requires developers to run and maintain their own dedicated server, so that
could be considered an extra expense. If you can’t or don’t want to configure Jenkins on your own
server, there are cloud hosting services specifically for Jenkins. If you need help setting up a Jenkins
server, there are plenty of tutorials online to help you perform any type of set up you need.

Travis CI offers hosting for free, but you’ll have to pay for an enterprise plan if your project is private.
Travis CI enterprise plans start at $129 per month and go up based on the level of support you
require. Fortunately, they don’t charge per project, so if you have multiple projects that need
hosting, then you can really get your money’s worth. Travis CI’s maintenance-free hosting is a major
plus since they take care of server updates and the like. All developers must do is maintain a config
file. If you host Jenkins on your own server, then you are of course responsible for maintaining it.
Fortunately, Jenkins itself requires little maintenance, and it comes with a built-in GUI tool to
facilitate easy updates.

If you want a CI tool that you can quickly set up and begin using right away, then Travis CI won’t
disappoint. It takes very little effort to get started; just create a config file and start integrating.
Jenkins, on the other hand, requires extensive setup, so you’ll be disappointed if you were hoping to
just dive right in. How long Jenkins takes to configure will depend on the complexity of your project
and the level of customization you desire.

As far as performance goes, Jenkins and Travis CI are pretty evenly matched. Which one will work
best for your project depends on your preferences. If you’re looking for a CI tool with seemingly
unlimited customizability, and you have the time to set it up, then Jenkins will certainly meet your
expectations. If you’re working on an open source project, Travis CI may be the better fit since
hosting is free and requires minimal configuration. If you’re developing a private enterprise project,
and you already have a server for hosting, then Jenkins may be preferable. Since they are free to
download, you have nothing to lose by experimenting with both and performing your own Jenkins vs
Travis CI comparison; you may end up using both tools for different jobs.
Jenkins vs Travis CI - In Summary#
When it comes to comparing Jenkins vs Travis CI, there is no absolute “winner”. Travis CI is ideal for
open source projects that require testing in multiple environments, and Jenkins is better suited
for larger projects that require a high degree of customization.

Therefore, professional developers can benefit from familiarizing themselves with both tools. If
you’re working on an open source project with a small team, Travis CI is probably a good choice since
it’s free and easy to set up; however, if you find yourself working for a large company, you’re more
likely to work with tools like Jenkins.

Bitbucket Pipelines

Get started with Bitbucket Pipelines


In this section
 Limitations of Bitbucket Pipelines

Related content
 Configure bitbucket-pipelines.yml

 Ruby with Bitbucket Pipelines

 Troubleshooting Bitbucket Pipelines

 PHP with Bitbucket Pipelines

 Python with Bitbucket Pipelines


Bitbucket Pipelines is an integrated CI/CD service, built into Bitbucket. It allows you to automatically
build, test and even deploy your code, based on a configuration file in your repository. Essentially, we
create containers in the cloud for you. Inside these containers you can run commands (like you might
on a local machine) but with all the advantages of a fresh system, custom configured for your needs.

To set up Pipelines you need to create and configure the bitbucket-pipelines.yml file in the
root directory of your repository. This file is your build configuration, and using configuration-as-code
means it is versioned and always in sync with the rest of your code.
The bitbucket-pipelines.yml file holds all the build configurations for your repository. YAML is
a file format that is easy to read, but writing it requires care. Indenting must use spaces, as tab
characters are not allowed.

There is a lot you can configure in the bitbucket-pipelines.yml file, but at its most basic the
required keywords are:

pipelines: contains all your pipeline definitions.

default: contains the steps that run on every push.

step: each step starts a new Docker container with a clone of your repository, then runs the
contents of your script section.

script: a list of commands that are executed in sequence.
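
Put together, a minimal bitbucket-pipelines.yml using only these keywords might look roughly like this (a fuller, annotated example appears in the next section):

pipelines:
  default:
    - step:
        script:
          - echo "Hello from Pipelines"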

Configure bitbucket-pipelines.yml
On this page
 Key concepts

 Keywords

 pipelines

 default

 branches

 tags

 bookmarks

 custom

 pull-requests

 parallel

 step

 name

 image

 trigger

 deployment

 size

 script
 pipes

 after-script

 artifacts

 options

 max-time

 clone

 lfs

 depth

 definitions

 services

 caches

 Glob patterns cheat sheet

In this section
 YAML anchors

Related content
 Get started with Bitbucket Pipelines

 View your pipeline

 Troubleshooting Bitbucket Pipelines

 Pull changes from your Git repository on Bitbucket Cloud

 Maintaining a Git Repository


At the center of Pipelines is the bitbucket-pipelines.yml file. It defines all your build
configurations (pipelines) and needs to be created in the root of your repository. With 'configuration
as code', your bitbucket-pipelines.yml is versioned along with all the other files in your
repository, and can be edited in your IDE. If you've not yet created this file, you might like to read Get
started with Bitbucket Pipelines first.

YAML is a file format that is easy to read, but writing it requires care. Indenting must use spaces, as
tab characters are not allowed.
There is a lot you can configure your pipelines to do, but at its most basic the required keywords in
your YAML file are:

pipelines: marks the beginning of all your pipeline definitions.

default: contains the steps that will run on every push.

step: each step starts a new Docker container that includes a clone of your repository, and then
runs the contents of your script section inside it.

script: a list of commands that are executed in sequence.

Pipelines can contain any software language that can be run on Linux. We have some examples, but
at its most basic a bitbucket-pipelines.yml file could look like this:

pipelines:
  default:
    - step:
        script:
          - echo "I made a pipeline!"

You can then build on this using the keywords listed below.

If you have a complex configuration there are a couple of techniques that you might find useful:

 You can use pipes, which simplify common multi-step actions.

 If you have multiple steps performing similar actions, you can add YAML anchors to easily
reuse sections of your configuration (see the sketch below).
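
As a brief illustration (a sketch, assuming an npm-based build), a YAML anchor can define a step once under definitions and reuse it in several pipelines; the &build-test / *build-test names are arbitrary:

definitions:
  steps:
    - step: &build-test
        name: Build and test
        script:
          - npm install
          - npm test

pipelines:
  default:
    - step: *build-test
  branches:
    master:
      - step: *build-test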

Key concepts
A pipeline is made up of a set of steps.

 Each step in your pipeline runs a separate Docker container. If you want, you can use
different types of container for each step, by selecting different images.

 The step runs the commands you provide in the environment defined by the image.

 A single pipeline can have up to 10 steps.


A commit signals that a pipeline should run. Which pipeline runs depends on which section it's in:

 default - All commits trigger this pipeline, unless they match one of the other sections

 branches - Specify the name of a branch, or use a glob pattern.

 tags (Git only) or bookmarks (Mercurial only) - Specify the name of a tag or bookmark, or use
a glob pattern.

 custom - will only run when manually triggered.

 pull-requests - Specify the name of a branch, or use a glob pattern, and the pipeline will only
run when there is a pull request on this branch.

Illustration of how indenting creates logical 'sections'

Keywords
You can define your build pipelines by using a selection of the following keywords. They are arranged
in this table in the order in which you might use them, with highlighted rows to show keywords that
define a logical section.

Keyword - Description

pipelines - Contains all your pipeline definitions.

default - Contains the pipeline definition for all branches that don't match a pipeline definition in other sections.

branches - Contains pipeline definitions for specific branches.

tags - Contains pipeline definitions for specific Git tags and annotated tags.

bookmarks - Contains pipeline definitions for specific Mercurial bookmarks.

custom - Contains pipelines that can be triggered manually from the Bitbucket Cloud GUI.

pull-requests - Contains pipeline definitions that only run on pull requests.

parallel - Contains steps to run concurrently.

step - Defines a build execution unit. This defines the commands executed and the settings of a unique container.

name - Defines a name for a step to make it easier to see what each step is doing in the display.

image - The Docker image to use for a step. If you don't specify the image, your pipelines run in the default Bitbucket image. This can also be defined globally to use the same image type for every step.

trigger - Specifies whether the step is manual or automatic. If you don't specify a trigger type, it defaults to automatic.

deployment - Sets the type of environment for your deployment step. Valid values are: test, staging, or production.

size - Used to provision extra resources for pipelines and steps. Valid values are: 1x or 2x.

script - Contains the list of commands that are executed to perform the build.

pipe - Specifies that you'd like to use a particular pipe.

after-script - A set of commands that will run when your step succeeds or fails.

artifacts - Defines files that are produced by a step, such as reports and JAR files, that you want to share with a following step.

options - Contains global settings that apply to all your pipelines.

max-time - The maximum time (in minutes) a step can execute for. Use a whole number greater than 0 and less than 120. If you don't specify a max-time, it defaults to 120.

clone - Contains settings for when we clone your repository into a container.

lfs - Enables the download of LFS files in your clone. This defaults to false if not specified.

depth - Defines the depth of Git clones for all pipelines. Use a whole number greater than zero to specify the depth, or use full for a full clone. If you don't specify the Git clone depth, it defaults to 50. Note: this keyword is supported only for Git repositories.

definitions - Defines resources, such as services and custom caches, that you want to use elsewhere in your pipeline configurations.

services - Defines services you would like to use with your build, which are run in separate but linked containers.

caches - Defines dependencies to cache on our servers to reduce load time.

pipelines
The start of your pipelines definitions. Under this keyword you must define your build pipelines using
at least one of the following:

 default (for all branches that don't match any of the following)

 branches (Git and Mercurial)

 tags (Git)

 bookmarks (Mercurial)

image: node:10.15.0

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
  tags:                       # add the 'tags' section
    release-*:                # specify the tag
      - step:                 # define the build pipeline for the tag
          name: Build and release
          script:
            - npm install
            - npm test
            - npm run release
  branches:
    staging:
      - step:
          name: Clone
          script:
            - echo "Clone all the things!"

default
The default pipeline runs on every push to the repository, unless a branch-specific pipeline is defined.
You can define a branch pipeline in the branches section.

Note: The default pipeline doesn't run on tags or bookmarks.

branches

Defines a section for all branch-specific build pipelines. The names or expressions in this section are
matched against:

 branches in your Git repository

 named branches in your Mercurial repository

You can use glob patterns for handling the branch names.

See Branch workflows for more information about configuring pipelines to build specific branches in
your repository.
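
As a short sketch (branch names and commands are illustrative), a branch-specific section using a glob pattern could look like this:

pipelines:
  default:
    - step:
        script:
          - echo "Runs on branches with no specific pipeline"
  branches:
    feature/*:
      - step:
          script:
            - echo "Runs on any feature/<name> branch"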

tags

Defines all tag-specific build pipelines. The names or expressions in this section are matched against
tags and annotated tags in your Git repository. You can use glob patterns for handling the tag names.

bookmarks

Defines all bookmark-specific build pipelines. The names or expressions in this section are matched
against bookmarks in your Mercurial repository. You can use glob patterns for handling the bookmark
names.

image: node:10.15.0

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
  bookmarks:                  # add the 'bookmarks' section
    release-*:                # specify the bookmark
      - step:                 # define the build pipeline for the bookmark
          name: Build and release
          script:
            - npm install
            - npm test
            - npm run release
  branches:
    staging:
      - step:
          name: Clone
          script:
            - echo "Clone all the things!"

custom

Defines pipelines that can only be triggered manually or scheduled from the Bitbucket Cloud
interface.

image: node:10.15.0

pipelines:
  custom:                     # Pipelines that are triggered manually
    sonar:                    # The name that is displayed in the list in the Bitbucket Cloud GUI
      - step:
          script:
            - echo "Manual triggers for Sonar are awesome!"
    deployment-to-prod:       # Another display name
      - step:
          script:
            - echo "Manual triggers for deployments are awesome!"
  branches:                   # Pipelines that run automatically on a commit to a branch
    staging:
      - step:
          script:
            - echo "Automated pipelines are cool too."

With a configuration like the one above, you should see the following pipelines in the Run
pipeline dialog in Bitbucket Cloud:

For more information, see Run pipelines manually.

pull-requests
A special pipeline which only runs on pull requests. Pull-requests has the same level of
indentation as branches.

This type of pipeline runs a little differently to other pipelines. When it's triggered, we'll merge the
destination branch into your working branch before it runs. If the merge fails we will stop the
pipeline.

This only applies to pull requests initiated from within your repository; pull requests from a forked
repository will not trigger the pipeline.

pipelines:
  pull-requests:
    '**':                     # this runs as default for any branch not elsewhere defined
      - step:
          script:
            - ...
    feature/*:                # any branch with a feature prefix
      - step:
          script:
            - ...
  branches:                   # these will run on every push of the branch
    staging:
      - step:
          script:
            - ...

Tip: If you already have branches in your configuration and you want them all to run only on pull
requests, you can simply replace the keyword branches with pull-requests (if you already
have a pipeline for default, you will need to move it under pull-requests and change the
keyword from default to '**' so that it still runs).

Pull request pipelines run in addition to any branch and default pipelines that are defined, so if the
definitions overlap you may get 2 pipelines running at the same time!

parallel
Parallel steps enable you to build and test faster, by running a set of steps at the same time.

The total number of build minutes used by a pipeline will not change if you make the steps parallel,
but you'll be able to see the results sooner.

There is a limit of 10 for the total number of steps you can run in a pipeline, regardless of whether
they are running in parallel or serial.

Indent the steps to define which steps run concurrently:

pipelines:
  default:
    - step:                   # non-parallel step
        name: Build
        script:
          - ./build.sh
    - parallel:               # these 2 steps will run in parallel
        - step:
            name: Integration 1
            script:
              - ./integration-tests.sh --batch 1
        - step:
            name: Integration 2
            script:
              - ./integration-tests.sh --batch 2
    - step:                   # non-parallel step
        script:
          - ./deploy.sh

Learn more about parallel steps.

step

Defines a build execution unit. Steps are executed in the order that they appear in the bitbucket-
pipelines.yml file. You can use up to 10 steps in a pipeline.

Each step in your pipeline will start a separate Docker container to run the commands configured in
the script. Each step can be configured to:
 Use a different Docker image.

 Configure a custom max-time.

 Use specific caches and services.

 Produce artifacts that subsequent steps can consume.

Steps can be configured to wait for a manual trigger before running. To define a step as manual,
add trigger: manual to the step in your bitbucket-pipelines.yml file. Manual steps:

 Can only be executed in the order that they are configured. You cannot skip a manual step.

 Can only be executed if the previous step has successfully completed.

 Can only be triggered by users with write access to the repository.

 Are triggered through the Pipelines web interface.

If your build uses both manual steps and artifacts, the artifacts are stored for 7 days following the
execution of the step that produced them. After this time, the artifacts expire and any manual steps
in the pipeline can no longer be executed. For more information, see Manual steps and artifact
expiry.

Note: You can't configure the first step of the pipeline as a manual step.

name

You can add a name to a step to make displays and reports easier to read and understand.
image

Bitbucket Pipelines uses Docker containers to run your builds.

 You can use the default image (atlassian/default-image:latest) provided by


Bitbucket or define a custom image. You can specify any public or private Docker image that
isn't hosted on a private network.

 You can define images at the global or step level. You can't define an image at the branch
level.

To specify an image, use

image: <your_account/repository_details>:<tag>

For more information about using and creating images, see Use Docker images as build
environments.

Examples

image: openjdk                        Uses the image with the latest openjdk version

image: openjdk:8                      Uses the image with openjdk version 8

image: nodesource/node:iojs-2.0.2     Uses the non-official node image with version iojs-2.0.2

image: openjdk                              # this image will be used by all steps unless overridden

pipelines:
  default:
    - step:
        image: nodesource/node:iojs-2.0.2   # override the global image for this step
        script:
          - npm install
          - npm test
    - step:                                 # this step will use the global image
        script:
          - npm install
          - npm test

trigger

Specifies whether a step will run automatically or only after someone manually triggers it. You can
define the trigger type as manual or automatic. If the trigger type is not defined, the step defaults
to running automatically. The first step cannot be manual. If you want to have a whole pipeline only
run from a manual trigger then use a custom pipeline.

pipelines:
  default:
    - step:
        name: Build and test
        image: node:10.15.0
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        image: python:3.5.1
        trigger: manual
        script:
          - python deploy.py

deployment

Sets the type of environment for your deployment step, used in the Deployments dashboard.

Valid values are test, staging, or production.

The following step will display in the test environment in the Deployments view:

- step:
    name: Deploy to test
    image: aws-cli:1.0
    deployment: test
    script:
      - python deploy.py test

size

You can allocate additional resources to a step, or to the whole pipeline. By specifying the size of 2x,
you'll have double the resources available (eg. 4GB memory → 8GB memory).

At this time, valid sizes are 1x and 2x.

2x pipelines will use twice the number of build minutes.

Overriding the size of a single step


pipelines:
  default:
    - step:
        script:
          - echo "All good things..."
    - step:
        size: 2x              # Double resources available for this step.
        script:
          - echo "Come to those who wait."

Increasing the resources for an entire pipeline


Using the global size, all steps will inherit the '2x' size.

options:
  size: 2x

pipelines:
  default:
    - step:
        name: Step with more memory
        script:
          - echo "I've got double the memory to play with!"

script

Contains a list of commands that are executed in sequence. Scripts are executed in the order in
which they appear in a step. We recommend that you move large scripts to a separate script file and
call it from the bitbucket-pipelines.yml.
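
For instance (a sketch, assuming a build.sh script committed at the root of the repository), a step can simply invoke the external script:

pipelines:
  default:
    - step:
        script:
          - chmod +x build.sh
          - ./build.sh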

pipes

We are gradually rolling out this feature, so if you don't see pipes in your editor yet, you can edit the
configuration directly, or join our alpha group which has full access.

Pipes make complex tasks easier, by doing a lot of the work behind the scenes. This means you can
just select which pipe you want to use, and supply the necessary variables. You can look at the
repository for the pipe to see what commands it is running. Learn more about pipes.

A pipe to send a message to Opsgenie might look like:


pipelines:
  default:
    - step:
        name: Alert Opsgenie
        script:
          - pipe: atlassian/opsgenie-send-alert:0.2.0
            variables:
              GENIE_KEY: $GENIE_KEY
              MESSAGE: "Danger, Will Robinson!"
              DESCRIPTION: "An Opsgenie alert sent from Bitbucket Pipelines"
              SOURCE: "Bitbucket Pipelines"
              PRIORITY: "P1"

after-script

Commands inside an after-script section will run when the step succeeds or fails. This could be useful
for clean up commands, test coverage, notifications, or rollbacks you might want to run, especially if
your after-script uses the value of BITBUCKET_EXIT_CODE.

Note: If any commands in the after-script section fail:

 we won't run any more commands in that section

 it will not affect the reported status of the step.

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
        after-script:
          - echo "after script has run!"


artifacts

Defines files to be shared from one step to a later step in your pipeline. Artifacts can be defined
using glob patterns.

An example showing how to define artifacts:

pipelines:
  default:
    - step:
        name: Build and test
        image: node:10.15.0
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy to production
        image: python:3.5.1
        script:
          - python deploy-to-production.py

For more information, see using artifacts in steps

options

Contains global settings that apply to all your pipelines. Currently the only option to define is max-
time.

max-time

You can define the maximum time a step can execute for (in minutes) at the global level or step level.
Use a whole number greater than 0 and less than 120.

If you don't specify a max-time, it defaults to 120.


options:
  max-time: 60

pipelines:
  default:
    - step:
        name: Sleeping step
        script:
          - sleep 120m        # This step will timeout after 60 minutes
    - step:
        name: quick step
        max-time: 5
        script:
          - sleep 120m        # this step will timeout after 5 minutes

clone

Contains settings for when we clone your repository into a container. Settings here include:

 lfs - Support for Git lfs

 depth - the depth of the Git clone.

lfs

A global setting that specifies that Git LFS files should be downloaded with the clone.

Note: This keyword is supported only for Git repositories.

clone:
  lfs: true

pipelines:
  default:
    - step:
        name: Clone and download
        script:
          - echo "Clone and download my LFS files!"

depth

This global setting defines how many commits we clone into the pipeline container. Use a whole
number greater than zero, or use full if you want to clone everything (which will have a speed
impact).

If you don't specify the Git clone depth, it defaults to the last 50 commits, to try and balance the time
it takes to clone against how many commits you might need.

Note: This keyword is supported only for Git repositories.

clone:
  depth: 5                    # include the last five commits

pipelines:
  default:
    - step:
        name: Cloning
        script:
          - echo "Clone all the things!"

definitions

Define resources used elsewhere in your pipeline configuration. Resources can include:

 services that run in separate Docker containers – see Use services and databases in Bitbucket
Pipelines.

 caches – see Caching dependencies.

 YAML anchors - a way to define a chunk of your yaml for easy re-use - see YAML anchors.

services

Rather than trying to build all the resources you might need into one large image, we can spin up
separate docker containers for services. This will tend to speed up the build, and makes it very easy
to change a single service without having to redo your whole image.
So if we want a redis service container we could add:

definitions:
  services:
    redis:
      image: redis

caches

Re-downloading dependencies from the internet for each step of a build can take a lot of time. With
a cache, they are downloaded once to our servers and then loaded locally into the build each time.

An example showing how to define a custom bundler cache:

definitions:
  caches:
    bundler: vendor/bundle
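
Once defined, these resources are referenced by name from a step. A quick sketch (the bundle install command and the redis service are just examples) might look like:

definitions:
  services:
    redis:
      image: redis
  caches:
    bundler: vendor/bundle

pipelines:
  default:
    - step:
        caches:
          - bundler
        services:
          - redis
        script:
          - bundle install --path vendor/bundle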

Glob patterns cheat sheet


Glob patterns don't allow any expression to start with a star. Every expression that starts with a star
needs to be put in quotes.

feature/*
    Matches feature/<any_branch>. The glob pattern doesn't match the slash (/), so Git branches
    like feature/<any_branch>/<my_branch> are not matched by feature/*.

feature/bb-123-fix-links
    If you specify the exact name of a branch, a tag, or a bookmark, the pipeline defined for that
    specific branch overrides any more generic expressions that would match it. For example, say
    you specified a pipeline for feature/* and one for feature/bb-123-fix-links. On a commit to the
    feature/bb-123-fix-links branch, Pipelines will execute the steps defined for
    feature/bb-123-fix-links and will not execute the steps defined for feature/*.

'*'
    Matches all branches, tags, or bookmarks. The star symbol (*) must be in single quotes. This
    glob pattern doesn't match the slash (/), so Git branches like feature/bb-123-fix-links are not
    matched by '*'. If you need the slash to match, use '**' instead of '*'.

'**'
    Matches all branches, tags, or bookmarks, including those with a slash (/) in the name, such as
    feature/bb-123-fix-links. The ** expression must be in quotes.

'*/feature'
    This expression requires quotes.

'master' and other duplicate branch names
    Names in quotes are treated the same way as names without quotes. For example, Pipelines
    sees master and 'master' as the same branch name. In that situation, Pipelines will match only
    against one of the names (master or 'master', never both). We recommend that you avoid
    duplicate names in your bitbucket-pipelines.yml file.

Step 1: Choose a language template


Pipelines can build and test anything that can run on Linux as we run your builds in Docker
containers. We've provided templates in Bitbucket to help you get started with popular languages
and platforms: PHP, JavaScript/Node.js (npm), Java (Maven/Gradle), Python, Ruby, C#, C++ (make)
and more.
1. Select Pipelines from the left hand navigation bar.

2. Select the language template you'd like to use.

Don't see your language? Don't worry, there is always the Other option in the More menu if you
can't see what you need. This uses our default Docker image that contains many popular build tools,
and you can add your own using apt-get commands in your script.

Step 2: Configure your pipelines


You'll need to edit the bitbucket-pipelines.yml file to include the commands required to build
your software.
1. Use the editor to customize the file to suit your needs. There are some basic examples
below.

Browse some example configurations

2. Select Commit file when you are happy with your edit and ready to run your first Pipeline.

You've now created your default pipeline. Wahoo!

Pipelines will now automatically trigger whenever you push changes to your repository, running the
default pipeline.

To get the most out of pipelines, you can add more to the bitbucket-pipelines.yml file. For
example, you can define which Docker image you'd like to use for your build, create build
configurations for specific branches, tags, and bookmarks, make sure any test reports are displayed, or
define which artifacts you'd like to pass between steps.

Drone

Continuous Integration and Continuous Deployment are the next step to go through if you want a
production-grade microservice application. Let's revisit Webflix to make this point clear.
Webflix is running a whole lot of services in K8s. Each of these services is associated with some code
stored in a repository somewhere. Let's say Webflix wisely chooses to use Git to store their code and
they follow a feature branching strategy.

Branching strategies are a bit (a lot) outside the scope of this article, but basically, what this means is
that if a developer wants to make a new feature, they code up that feature on a new Git branch.

Once the developer is confident that their feature is complete, they request that their feature branch
gets merged into the master branch. Once the code is merged to the master branch, it should mean
that it is ready to be deployed into production.

If all of this sounds pretty cryptic, I would suggest you take some time to learn about Git. Git is
mighty. Long live the Git.

The process of deploying code to production is not so straightforward. First, we should make sure all
of the unit tests pass and have good coverage. Then, since we are working with microservices, there
is probably a Docker image to build and push.

Once that is done, it would be good to make sure that the Docker image actually works by doing
some tests against a live container (maybe a group of live containers). It might also be necessary to
measure the performance of the new image by running some tests with a tool like Locust. The
deployment process can get very complex if there are multiple developers working on multiple
services at the same time, since we would need to keep track of version compatibility for the various
services.

CI/CD is all about automating this sort of thing.

There are loads of CI/CD tools around and they have their own ways of configuring their pipelines (a
pipeline is a series of steps code needs to go through when it is pushed). There are many good books
(like this one) dedicated to designing deployment pipelines, but, in general, you'll want to do
something like this:

1. unit test the code

2. build the container

3. set up a test environment where the new container can run within a realistic context

4. run some integration tests

5. maybe run a few more tests, e.g., saturation tests with locust.io or similar

6. deploy to the production system

7. notify team of success/failure

If, for example, one of the test steps fails, then the code will not get deployed to production. The
pipeline will skip to the end of the process and notify the team that the deployment was a failure.
You can also set up pipelines for merge/pull requests, e.g., if a developer requests a merge, execute
the above pipeline but LEAVE OUT STEP 6 (deploying to production).

Drone.io

Drone is a container-based Continuous Delivery system. It's open source, highly configurable (every
build step is executed by a container!) and has a lot of plugins available. It's also one of the easier
CI/CD systems to learn.

Practical: Setting up Drone

In this section, we're going to set up Drone on a VM in Google Cloud and get it to play nice with
GitLab. It works fine with GitHub and other popular Git applications as well. I just like GitLab.

Now I'll be working on the assumption that you have been following along since part 1 of this series.
We already have a K8s cluster set up on Google Cloud, and it is running a deployment containing a
really simple web app. Thus far, we've been interacting with our cluster via the Google Cloud shell, so
we're going to keep doing that. If any of this stuff bothers you, please take a look at part 2.

Set up Infrastructure

The first thing we'll do is set up a VM (Google Cloud calls this a compute instance) with a static IP
address. We'll make sure that Google's firewall lets in HTTP traffic.

When working with compute instances, we need to continuously be aware of regions and zones. It's
not too complex. In general, you just want to put your compute instances close to where they will be
accessed from.

I'll be using europe-west1-d as my zone, and europe-west1 as my region. Feel free to just copy me
for this tutorial. Alternatively, take a look at Google's documentation and pick what works best
for you.

The first step is to reserve a static IP address. We have named ours drone-ip.

gcloud compute addresses create drone-ip --region europe-west1

This outputs:

Created [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/regions/europe-
west1/addresses/drone-ip].

Now take a look at it and take note of the actual IP address. We'll need it later:

gcloud compute addresses describe drone-ip --region europe-west1

This outputs something like:

address: 35.233.66.226

creationTimestamp: '2018-06-21T02:40:37.744-07:00'
description: ''

id: '431436906006760570'

kind: compute#address

name: drone-ip

region: https://www.googleapis.com/compute/v1/projects/codementor-tutorial/regions/europe-
west1

selfLink: https://www.googleapis.com/compute/v1/projects/codementor-tutorial/regions/europe-
west1/addresses/drone-ip

status: RESERVED

So the IP address that I just reserved is 35.233.66.226. Yours will be different.

Alright, now create a VM:

gcloud compute instances create drone-vm --zone=europe-west1-d

This outputs:

Created [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/zones/europe-
west1-d/instances/drone-vm].

NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS

drone-vm europe-west1-d n1-standard-1 10.132.0.6 35.195.196.332 RUNNING

Alright, now we have a VM and a static IP. We need to tie them together:

First let's look at the existing configuration for our VM:

gcloud compute instances describe drone-vm --zone=europe-west1-d

This outputs a whole lot of stuff. Most importantly:

networkInterfaces:

- accessConfigs:

- kind: compute#accessConfig

name: external-nat

natIP: 35.195.196.332

type: ONE_TO_ONE_NAT

A VM can have, at most, one accessConfig. We'll need to delete the existing one and replace it
with a static IP address config. First, we delete it:
gcloud compute instances delete-access-config drone-vm \

--access-config-name "external-nat" --zone=europe-west1-d

This outputs:

Updated [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/zones/europe-
west1-d/instances/drone-vm].

Now we add new network configuration:

gcloud compute instances add-access-config drone-vm \

--access-config-name "drone-access-config" --address 35.233.66.226 --zone=europe-west1-d

This outputs:

Updated [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/zones/europe-
west1-d/instances/drone-vm].

And now we need to configure the firewall to allow HTTP traffic. Google's firewall rules can be added
and removed from specific instances through use of tags.

gcloud compute instances add-tags drone-vm --tags http-server --zone=europe-west1-d

This outputs:

Updated [https://www.googleapis.com/compute/v1/projects/codementor-tutorial/zones/europe-
west1-d/instances/drone-vm].

If you wanted to allow HTTPS traffic, there is a tag for that too, but setting up HTTPS is a bit outside
the scope of this article.

Awesome! Now we have a VM with a static IP address and it can talk to the outside world via HTTP.

Install Prerequisites

In order to get Drone to run, we need to install Docker and Docker-Compose. Let's do that now:

SSH into our VM from your Google Cloud shell like so:

export PROJECT_ID="$(gcloud config get-value project -q)"

gcloud compute --project ${PROJECT_ID} ssh --zone "europe-west1-d" "drone-vm"

When it asks for passphrases, you can leave them blank for the purpose of this tutorial. That said, it's
not really good practice.

Okay, now you have a shell into your new VM. Brilliant.

Now enter:
uname -a

This will output something like

Linux drone-vm 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux

Next up, we install Docker and docker-compose.
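
The original commands for this step aren't reproduced here; as a rough sketch, on a Debian-based VM like this one you could install both with something along these lines (check the official Docker and Docker Compose documentation for current instructions and versions):

# install Docker using Docker's convenience script
curl -fsSL https://get.docker.com | sudo sh

# allow the current user to run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# install a docker-compose binary (1.24.1 is just an example version)
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose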

Generate Gitlab oAuth credentials

In GitLab, go to your user settings and click on applications. You want to create a new application.
Enter drone as the name. As a callback URL, use http://35.233.66.226/authorize. The IP address there
is the static IP address we just generated.

GitLab will now output an application ID and secret and some other stuff. Take note of these values,
Drone is going to need them.

Create a repo on GitLab

You'll also need to create a Git repo that you own. Thus far, we've been
using https://gitlab.com/sheena.oconnell/tutorial-codementor-deploying-microservices.git. You
would want something like https://gitlab.com/${YOUR_GITLAB_USER}/tutorial-codementor-
deploying-microservices.git to exist. Don't put anything inside your repo just yet, we'll get to that a
bit later.

Configure Drone

You can refer to the Drone documentation for full instructions.

First, we'll need to set up some environmental variables. Let's make a new file
called .drone_secrets.sh

nano .drone_secrets.sh

This opens an editor. Paste the following into the editor

#!/bin/sh

export DRONE_HOST=http://35.233.66.226

export DRONE_SECRET="some random string of your choosing"

export DRONE_ADMIN="sheena.oconnell"

export DRONE_GITLAB_CLIENT=<client>

export DRONE_GITLAB_SECRET=<secret>

You'll need to update it a little bit:


DRONE_HOST: this should contain your static IP address.
DRONE_SECRET: any random string will work. Just make something up or use a random password
generator like this one
DRONE_ADMIN: your GitLab username. Mine is sheena.oconnell.
DRONE_GITLAB_CLIENT: copy your GitLab application ID here
DRONE_GITLAB_SECRET: copy your GitLab client secret here

Once you have finished editing the file, press: Ctrl+x then y then enter to save and exit.

Now make it executable:

chmod +x .drone_secrets.sh

And load the secrets into your environment

source .drone_secrets.sh

Cool, now let's make another file:

nano docker-compose.yml

Paste in the following:

version: '2'

services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 80:8000
      - 9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_HOST=${DRONE_HOST}
      - DRONE_SECRET=${DRONE_SECRET}
      - DRONE_ADMIN=${DRONE_ADMIN}
      - DRONE_GITLAB=true
      - DRONE_GITLAB_CLIENT=${DRONE_GITLAB_CLIENT}
      - DRONE_GITLAB_SECRET=${DRONE_GITLAB_SECRET}

  drone-agent:
    image: drone/agent:0.8
    command: agent
    restart: always
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=${DRONE_SECRET}

Now this compose file requires a few environmental settings to be available. Luckily, we've already
set those up. Save and exit just like before.

Now run

docker-compose up

There will be a whole lot of output. Open a new browser window and navigate to your Drone host. In
my case, that is: http://35.233.66.226. You will be redirected to an oAuth authorization page on
GitLab. Choose to authorize access to your account. You will then be redirected back to your Drone
instance. After a little while, you will see a list of your repos.

Each repo will have a toggle button on the right of the page. Toggle whichever one(s) you want to set
up CI/CD for. If you have been following along, there should be a repo called ${YOUR_GITLAB_USER}/
tutorial-codementor-deploying-microservices. Go ahead and activate that one.

Recap

Alright! So far so good. We've got Drone.ci all set up and talking to GitLab.

Practical: Giving Drone access to Google Cloud


Before we get into the meat of actually running a pipeline with Drone, we'll need a way for Drone to
authenticate with our Google project. Before, we were just interacting as ourselves via
the gcloud tooling built into the Google Cloud shell (or installed locally if you wanted to do things
that way). We want Drone to have a subset of our user rights.

We start off by creating a service account. This is sort of like a user. Like users, service accounts have
credentials and rights and they can authenticate with Google Cloud. To learn all about service
accounts, you can refer to Google's official docs.

Open up another Google Cloud shell and do the following:

gcloud iam service-accounts create drone-sa \

--display-name "drone-sa"

This outputs:

Created service account [drone-sa].

Now we want to give that service account permissions. It will need to push images to the Google
Cloud container registry (which is based on Google Storage), and it will need to roll out upgrades to
our application deployment.

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member serviceAccount:drone-sa@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.admin

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member serviceAccount:drone-sa@${PROJECT_ID}.iam.gserviceaccount.com --role roles/container.developer

These commands output stuff like:

bindings:

- members:

- serviceAccount:service-241386104325@compute-system.iam.gserviceaccount.com

role: roles/compute.serviceAgent

- members:

- serviceAccount:drone-sa@codementor-tutorial.iam.gserviceaccount.com

role: roles/container.developer

- members:
- serviceAccount:service-241386104325@container-engine-robot.iam.gserviceaccount.com

role: roles/container.serviceAgent

- members:

- serviceAccount:241386104325-compute@developer.gserviceaccount.com

- serviceAccount:241386104325@cloudservices.gserviceaccount.com

- serviceAccount:service-241386104325@containerregistry.iam.gserviceaccount.com

role: roles/editor

- members:

- user:yourname@gmail.com

role: roles/owner

- members:

- serviceAccount:drone-sa@codementor-tutorial.iam.gserviceaccount.com

role: roles/storage.admin

etag: BwVvOooDQaI=

version: 1

If you wanted to give your Drone instance access to other Google Cloud functionality (for example, if
you needed it to interact with App Engine), you can get a full list of available roles like so:

gcloud iam roles list

Now we create some credentials for our service account. Any device with this key file will have all of
the rights given to our service account. You can invalidate key files at any time. We are going to name
our key key.json

gcloud iam service-accounts keys create ~/key.json \

--iam-account drone-sa@${PROJECT_ID}.iam.gserviceaccount.com

This outputs:

created key [7ce29ec3d260c55c5ff1b32aad40a331f15edc63] of type [json] as


[/home/sheenaprelude/key.json] for [drone-sa@codementor-tutorial.iam.gserviceaccount.com]

Now we need to make the key available to Drone. We'll do this by using Drone's front-end. Point
your browser at the Drone front-end (in my case http://35.233.66.226). Navigate to the repository
that you want to deploy. Click on the menu button on the top right of the screen and select secrets.

Now create a new secret called GOOGLE_CREDENTIALS


Now in your Google Cloud shell:

cat key.json

The output should be something like this:

"type": "service_account",

"project_id": "codementor-tutorial",

"private_key_id": "111111111111111111111111",

"private_key": "-----BEGIN PRIVATE KEY-----\n lots and lots of stuff =\n-----END PRIVATE KEY-----\n",

"client_email": "drone-sa@codementor-tutorial.iam.gserviceaccount.com",

"client_id": "xxxxxxxxxxxxxxxx",

"auth_uri": "https://accounts.google.com/o/oauth2/auth",

"token_uri": "https://accounts.google.com/o/oauth2/token",

"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",

"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/drone-sa
%40codementor-tutorial.iam.gserviceaccount.com"

Copy it and paste it into the secret value field and click on save.

Practical: Finally, a pipeline!!

Now Drone has access to our Google Cloud resources (although we still need to tell it how to access
the key file), and it knows about our repo. Now we need to tell Drone what exactly we need done
when we push code to our project. We do this by specifying a pipeline in a file named .drone.yml in
the root of our Git repo. .drone.yml is written in YAML format. Here is a cheat-sheet that I've found
quite useful.

It's time to put something in your tutorial-codementor-deploying-microservices repo. We'll just copy
over everything from the repo created for this series of articles. In a terminal somewhere (your local
computer, Google Cloud shell, or wherever):

# clone the repo if you haven't already and cd in
git clone https://gitlab.com/sheena.oconnell/tutorial-codementor-deploying-microservices.git
cd tutorial-codementor-deploying-microservices

# set the git repo origin to your very own repo
git remote remove origin
git remote add origin https://gitlab.com/${YOUR_GITLAB_USER}/tutorial-codementor-deploying-microservices.git

# and push your changes. This will trigger the deployment pipeline already specified by my .drone.yml
git checkout master
git push --set-upstream origin master

This will kick off our pipeline. You can watch it happen in the Drone web front-end.

Here is our full pipeline specification:

pipeline:
  unit-test:
    image: python:3
    commands:
      - pip install -r requirements.txt
      - python -m pytest

  gcr:
    image: plugins/gcr
    registry: eu.gcr.io
    repo: codementor-tutorial/codementor-tutorial
    tags: ["commit_${DRONE_COMMIT}", "build_${DRONE_BUILD_NUMBER}", "latest"]
    secrets: [GOOGLE_CREDENTIALS]
    when:
      branch: master

  deploy:
    image: google/cloud-sdk:latest
    environment:
      PROJECT_ID: codementor-tutorial
      COMPUTE_ZONE: europe-west1-d
      CLUSTER_NAME: hello-codementor
    secrets: [GOOGLE_CREDENTIALS]
    commands:
      - yes | apt-get install python3
      - python3 generate_key.py key.json
      - gcloud config set project $PROJECT_ID
      - gcloud config set compute/zone $COMPUTE_ZONE
      - gcloud auth activate-service-account --key-file key.json
      - gcloud container clusters get-credentials $CLUSTER_NAME
      - kubectl set image deployment/codementor-tutorial codementor-tutorial=eu.gcr.io/${PROJECT_ID}/app:${DRONE_BUILD_NUMBER}
    when:
      branch: master

As pipelines go, it's quite a small one. We have specified three steps: unit-test, gcr, and deploy. It
helps to keep Docker-compose in mind when working with Drone. Each step is run as a Docker
container. So each step is based on a Docker image. For the most part, you get to specify exactly
what happens on those containers through use of commands.

Let's start from the top.

unit-test:
  image: python:3
  commands:
    - pip install -r requirements.txt
    - python -m pytest
This step is fairly straightforward. Whenever any changes are made to the repo (on any branch) then
the unit tests are run. If the tests pass, Drone will proceed to the next step. In our case, all of the rest
of the steps only happen on the master branch, so if you are in a feature branch, the only thing this
pipeline will do is run unit tests.

gcr:
  image: plugins/gcr
  registry: eu.gcr.io
  repo: codementor-tutorial/codementor-tutorial
  tags: ["commit_${DRONE_COMMIT}", "build_${DRONE_BUILD_NUMBER}", "latest"]
  secrets: [GOOGLE_CREDENTIALS]
  when:
    branch: master

The gcr step is all about building our application Docker image and pushing it into the Google Cloud
Registry (GCR). It is a special kind of step as it is based on a plugin. We won't go into detail on how
plugins work here. Just think of it as an image that takes in special parameters. This one is configured
to push images to eu.gcr.io/codementor-tutorial/codementor-tutorial.

The tags argument contains a list of tags to be applied. Here, we make use of some variables supplied
by Drone. DRONE_COMMIT is the Git commit hash. Each build of each repo is numbered, so we use
that as a tag too. Drone supplies a whole lot of variables, take a look here for a nice list.

The next thing is secrets. Remember that secret we copy-pasted into Drone just a few minutes ago?
Its name was GOOGLE_CREDENTIALS. This line makes sure that the contents of that secret are
available to the step's container in the form of an environmental variable
named GOOGLE_CREDENTIALS.

The last step is a little more complex:

deploy:
  image: google/cloud-sdk:latest
  environment:
    PROJECT_ID: codementor-tutorial
    COMPUTE_ZONE: europe-west1-d
    CLUSTER_NAME: hello-codementor
  secrets: [GOOGLE_CREDENTIALS]
  commands:
    - yes | apt-get install python3
    - python3 generate_key.py key.json
    - gcloud config set project $PROJECT_ID
    - gcloud config set compute/zone $COMPUTE_ZONE
    - gcloud auth activate-service-account --key-file key.json
    - gcloud container clusters get-credentials $CLUSTER_NAME
    - kubectl set image deployment/codementor-tutorial codementor-tutorial=eu.gcr.io/${PROJECT_ID}/app:${DRONE_BUILD_NUMBER}
  when:
    branch: master

Here our base image is supplied by Google. It gives us gcloud and a few bells and whistles.

environment lets us set up environmental variables that will be accessible in the running container,
and the secrets work as before.

Now we have a bunch of commands. These execute in order and you should recognize most of it. The
only really strange part is how we authenticate as our service account (drone-sa). The line that does
the actual authentication is gcloud auth activate-service-account --key-file key.json. It requires a key
file. Now, ideally we would just do something like this:

- echo $GOOGLE_CREDENTIALS > key.json

- gcloud auth activate-service-account --key-file key.json

But, unfortunately, Drone completely mangles the whitespace of our secret.


Thus generate_key.py exists to de-mangle the key so it is actually useful (gee, thanks Drone!). And of
course Python needs to be available so we can run that script. Thus yes | apt-get install python3.

Now that everything is set up, if you make a change to your code and push it to master, you will be
able to watch the pipeline get executed by keeping an eye on the Drone front-end.

Once the pipeline is complete, you will be able to make sure that your deployment is updated by
taking a look at the pods on the gcloud command line:

kubectl get pods

Outputs:

NAME READY STATUS RESTARTS AGE

codementor-tutorial-77d67cbfb8-flfxl 1/1 Running 0 49s

codementor-tutorial-77d67cbfb8-gfl7z 1/1 Running 0 49s


codementor-tutorial-77d67cbfb8-p9bcd 1/1 Running 0 40s

Now pick one and describe it:

kubectl describe pod codementor-tutorial-77d67cbfb8-flfxl

We'll get a whole lot of output here. The part that is interesting to us is:

Containers:

codementor-tutorial:

Image: eu.gcr.io/codementor-tutorial/codementor-
tutorial:commit_cb5d5ca61661954d7d139b2a1d60060cba5c4f2f

Now, if you were to check your Git log, the last commit to master that you pushed would have the
commit SHA cb5d5ca61661954d7d139b2a1d60060cba5c4f2f. Isn't that neat?

Conclusion

Wow, we made it! If you've worked through all of the practical examples, you've accomplished a lot.

You are now acquainted with Docker — you built an image and instantiated a container for that
image. Then, you got your images running on a Kubernetes cluster that you set up yourself. You then
manually scaled and rolled out updates to your application.

In this part, you got a simple CI/CD pipeline up and running from scratch by provisioning a VM,
installing Drone and its prerequisites, and getting it to play nice with GitLab and Google Kubernetes
Engine.

PS - Cleanup (IMPORTANT!)

Clusters cost money, so it's best to shut yours down if you aren't using it. Go back to the Google
Cloud Shell and do the following:

kubectl delete service hello-codementor-service

## now we need to wait a bit for Google to delete some forwarding rules for us. Keep an eye on them
by executing this command:

gcloud compute forwarding-rules list

## once the forwarding rules are deleted then it is safe to delete the cluster:

gcloud container clusters delete hello-codementor


## and delete the drone VM and its IP address

yes | gcloud compute instances delete drone-vm --zone=europe-west1-d

yes | gcloud compute addresses delete drone-ip --region=europe-west1

This post contains affiliate links to books that I really enjoy, which means I may receive a commission
if you purchase something through these links.

Circle CI
Travis CI and CircleCI are almost the same

Both of them:

 Use a YAML file for configuration

 Are cloud-based

 Support Docker for running tests
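
For comparison, a minimal CircleCI configuration (.circleci/config.yml) might look roughly like this; it's a sketch, and the image and commands are illustrative (a Travis CI example follows in the Build matrix section below):

version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      - run: pip install -r requirements.txt
      - run: python -m pytest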

What does TravisCI offer that CircleCI doesn’t?

 Option to run tests on Linux and Mac OS X at the same time

 Supports more languages out of the box:

Android, C, C#, C++, Clojure, Crystal, D, Dart, Erlang, Elixir, F#, Go, Groovy, Haskell, Haxe, Java,
JavaScript (with Node.js), Julia, Objective-C, Perl, Perl6, PHP, Python, R, Ruby, Rust, Scala, Smalltalk,
Visual Basic

 Support of build matrix

Build matrix

language: python
python:
  - "2.7"
  - "3.4"
  - "3.5"
env:
  - DJANGO='django>=1.8,<1.9'
  - DJANGO='django>=1.9,<1.10'
  - DJANGO='django>=1.10,<1.11'
  - DJANGO='https://github.com/django/django/archive/master.tar.gz'
matrix:
  allow_failures:
    - env: DJANGO='https://github.com/django/django/archive/master.tar.gz'

A build matrix is a tool that lets you run tests against different versions of languages and packages.
You can customize it in various ways. For example, failures in some environments can trigger
notifications without failing the whole build (that's helpful for development versions of packages).

Jenkins / Blue Ocean

Create a Pipeline in Blue Ocean


Table of Contents

 Prerequisites

 Run Jenkins in Docker

o On macOS and Linux

o On Windows

o Accessing the Jenkins/Blue Ocean Docker container

o Setup wizard

o Stopping and restarting Jenkins

 Fork the sample repository on GitHub

 Create your Pipeline project in Blue Ocean

 Create your initial Pipeline

 Add a test stage to your Pipeline

 Add a final deliver stage to your Pipeline

 Follow up (optional)

 Wrapping up

This tutorial shows you how to use the Blue Ocean feature of Jenkins to create a Pipeline that will
orchestrate building a simple application.

Before starting this tutorial, it is recommended that you run through at least one of the initial set of
tutorials from the Tutorials overview page first to familiarize yourself with CI/CD concepts (relevant
to a technology stack you’re most familiar with) and how these concepts are implemented in Jenkins.
This tutorial uses the same application that the Build a Node.js and React app with npm tutorial is
based on. Therefore, you’ll be building the same application although this time, completely through
Blue Ocean. Since Blue Ocean provides a simplified Git-handling experience, you’ll be interacting
directly with the repository on GitHub (as opposed to a local clone of this repository).

Duration: This tutorial takes 20-40 minutes to complete (assuming you’ve already met
the prerequisites below). The exact duration will depend on the speed of your machine and whether
or not you’ve already run Jenkins in Docker from another tutorial.

You can stop this tutorial at any point in time and continue from where you left off.

If you’ve already run through another tutorial, you can skip the Prerequisites and Run Jenkins in
Docker sections below and proceed on to forking the sample repository. If you need to restart
Jenkins, simply follow the restart instructions in Stopping and restarting Jenkins and then proceed
on.

Prerequisites

For this tutorial, you will require:

 A macOS, Linux or Windows machine with:

o 256 MB of RAM, although more than 512MB is recommended.

o 10 GB of drive space for Jenkins and your Docker images and containers.

 The following software installed:

o Docker - Read more about installing Docker in the Installing Docker section of
the Installing Jenkins page.
Note: If you use Linux, this tutorial assumes that you are not running Docker
commands as the root user, but instead with a single user account that also has
access to the other tools used throughout this tutorial.

Run Jenkins in Docker

In this tutorial, you’ll be running Jenkins as a Docker container from


the jenkinsci/blueocean Docker image.

To run Jenkins in Docker, follow the relevant instructions below for either macOS and
Linux or Windows.

You can read more about Docker container and image concepts in the Docker and Downloading and
running Jenkins in Docker sections of the Installing Jenkins page.

On macOS and Linux


1. Open up a terminal window.
2. Run the jenkinsci/blueocean image as a container in Docker using the following docker
run command (bearing in mind that this command automatically downloads the image if this
hasn’t been done):

docker run \
  --rm \
  -u root \
  -p 8080:8080 \
  -v jenkins-data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$HOME":/home \
  jenkinsci/blueocean

Maps the /var/jenkins_home directory in the container to the Docker volume with the
name jenkins-data. If this volume does not exist, then this docker run command will
automatically create the volume for you.

Maps the $HOME directory on the host (i.e. your local) machine (usually
the /Users/<your-username> directory) to the /home directory in the container.


3. Proceed to the Setup wizard.

On Windows
1. Open up a command prompt window.
2. Run the jenkinsci/blueocean image as a container in Docker using the following docker
run command (bearing in mind that this command automatically downloads the image if this
hasn’t been done):

docker run ^
  --rm ^
  -u root ^
  -p 8080:8080 ^
  -v jenkins-data:/var/jenkins_home ^
  -v /var/run/docker.sock:/var/run/docker.sock ^
  -v "%HOMEPATH%":/home ^
  jenkinsci/blueocean

For an explanation of these options, refer to the macOS and Linux instructions above.

3. Proceed to the Setup wizard.

Accessing the Jenkins/Blue Ocean Docker container


If you have some experience with Docker and you wish or need to access the Jenkins/Blue Ocean
Docker container through a terminal/command prompt using the docker exec command, you can
add an option like --name jenkins-tutorials (with the docker run above), which would give
the Jenkins/Blue Ocean Docker container the name "jenkins-tutorials".
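
For example, on macOS or Linux the docker run command shown earlier would then look like the following (a sketch; the only difference from the command above is the added --name option):

docker run \
  --rm \
  -u root \
  --name jenkins-tutorials \
  -p 8080:8080 \
  -v jenkins-data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$HOME":/home \
  jenkinsci/blueocean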

This means you could access the Jenkins/Blue Ocean container (through a separate
terminal/command prompt window) with a docker exec command like:

docker exec -it jenkins-tutorials bash

Setup wizard
Before you can access Jenkins, there are a few quick "one-off" steps you’ll need to perform.

Unlocking Jenkins
When you first access a new Jenkins instance, you are asked to unlock it using an automatically-
generated password.

1. After the 2 sets of asterisks appear in the terminal/command prompt window, browse
to http://localhost:8080 and wait until the Unlock Jenkins page appears.
2. From your terminal/command prompt window again, copy the automatically-generated
alphanumeric password (between the 2 sets of asterisks).
3. On the Unlock Jenkins page, paste this password into the Administrator password field and
click Continue.

Customizing Jenkins with plugins


After unlocking Jenkins, the Customize Jenkins page appears.
On this page, click Install suggested plugins.

The setup wizard shows the progression of Jenkins being configured and the suggested plugins being
installed. This process may take a few minutes.

Creating the first administrator user


Finally, Jenkins asks you to create your first administrator user.

1. When the Create First Admin User page appears, specify your details in the respective fields
and click Save and Finish.

2. When the Jenkins is ready page appears, click Start using Jenkins.
Notes:

o This page may indicate Jenkins is almost ready! instead and if so, click Restart.

o If the page doesn’t automatically refresh after a minute, use your web browser to
refresh the page manually.

3. If required, log in to Jenkins with the credentials of the user you just created and you’re
ready to start using Jenkins!

Stopping and restarting Jenkins


Throughout the remainder of this tutorial, you can stop the Jenkins/Blue Ocean Docker container by
typing Ctrl-C in the terminal/command prompt window from which you ran the docker run
… command above.

To restart the Jenkins/Blue Ocean Docker container:

1. Run the same docker run … command you ran for macOS, Linux or Windows above.
Note: This process also updates the jenkinsci/blueocean Docker image, if an updated
one is available.

2. Browse to http://localhost:8080.

3. Wait until the log in page appears and log in.

Fork the sample repository on GitHub

Fork the simple "Welcome to React" Node.js and React application on GitHub into your own GitHub
account.

1. Ensure you are signed in to your GitHub account. If you don’t yet have a GitHub account, sign
up for a free one on the GitHub website.

2. Fork the creating-a-pipeline-in-blue-ocean repository on GitHub into your own GitHub
account. If you need help with this process, refer to the Fork A Repo documentation on the
GitHub website for more information.
Note: This is a different repository to the one used in the Build a Node.js and React app with
npm tutorial. Although these repositories contain the same application code, ensure you fork
and use the correct one before continuing on.

Create your Pipeline project in Blue Ocean

1. Go back to Jenkins and ensure you have accessed the Blue Ocean interface. To do this, make
sure you:

o have browsed to http://localhost:8080/blue and are logged in, or

o have browsed to http://localhost:8080/, are logged in and have clicked Open Blue Ocean on the left.

2. In the Welcome to Jenkins box at the center of the Blue Ocean interface, click Create a new
Pipeline to begin the Pipeline creation wizard.
Note: If you don’t see this box, click New Pipeline at the top right.

3. In Where do you store your code?, click GitHub.

4. In Connect to GitHub, click Create an access key here. This opens GitHub in a new browser
tab.
Note: If you previously configured Blue Ocean to connect to GitHub using a personal access
token, then Blue Ocean takes you directly to step 9 below.

5. In the new tab, sign in to your GitHub account (if necessary) and on the GitHub New
Personal Access Token page, specify a brief Token description for your GitHub access token
(e.g. Blue Ocean).
Note: An access token is usually an alphanumeric string that represents your GitHub
account along with permissions to access various GitHub features and areas through your
GitHub account. This access token will have the appropriate permissions pre-selected, which
Blue Ocean requires to access and interact with your GitHub account.

6. Scroll down to the end of the page (leaving all other Select scopes options with their default
settings) and click Generate token.

7. On the resulting Personal access tokens page, copy your newly generated access token.

8. Back in Blue Ocean, paste the access token into the Your GitHub access token field and
click Connect.
Jenkins now has access to your GitHub account (provided by your access token).

9. In Which organization does the repository belong to?, click your GitHub account (where you
forked the repository above).

10. In Choose a repository, click your forked repository creating-a-pipeline-in-blue-ocean.

11. Click Create Pipeline.


Blue Ocean detects that there is no Jenkinsfile at the root level of the
repository’s master branch and proceeds to help you create one. (Therefore, you’ll need to
click another Create Pipeline button at the end of the page to proceed.)
Note: Under the hood, a Pipeline project created through Blue Ocean is actually a
"multibranch Pipeline" project. Therefore, Jenkins looks for the presence of at least one Jenkinsfile in
any branch of your repository.
Create your initial Pipeline

1. Following on from creating your Pipeline project (above), in the Pipeline editor,
select docker from the Agent dropdown in the Pipeline Settings panel on the right.
2. In the Image and Args fields that appear, specify node:6-alpine and -p
3000:3000 respectively.

Note: For an explanation of these values, refer to annotations 1 and 2 of the Declarative Pipeline in
the "Create your initial Pipeline…" section of the Build a Node.js and React app tutorial.

3. Back in the main Pipeline editor, click the + icon, which opens the new stage panel on the
right.

4. In this panel, type Build in the Name your stage field and then click the Add Step button
below, which opens the Choose step type panel.
5. In this panel, click Shell Script near the top of the list (to choose that step type), which opens
the Build / Shell Script panel, where you can enter this step’s values.

Tip: The most commonly used step types appear closest to the top of this list. To find other steps
further down this list, you can filter this list using the Find steps by name option.
6. In the Build / Shell Script panel, specify npm install.

Note: For an explanation of this step, refer to annotation 4 of the Declarative Pipeline in the Create
your initial Pipeline… section of the Build a Node.js and React app tutorial.

7. ( Optional ) Click the top-left back arrow icon to return to the main Pipeline editor.

8. Click the Save button at the top right to begin saving your new Pipeline with its "Build" stage.

9. In the Save Pipeline dialog box, specify the commit message in the Description field (e.g. Add
initial Pipeline (Jenkinsfile)).
10. Leaving all other options as is, click Save & run and Jenkins proceeds to build your Pipeline.

11. When the main Blue Ocean interface appears, click the row to see Jenkins build your Pipeline
project.
Note: You may need to wait several minutes for this first run to complete. During this time,
Jenkins does the following:

a. Commits your Pipeline as a Jenkinsfile to the only branch (i.e. master) of your
repository.

b. Initially queues the project to be built on the agent.

c. Downloads the Node Docker image and runs it in a container on Docker.

d. Executes the Build stage (defined in the Jenkinsfile) on the Node container.
(During this time, npm downloads many dependencies necessary to run your Node.js
and React application, which will ultimately be stored in the
local node_modules directory within the Jenkins home directory).
12. The Blue Ocean interface turns green if Jenkins built your application successfully.
13. Click the X at the top-right to return to the main Blue Ocean interface.

Note: Before continuing on, you can check that Jenkins has created a Jenkinsfile for you at the
root of your forked GitHub repository (in the repository’s sole master branch).
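
If you open that Jenkinsfile, its contents should look roughly like the following Declarative Pipeline (a sketch based on the values you entered above; the formatting Blue Ocean generates may differ slightly):

pipeline {
  agent {
    docker {
      // the same values you specified in the Image and Args fields
      image 'node:6-alpine'
      args '-p 3000:3000'
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'npm install'
      }
    }
  }
}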

Add a test stage to your Pipeline


1. From the main Blue Ocean interface, click Branches at the top-right to access your
repository’s branches page, where you can access the master branch.

2. Click the master branch’s "Edit Pipeline" icon to open the Pipeline editor for this
branch.
3. In the main Pipeline editor, click the + icon to the right of the Build stage you
created above to open the new stage panel on the right.

4. In this panel, type Test in the Name your stage field and then click the Add Step button
below to open the Choose step type panel.

5. In this panel, click Shell Script near the top of the list.

6. In the resulting Test / Shell Script panel, specify ./jenkins/scripts/test.sh and then

click the top-left back arrow icon to return to the Pipeline stage editor.

7. At the lower-right of the panel, click Settings to reveal this section of the panel.

8. Click the + icon at the right of the Environment heading (for which you’ll configure an
environment directive).

9. In the Name and Value fields that appear, specify CI and true, respectively.
Note: For an explanation of this directive and its step, refer to annotations 1 and 3 of the Declarative
Pipeline in the Add a test stage… section of the Build a Node.js and React app tutorial.

10. ( Optional ) Click the top-left back arrow icon to return to the main Pipeline editor.

11. Click the Save button at the top right to begin saving your Pipeline with its new "Test"
stage.

12. In the Save Pipeline dialog box, specify the commit message in the Description field (e.g. Add
'Test' stage).

13. Leaving all other options as is, click Save & run and Jenkins proceeds to build your amended
Pipeline.

14. When the main Blue Ocean interface appears, click the top row to see Jenkins build your
Pipeline project.
Note: You’ll notice from this run that Jenkins no longer needs to download the Node Docker
image. Instead, Jenkins only needs to run a new container from the Node image downloaded
previously. Therefore, running your Pipeline this subsequent time should be much faster.
If your amended Pipeline ran successfully, the Blue Ocean interface now shows the additional
"Test" stage. You can click on the previous "Build" stage circle to access the output from that
stage.
15. Click the X at the top-right to return to the main Blue Ocean interface.
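
At this point the stages section of your Jenkinsfile should contain an additional stage roughly like the following sketch; the environment directive holds the CI variable you configured in the stage's Settings:

stages {
  stage('Build') {
    steps {
      sh 'npm install'
    }
  }
  stage('Test') {
    environment {
      // available only to the steps of this "Test" stage
      CI = 'true'
    }
    steps {
      sh './jenkins/scripts/test.sh'
    }
  }
}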

Add a final deliver stage to your Pipeline


1. From the main Blue Ocean interface, click Branches at the top-right to access your
repository’s master branch.

2. Click the master branch’s "Edit Pipeline" icon to open the Pipeline editor for this
branch.

3. In the main Pipeline editor, click the + icon to the right of the Test stage you created above to
open the new stage panel.

4. In this panel, type Deliver in the Name your stage field and then click the Add Step button
below to open the Choose step type panel.

5. In this panel, click Shell Script near the top of the list.

6. In the resulting Deliver / Shell Script panel, specify ./jenkins/scripts/deliver.sh and

then click the top-left back arrow icon to return to the Pipeline stage editor.
Note: For an explanation of this step, refer to the deliver.sh file itself located in
the jenkins/scripts directory of your forked repository on GitHub.

7. Click the Add Step button again.

8. In the Choose step type panel, type input into the Find steps by name field.
9. Click the filtered Wait for interactive input step type.

10. In the resulting Deliver / Wait for interactive input panel, specify Finished using the
web site? (Click "Proceed" to continue) in the Message field and then click the

top-left back arrow icon to return to the Pipeline stage editor.

Note: For an explanation of this step, refer to annotation 4 of the Declarative Pipeline in the Add a
final deliver stage… section of the Build a Node.js and React app tutorial.

11. Click the Add Step button (last time).

12. Click Shell Script near the top of the list.

13. In the resulting Deliver / Shell Script panel, specify ./jenkins/scripts/kill.sh.


Note: For an explanation of this step, refer to the kill.sh file itself located in
the jenkins/scripts directory of your forked repository on GitHub.

14. ( Optional ) Click the top-left back arrow icon to return to the main Pipeline editor.

15. Click the Save button at the top right to begin saving your Pipeline with its new "Deliver"
stage.

16. In the Save Pipeline dialog box, specify the commit message in the Description field (e.g. Add
'Deliver' stage).
17. Leaving all other options as is, click Save & run and Jenkins proceeds to build your amended
Pipeline.

18. When the main Blue Ocean interface appears, click the top row to see Jenkins build your
Pipeline project.
If your amended Pipeline ran successfully, the Blue Ocean interface now shows the additional
"Deliver" stage. Click on the previous "Test" and "Build" stage circles to access the outputs
from those stages.
19. Ensure you are viewing the "Deliver" stage (click it if necessary), then click the
green ./jenkins/scripts/deliver.sh step to expand its content and scroll down until
you see the http://localhost:3000 link.
20. Click the http://localhost:3000 link to view your Node.js and React application running
(in development mode) in a new web browser tab. You should see a page/site with the
title Welcome to React on it.

21. When you are finished viewing the page/site, click the Proceed button to complete the
Pipeline’s execution.
22. Click the X at the top-right to return to the main Blue Ocean interface, which lists your
previous Pipeline runs in reverse chronological order.
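
The corresponding addition to the Jenkinsfile is a stage roughly like the following sketch, which sits inside the stages section alongside the "Build" and "Test" stages:

stage('Deliver') {
  steps {
    sh './jenkins/scripts/deliver.sh'
    // pauses the run and waits until you click "Proceed" in the Blue Ocean interface
    input message: 'Finished using the web site? (Click "Proceed" to continue)'
    sh './jenkins/scripts/kill.sh'
  }
}
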
Follow up (optional)

If you check the contents of the Jenkinsfile that Blue Ocean created at the root of your
forked creating-a-pipeline-in-blue-ocean repository, notice the location of
the environment directive. This directive’s location within the "Test" stage means that the
environment variable CI (with its value of true) is only available within the scope of this "Test"
stage.

You can set this directive in Blue Ocean so that its environment variable is available globally
throughout the Pipeline (as is the case in the Build a Node.js and React app with npm tutorial). To do this:

1. From the main Blue Ocean interface, click Branches at the top-right to access your
repository’s master branch.

2. Click the master branch’s "Edit Pipeline" icon to open the Pipeline editor for this
branch.

3. In the main Pipeline editor, click the Test stage you created above to begin editing it.

4. In the stage panel on the right, click Settings to reveal this section of the panel.

5. Click the minus (-) icon at the right of the CI environment directive (you created earlier) to
delete it.

6. Click the top-left back arrow icon to return to the main Pipeline editor.

7. In the Pipeline Settings panel, click the + icon at the right of the Environment heading (for
which you’ll configure a global environment directive).

8. In the Name and Value fields that appear, specify CI and true, respectively.

9. Click the Save button at the top right to begin saving your Pipeline with its relocated
environment directive.

10. In the Save Pipeline dialog box, specify the commit message in the Description field
(e.g. Make environment directive global).

11. Leaving all other options as is, click Save & run and Jenkins proceeds to build your amended
Pipeline.

12. When the main Blue Ocean interface appears, click the top row to see Jenkins build your
Pipeline project.
You should see the same build process you saw when you completed adding the final deliver
stage (above). However, when you inspect the Jenkinsfile again, you’ll notice that
the environment directive is now a sibling of the agent section.
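
In other words, the top of the Jenkinsfile should now look roughly like this sketch (the stages themselves are unchanged, apart from the environment block being removed from the "Test" stage):

pipeline {
  agent {
    docker {
      image 'node:6-alpine'
      args '-p 3000:3000'
    }
  }
  environment {
    // defined as a sibling of agent, so it applies to every stage
    CI = 'true'
  }
  stages {
    stage('Build') {
      steps {
        sh 'npm install'
      }
    }
    // the "Test" and "Deliver" stages follow here, unchanged otherwise
  }
}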

Wrapping up

Well done! You’ve just used the Blue Ocean feature of Jenkins to build a simple Node.js and React
application with npm!

The "Build", "Test" and "Deliver" stages you created above are the basis for building other
applications in Jenkins with any technology stack, including more complex applications and ones that
combine multiple technology stacks together.
Because Jenkins is extremely extensible, it can be modified and configured to handle practically any
aspect of build orchestration and automation.

To learn more about what Jenkins can do, check out:

 The Tutorials overview page for other introductory tutorials.

 The User Handbook for more detailed information about using Jenkins, such as Pipelines (in
particular Pipeline syntax) and the Blue Ocean interface.

 The Jenkins blog for the latest events, other tutorials and updates.

CA ARA

Application-release automation
Application-release automation (ARA) refers to the process of packaging and deploying
an application or update of an application from development, across various environments, and
ultimately to production.[1] ARA solutions must combine the capabilities of deployment automation,
environment management and modeling, and release coordination.[2]

Contents
 1. Relationship with DevOps

 2. Relationship with Deployment

 3. ARA Solutions

 4. References

Relationship with DevOps


ARA tools help cultivate DevOps best practices by providing a combination of automation,
environment modeling and workflow-management capabilities. These practices help teams deliver
software rapidly, reliably and responsibly. ARA tools achieve a key DevOps goal of
implementing continuous delivery with a large quantity of releases quickly. [3]

Relationship with Deployment


ARA is more than just software-deployment automation – it deploys applications using structured
release-automation techniques that allow for an increase in visibility for the whole team.[4] It
combines workload automation and release-management tools as they relate to release packages, as
well as movement through different environments within the DevOps pipeline.[5] ARA tools help
regulate deployments, how environments are created and deployed, and how and when releases are
deployed.[6]

ARA Solutions
Gartner and Forrester have published lists of ARA tools in their ARA Magic Quadrant and Wave
reports respectively.[7][8] All ARA solutions must include capabilities in automation, environment
modeling, and release coordination. Additionally, the solution must provide this functionality without
reliance on other tools.[9]

Solution                                                         Released by
BuildMaster                                                      Inedo
CA Release Automation and Automic                                CA Technologies
DeployHub                                                        OpenMake Software
Deployment Automation (formerly Serena Deployment Automation)    Micro Focus
ElectricFlow                                                     Electric Cloud
FlexDeploy                                                       Flexagon
Hybrid Cloud Management (Ultimate Edition)                       Micro Focus
IBM UrbanCode Deploy                                             IBM
Puppet Enterprise                                                Puppet
Release Lifecycle Management                                     BMC Software
Visual Studio Release Management                                 Microsoft
XL Deploy & XL Release                                           XebiaLabs

Gitlab CI

Deploy a Spring Boot application to Cloud Foundry with GitLab CI/CD


Article written by Dylan Griffith on 2018-06-07 • Type: tutorial • Level: intermediate
Introduction
In this article, we’ll demonstrate how to deploy a Spring Boot application to Cloud Foundry (CF) with
GitLab CI/CD using the Continuous Deployment method.

All the code for this project can be found in this GitLab repo.

In case you’re interested in deploying Spring Boot applications to Kubernetes using GitLab CI/CD,
read through the blog post Continuous Delivery of a Spring Boot application with GitLab CI and
Kubernetes.

Requirements
We assume you are familiar with Java, GitLab, Cloud Foundry, and GitLab CI/CD.

To follow along with this tutorial you will need the following:

 An account on Pivotal Web Services (PWS) or any other Cloud Foundry instance

 An account on GitLab

Note: You will need to replace the api.run.pivotal.io URL in all the commands below with
the API URL of your CF instance if you’re not deploying to PWS.

Create your project


To create your Spring Boot application you can use the Spring template in GitLab when creating a
new project:

Configure the deployment to Cloud Foundry


To deploy to Cloud Foundry we need to add a manifest.yml file. This is the configuration for the CF
CLI we will use to deploy the application. We will create this in the root directory of our project with
the following content:

---
applications:
- name: gitlab-hello-world
  random-route: true
  memory: 1G
  path: target/demo-0.0.1-SNAPSHOT.jar
Configure GitLab CI/CD to deploy your application
Now we need to add the GitLab CI/CD configuration file ( .gitlab-ci.yml) to our project’s root.
This is how GitLab figures out what commands need to be run whenever code is pushed to our
repository. We will add the following .gitlab-ci.yml file to the root directory of the repository,
GitLab will detect it automatically and run the steps defined once we push our code:

image: java:8

stages:
  - build
  - deploy

build:
  stage: build
  script: ./mvnw package
  artifacts:
    paths:
      - target/demo-0.0.1-SNAPSHOT.jar

production:
  stage: deploy
  script:
    - curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
    - ./cf login -u $CF_USERNAME -p $CF_PASSWORD -a api.run.pivotal.io
    - ./cf push
  only:
    - master

We’ve used the java:8 docker image to build our application as it provides the up-to-date Java 8
JDK on Docker Hub. We’ve also added the only clause to ensure our deployments only happen when
we push to the master branch.
Now, since the steps defined in .gitlab-ci.yml require credentials to log in to CF, you’ll need to
add your CF credentials as environment variables on GitLab CI/CD. To set the environment variables,
navigate to your project’s Settings > CI/CD and expand Variables. Name the
variables CF_USERNAME and CF_PASSWORD and set them to the correct values.

Once set up, GitLab CI/CD will deploy your app to CF at every push to your repository’s default
branch. To see the build logs or watch your builds running live, navigate to CI/CD > Pipelines.

Caution: It is considered best practice for security to create a separate deploy user for your
application and add its credentials to GitLab instead of using a developer’s credentials.

To start a manual deployment in GitLab go to CI/CD > Pipelines then click on Run Pipeline. Once the
app is finished deploying it will display the URL of your application in the logs for
the production job like:

requested state: started

instances: 1/1

usage: 1G x 1 instances

urls: gitlab-hello-world-undissembling-hotchpot.cfapps.io

last uploaded: Mon Nov 6 10:02:25 UTC 2017

stack: cflinuxfs2

buildpack: client-certificate-mapper=1.2.0_RELEASE container-security-provider=1.8.0_RELEASE
java-buildpack=v4.5-offline-https://github.com/cloudfoundry/java-buildpack.git#ffeefb9 java-main
java-opts jvmkill-agent=1.10.0_RELEASE open-jdk-like-jre=1.8.0_1...

     state     since                    cpu      memory         disk           details
#0   running   2017-11-06 09:03:22 PM   120.4%   291.9M of 1G   137.6M of 1G

You can then visit your deployed application (for this example, https://gitlab-hello-world-
undissembling-hotchpot.cfapps.io/) and you should see the “Spring is here!” message.
Semaphore CI
Publishing Docker images on DockerHub
Pushing images to the official registry is straightforward. You'll need to create a secret for the login
username and password. Then, call docker login with the appropriate environment variables. The
first step is to create a secret for DOCKER_USERNAME and DOCKER_PASSWORD with the sem tool.

Creating The Secret


Open a new secret.yml file:

# secret.yml
apiVersion: v1alpha
kind: Secret
metadata:
  name: docker
data:
  env_vars:
    - name: DOCKER_USERNAME
      value: "REPLACE WITH YOUR USERNAME"
    - name: DOCKER_PASSWORD
      value: "REPLACE WITH YOUR PASSWORD"

Then create it:

sem create -f secret.yml

Now add the secret to your pipeline and authenticate.

Configuring the Pipeline


This simple example authenticates in the prologue. This is not strictly required; it's just an example
of covering all jobs in the block with authentication.

# .semaphore/pipeline.yml
version: "v1.0"
name: Build and push
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: Build and push
    task:
      # Pull in environment variables from the "docker" secret
      secrets:
        - name: docker
      prologue:
        commands:
          # Authenticate to the registry for all jobs in the block
          - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
      jobs:
        - name: Build and push
          commands:
            - checkout
            - docker-compose build
            - docker-compose push

GoCD
In my second stint at ThoughtWorks I spent a little over a year working on their Continuous Delivery
tool GoCD, now Open Source! Most of the time working with customers was more about
helping them understand and adopt Continuous Delivery rather than the tool itself (as it should be).

The most common remark I got though was “Jenkins does CD pipelines too” and my reply would
invariably be “on the surface, but they are not a first class built-in concept”, and a blank stare usually
followed.
I use Jenkins as an example since it’s the most widespread CI tool but this is really just an excuse to
talk about concepts central to Continuous Delivery regardless of tools.

It’s often hard to get the message across because it assumes people are comfortable with at least 3
concepts:

WHAT PIPELINES REALLY ARE AND WHY THEY ARE KEY TO A SUCCESSFUL CD INITIATIVE

THE POWER OF THE RIGHT ABSTRACTIONS

WHAT FIRST CLASS BUILT-IN CONCEPT MEANS AND WHY IT’S KEY

What CD and Pipelines are

I’ll assume everyone agrees with the definitions Martin posted on his site. If you haven’t seen them
yet here they are: Continuous Delivery and (Deployment) Pipelines. In particular on Deployment
Pipelines he writes (emphasis mine):

“A deployment pipeline is a way to deal with this by breaking up your build into stages […] to detect
any changes that will lead to problems in production. These can include performance, security, or
usability issues […] should enable collaboration between the various groups involved in delivering
software and provide everyone visibility about the flow of changes in the system, together with a
thorough audit trail.”

If you prefer I can categorically say what a pipeline is NOT: just a graphic doodle.

Why Pipelines are important to a successful CD initiative

Since Continuous Integration (CI) mainly focuses on development teams and much of the waste in
releasing software comes from its progress through testing and operations, CD is all about:


1. Finding and removing bottlenecks, often by breaking the sequential nature of the cycle. No
inflexible monolithic scripts, no slow sequential testing, no flat and simplistic workflows, no single
tool to rule them all

2. Relentless automation, eliminating dull work and the waste of human error, shortening feedback
loops and ensuring repeatability. When you do fail (and you will) the repeatable nature of
automated tasks allows you to easily track down the problem

3. Optimising & Visualising, making people from different parts of the organisation Collaborate on
bringing value (the software) to the users (production) as quickly and reliably as possible


Commitment to automation is not enough: scripting and automated testing are mostly localised
activities that often create local maxima with manual gatekeepers – the infamous “throwing over the
wall” – to the detriment of the end-to-end value creating process resulting in wasted time and longer
cycle-times.

GoCD Pipelines vs Jenkins Pipelines

Jenkins and GoCD Pipelines are so hard to compare because their premises are completely different:
Jenkins pipelines are somewhat simplistic and comparing the respective visualisations is in fact
misleading (Jenkins top, GoCD bottom):

If there are only two things you take away from this post, let them be these:

1. An entire row of boxes you see in the Jenkins visualisation is a pipeline as per the original
definition in the book (that Jez Humble now kind of regrets :-)); each box you see in the Jenkins
pipeline is the equivalent of a single Task in GoCD

2. In GoCD each box you see is an entire pipeline in itself that is usually chained to other pipelines
both upstream and downstream. Furthermore, each can contain multiple Stages that can contain
multiple Jobs, which in turn can contain multiple Tasks.

I hear you: “ok, cool but why is this significant?” and this is where it’s important to understand…

The power of (the right) abstractions

You might have seen this diagram already in the GoCD documentation. Although it really is a
simplification (there is a more accurate but detail-dense one), it tries to convey visually two very important
and often misunderstood/ignored characteristics of GoCD:
1. its 4 built-in powerful abstractions and their relationship: Tasks inside Jobs inside Stages inside
Pipelines

2. the fact that some are executed in parallel (depending on agent availability) while others run
sequentially:

 Multiple Pipelines run in parallel

 Multiple Stages within a Pipeline run sequentially

 Multiple Jobs within a Stage run in parallel

 Multiple Tasks within a Job run sequentially
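
A minimal sketch of how this hierarchy appears in GoCD's XML configuration, using the same elements as the Pipeline Templates example later in this document (the pipeline, stage, job and task names here are purely illustrative):

<pipeline name="example-pipeline">
  <materials>
    <svn url="http://my-svn-url/trunk" />
  </materials>
  <stage name="build">              <!-- Stages within a Pipeline run sequentially -->
    <jobs>
      <job name="compile">          <!-- Jobs within a Stage run in parallel -->
        <tasks>
          <ant target="compile" />   <!-- Tasks within a Job run sequentially -->
          <ant target="unit-test" />
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>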

Without geeking out into Barbara Liskov’s “The Power of Abstraction”-level of details we can say that
a good design is one that finds powerful yet simple abstractions, making a complex problem
tractable.


Indeed that’s what a tool that strives to support you in your CD journey should do (because the
journey is yours and a tool can only help or get in the way, either way it won’t transform you
magically overnight): make your complex and often overcomplicated path from check-in to
production tractable. At the same time “all non-trivial abstractions, to some degree, are leaky” as
Joel Spolsky so simply put it in “The Law of Leaky Abstractions” therefore the tricky balance to
achieve here is:

“to have powerful enough abstractions (the right ones) to make it possible to model your path to
production effectively and, importantly, remodel it as you learn and evolve it over time while, at the
same time, resist the temptation to continuously introduce new, unnecessary abstractions that are
only going to make things more difficult in the long run because they will be leaky”

And of course we believed (and I still do) we struck the right balance since we’ve been exploring,
practicing and evolving the practice of Continuous Delivery from before its formal definition.


This is the reason why you are supposed to model your end to end Value Stream Map connecting
multiple pipelines together in both directions – upstream and downstream – while everyone seems to
still be stuck at the definition by the book that (seems to) indicate you should have one single, fat
pipeline that covers the entire flow. To some extent this could be easily brushed off as just semantics
but it makes a real difference when it’s not about visual doodles but about real life. It may appear
overkill to have four levels of abstraction for work execution but the moment you start doing more
than single team Continuous Integration (CI), they become indispensable.

For instance, it is trivial in GoCD to set up an integration pipeline that feeds off multiple upstream
component pipelines and also feeds off an integration test repository. It is also easy to define
different triggering behaviours for Pipelines and Stages: if we had only two abstractions, say Jobs and
Stages, they’d be overloaded with different behaviour configurations for different contexts. Jobs and
Stages are primitives, they can and should be extended to achieve higher order abstractions. By
doing so, we avoid primitive obsession at an architectural level. Also note that the alternating
execution behaviour of the four abstractions (parallel, sequential, parallel, sequential) is designed
deliberately so that you have the ability to parallelise and sequentialise your work as needed at two
different levels of granularity.

What First Class Built-in Concept means

In order for Pipelines to be considered true first class built-in concepts rather than merely visual
doodles it must be possible to:

 Trigger a Pipeline as a unit

 Make one Pipeline depend on another

 Make artifacts flow through Pipelines

 Have access control at the level of a Pipeline

 Associate Pipelines to environments

 Compare changes between two instances of a Pipeline

Not all Pipelines are created equal. Let’s see why the above points are important by looking at how
they are linked to the CD best practices.

CD Best Practices (from the CD book)            First Class Built-in Concepts

Only build your binary once                     Pipeline support for: dependency and Fetch Artifact
Deploy same way to every environment            Pipeline support for: Environments, Templates, Parameters
Each Change should propagate instantly          Pipeline support for: SCM Poll, Post Commit, multi instance pipeline runs
If any part fails – stop the line               Basic Pipeline modeling & Lock Pipelines
Deploy into production                          Manual pipelines, Authorization, Stage approvals
Traceability from Binaries to Version Control   Compare pipeline + Pipeline Labels
Provide fast and useful feedback                Pipeline Visualization + VSM + Compare pipelines
Do not check-in binaries into version control   Recreate using Trigger with options
Model your release process                      Value Stream Maps
Support Component and Dependency Graph          Pipeline Dependency Graphs and Fan-in

Last but not least Pipelines as first class built-in concepts are part of the reason why we were able to
release the first ever (and at the moment still only, AFAIK) intelligent dependency management to
automatically address the Dreaded Diamond Dependency problem and avoid wasted builds,
inconsistent results, incorrect feedback, and running code with the wrong tests: in GoCD we called
it Fan-in Dependency Management. GoCD’s Fan-in material resolution ensures that a pipeline
triggers only when all its upstream pipelines have triggered off the same version of an ancestor
pipeline or material. This will be the case when you have multiple components building in separate
pipelines which all have the same ancestor and you want downstream pipelines to all use the same
version of the artifact from the ancestor pipeline.

Further reading

If you haven’t read the Continuous Delivery book you should, but chapter 5 ‘Anatomy of the
Deployment Pipeline’ is available for free, so get it now. Some time ago a concise but exhaustive 5-part series
on “How do I do CD with Go?” was published on the Studios blog and I still highly recommend it:

1. Domain model, concepts & abstractions

2. Pipelines and value streams

3. Traceability with upstream pipeline labeling

4. GoCD environments

5. The power of pipeline templates and parameters

and last but not least take a look at how a Value Stream Map visualisation helps the Mingle team day
in, day out in Tracing our path to production.

P.S.: yes, Jenkins is getting better

Managing GoCD pipelines


GoCD can be configured using the Administration tab. You can perform operations like adding/editing
Pipelines, Stages, Jobs, Tasks, Templates and Pipeline groups. You can also configure GoCD by editing
the full XML file if you wish, by clicking on the Config XML section of the Administration tab. GoCD
will check the syntax of the configuration before it saves it again.
Creating a new pipeline
To create a new pipeline, go to the Pipelines sub-tab of the Administration tab and click on
the "Create a new pipeline within this group" link as shown in the screen shot below.

Add a new material to an existing pipeline


Now that you have a pipeline, let’s add another material to it.

 Navigate to the new pipeline you created by clicking on the Edit link under the Actions
against it. You can also click on the name of the pipeline.

 Click on the Materials tab.

 You will notice an existing material. Click on the "Add new material" link.

 You will get the following message

 Edit the existing material and specify the destination directory

 Click "Save".

Blacklist
Often you do want to specify a set of files that Go should ignore when it checks for changes.
Repository changesets which contain only these files will not automatically trigger a pipeline. These
are detailed in the ignore section of the configuration reference.

 Enter the items to blacklist using ant-style syntax below

 Click "Save".
Add a new stage to an existing pipeline
Now that you have a pipeline with a single stage, let’s add more stages to it.

 Navigate to the new pipeline you created by clicking on the Edit link under the Actions
against it. You can also click on the name of the pipeline.

 Click on the Stages tab.

 You will notice that a defaultStage exists. Click on the "Add new stage" link.

 Fill stage name and trigger type.

 Fill in the details for the first job and first task belonging to this job. You can add more
jobs and add more tasks to the jobs.

 Click on help icon next to the fields to get additional details about the fields you are editing.

 Click "Save".

Add a new job to an existing stage


Now that we have a pipeline with stage(s), we can add more jobs to any of the existing stages. You
can now use the tree navigation on the left side of the screen to edit a stage or a job under a
pipeline.

 Click on the stage name that you want to edit on the tree as shown below. The
"defaultStage" is being edited.

 Click on the Jobs tab

 Click on "Add new job"

 Fill job name and job details


 Fill in the details for the initial task belonging to this job. You can edit this job later to add
more tasks

 You can choose the type of the task as required.

 For task types Ant, Nant and Rake, the build file and target will default as per the tool used.
For example, Ant task, would look for build.xml in the working directory, and use the default
task if nothing is mentioned.

 Click on help icon next to the fields to get additional details about the fields you are editing.

 Click "Save"

Add a new task to an existing Job


Now that we have a pipeline with stage(s) containing job(s) we can add tasks to any of the existing
jobs. You can now use the tree navigation on the left side of the screen to edit a job under a stage.

 Click on the job name that you want to edit on the tree as shown below. The "defaultJob" is
being edited.

 Click on "Add new task". You can choose the task type from Ant, Nant, Rake and Fetch
Artifact. Or you can choose "More..." to choose a command from command repository or
specify your own command

 Fill the basic settings for the task

 Click on help icon next to the fields to get additional details about the fields you are editing.

 Click "Save"

 The Advanced Options section allows you to specify a Task in which you can provide the actions
(typically clean up) that need to be taken when a user chooses to cancel the stage.

Clone an existing pipeline


Clone pipeline functionality helps you create a new pipeline from an existing pipeline by giving it a
new name. Typically when setting up a pipeline for a new branch, it is very useful to take an existing
pipeline and clone it.

If the user is a pipeline group admin, she can clone the new pipeline into a group that she has access
to. If the user is an admin she can clone the pipeline into any group or give a new group name, in
which case the group gets created.

 Navigate to the Admin tab

 Locate the pipeline that needs to be cloned

 In that row, click on the "Clone" icon.

 Fill in the name of the new pipeline

 Select a pipeline group. If you are an admin, you will be able to enter the name of the
pipeline group using the auto suggest or enter a new group name

 Click "Save"

Delete an existing pipeline


Deleting a pipeline removes an existing pipeline from the config.

Warning: Pipeline history is not removed from the database and artifacts are not removed from
artifact storage, which may cause conflicts if a pipeline with the same name is later re-created.

 Navigate to the Admin tab

 Locate the pipeline that needs to be deleted

 In that row, click on the "Delete" icon.

Pipeline Templates
Templating helps to create reusable workflows in order to make tasks like creating and maintaining
branches, and managing a large number of pipelines, easier.

Creating Pipeline Templates

Pipeline Templates can be managed from the Templates tab on the Administration Page.
Clicking on the "Add New Template" brings up the following form which allows you to create a fresh
template, or extract it from an existing pipeline. Once saved, the pipeline indicated will also start
using this newly created template.

A template can also be extracted from a pipeline using the "Extract Template" link. This can be found
on the "Pipelines" tab in the Administration page.

Example

As an example, assume that there is a pipeline group called "my-app" and it contains a pipeline called
"app-trunk" which builds the application from trunk. Now, if we need to create another pipeline
called "app-1.0-branch" which builds the 1.0 version of the application, we can use Pipeline Templates as
follows.

Using Administration UI
 Create a template "my-app-build" by extracting it from the pipeline "app-trunk", as shown in
the previous section.

 Create a new pipeline "app-1.0-branch" which defines SCM material with the branch url and
uses the template "my-app-build".

Using XML
Power users can configure the above as follows:

<pipelines group="my-app">

<pipeline name="app-trunk" template="my-app-build">

<materials>

<svn url="http://my-svn-url/trunk" />

</materials>

</pipeline>

<pipeline name="app-1.0-branch" template="my-app-build">

<materials>

<svn url="http://my-svn-url/branches/1.0" />

</materials>
</pipeline>

</pipelines>

<templates>

<pipeline name="my-app-build">

<stage name="build">

<jobs>

<job name="compile">

<tasks>

<ant target="compile" />

</tasks>

</job>

</jobs>

</stage>

</pipeline>

</templates>

Editing Pipeline Templates

Go Administrators can now enable any Go user to edit a template by making them a template
administrator.

Template administrators can view and edit the templates to which they have permissions, on the
template tab of the admin page. Template administrators will, however, not be able to add, delete or
change permissions for a template. They will also be able to see the number of pipelines in which the
template is being used, but not the details of those pipelines.

Viewing Pipeline Templates

Pipeline Templates can now be viewed by Administrators and Pipeline Group Administrators while
editing or creating a Pipeline.

Clicking on the icon indicated by the arrow will display the following:


The pop-up shows the extract of the template "Services-Template" configured for the pipeline
"Service_1".

1. Shows the details of the job "compile-job" configured for the stage "compile".

2. Indicates that the working directory set for the task is "go/service_1", which is followed by
the "$" symbol and then the command.

3. If any "On Cancel Task" has been configured, it will be indicated like this.

4. Shows the "Run If Condition" for this task.

See also...
 Templates - Configuration Reference

Stage approvals in action


By default, when one stage completes successfully, the next stage is automatically triggered by Go.
However sometimes you don't want the next stage to be triggered automatically. This might be the
case if you have a stage that deploys your application to a testing, staging or production
environment. Another case can be when you don't want your pipeline to be automatically triggered
by changes in version control. In these situations, you want the stage triggered by manual
intervention. This can be done through manual approvals.

If you add a manual approval to the first stage in a pipeline, it will prevent the pipeline from being
triggered from version control. Instead, it will only pick up changes when you trigger the pipeline
manually (this is sometimes known as "forcing the build").

You can control who can trigger manual approvals. See the section on Adding authorization to
approvals for more details.
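
In the XML configuration, a manual approval is expressed as an approval element at the start of the stage, roughly like this sketch (the stage and job names are illustrative only):

<stage name="deploy-to-staging">
  <approval type="manual" />
  <jobs>
    <job name="deploy">
      <tasks>
        <ant target="deploy" />
      </tasks>
    </job>
  </jobs>
</stage>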

Managing pipeline groups


There is support for collecting multiple pipelines into a single named group. See the section
on Specifying who can view and operate pipeline groups for more details.

Bibliography :
 https://ramitsurana.github.io/myblog/

 https://www.docker.com/ressources

 https://highops.com/insights/

 https://www.codementor.io/sheena/
