
Docker Fundamentals

Neil Cresswell
Founder, CloudInovasi
neil.cresswell@cloudinovasi.id
www.cloudinovasi.id
Wireless
SSID: MAJAPAHIT
Password: ballroom
Goal
You should leave today with the confidence to be able to deploy a
Docker environment, with clustered applications, data persistence, and
have a good understanding of the Docker technologies that make up a
container landscape.
ASK QUESTIONS, EXPERIMENT, THIS IS NOT A PRESENTATION ☺
Agenda
• Deploying (and updating) the Docker Engine for Linux and access via Named Pipes and TCP
  • Plus an example of Docker for Windows Server 2016+
• Managing Docker from CLI
• Deploying Stateless Containers from Docker Hub Images
• Using Docker Persistent Volumes and Deploying Stateful Containers
• Creating your own Docker Images and using a private Image Repo
• Creating a Cluster using the Docker Swarm Orchestrator
• Kubernetes Orchestrator Comparison
• Managing Docker and Docker Swarm from a UI
• Docker Swarm Networking Options
• Kubernetes Networking Comparison
• Docker Stacks and Stack Files
• Docker Services
• Managing Docker Stacks/Services
• Docker Configs and Secrets, and using them in Services/Stacks
But first, a short overview
Containers vs VMs
• Virtual Machines: hardware resources assigned (static) per VM. Inefficient and expensive.
• Docker running inside VMs: hardware resources assigned (static) per Docker Host. 8x greater efficiency.
• Docker running on bare metal: no static hardware assignment. Dramatic cost savings.
Look at this another way… with RAM

VMs:
• VM1: 4GB used, 2GB free
• VM2: 5GB used, 1GB free
• VM3: 3.5GB used, 2.5GB free
• VM4: 4GB used, 1GB free
• VM5: 6GB used, 4GB free
• Total RAM assigned to VMs: 33GB
• Total RAM actually used in the VMs: 13.5GB
• RAM wasted: 19.5GB
• If RAM is Rp.100,000 per GB per month, you are wasting Rp.1,9jt per month. Now imagine this at scale!

Containers:
• The same workloads use 4GB, 5GB, 3.5GB, 4GB and 6GB
• Total RAM actually used: 13.5GB
• RAM available for other containers: 19.5GB, or you can assign just 20GB RAM to the Docker Host
What can go into a container…
• Linux OS: pretty much ANYTHING. The only limitation is apps that need to interact with physical hardware, but even then there are ways.
• Windows OS: any application that can be installed/run headless (i.e. does not need a local UI on the server to install/manage). As long as you can install the app via CLI/PowerShell and manage it via a remote browser or remote app, all good.
The Micro-Service Landscape
Self-Service Portals
• Portainer.io UI Container Manager
• Docker UCP Container Manager
• Rancher Container Manager
• Redhat OpenShift Container Manager

OK, let's go…
The Training Environment
• 100 Virtual Servers
• CentOS 7 Minimal
• 1 CPU, 1GB RAM, 20GB HDD
• DHCP
• SSH enabled
  • uid: root
  • password: training
• Very resource constrained, please do not deploy any large containers
Installing Docker
Installing the Docker Engine
• Login to your VM via SSH as root (root/training)
• yum install -y yum-utils device-mapper-persistent-data lvm2   (install the pre-reqs)
• yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo   (add Docker as a source for yum)
• yum install -y docker-ce   (install docker-ce, defaults to the latest version)
• systemctl enable docker   (make Docker auto-start on boot)
• systemctl start docker   (start Docker now)
• systemctl status docker
Docker Engine File Locations
• /var/lib/docker
  • Base directory for all Docker elements
• /var/lib/docker/volumes
  • Directory where all volumes are created/stored
• /var/lib/docker/overlay2
  • Directory where the non-persistent container filesystems (image layers) are stored
• /var/lib/docker/image
  • Directory where all image cache files are stored (note this is not human readable)
• /var/lib/docker/swarm
  • Directory where the Swarm DB is stored; this location is performance sensitive and is recommended to reside on SSD disks for large clusters
• /etc/docker
  • Directory where the UUID for the Docker engine is stored (this must be unique across the cluster; if cloning a Docker host, you must delete this key.json and restart docker)
  • Directory where any daemon.json file must exist (if needed to customise the daemon from defaults)
• /usr/bin
  • Directory where the docker engine binaries are located
• /var/run/docker.sock
  • The docker daemon socket, which is how the Docker client connects to the Docker daemon
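As an illustration (not part of the lab), a minimal /etc/docker/daemon.json could relocate the base directory to a larger disk; the "data-root" path below is an assumed example:

{
  "data-root": "/data/docker",
  "debug": false
}

Restart the daemon (systemctl restart docker) after changing this file.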
Upgrade Docker
• Via yum
  • yum upgrade docker-ce
• Alternatively, via binaries
  • cd /tmp
  • wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.0.tgz
  • tar xzvf docker-18.09.0.tgz
  • systemctl stop docker
  • cp docker/* /usr/bin/
  • systemctl start docker
Test Docker Engine is working
• Docker is currently only
listening via “docker socket”
which means you must
interact via SSH on the
console of the host.
• Run the following commands
to check Docker is working:
• docker info
• docker version
• docker run hello-world
Docker Daemon Advanced Config
• Daemon over TCP with TLS (secure)
  Edit /etc/docker/daemon.json and add:
  {
    "debug": true,
    "tls": true,
    "tlscert": "/var/docker/server.pem",
    "tlskey": "/var/docker/serverkey.pem"
  }
  Then save and exit.
  Then edit the docker service /usr/lib/systemd/system/docker.service
  Add -H tcp://0.0.0.0:2376 after unix:// on the “ExecStart” line
  Save and exit.
  systemctl daemon-reload
  systemctl restart docker
• Daemon over TCP (not secure)
  Edit the docker service /usr/lib/systemd/system/docker.service
  Add -H tcp://0.0.0.0:2375 after unix:// on the “ExecStart” line
  Save and exit.
  systemctl daemon-reload
  systemctl restart docker
• Insecure Registry (locally hosted docker registry)
  Edit /etc/docker/daemon.json and add:
  {
    "insecure-registries": ["registry.mine.com:5000"]
  }
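Once the daemon listens on TCP, a remote client can target it with the -H flag; a quick sketch (host IP and certificate file names are examples, and depending on how the certificates were issued you may also need --tlscacert or --tlsverify):

docker -H tcp://192.168.1.71:2375 info
docker --tls --tlscert client.pem --tlskey clientkey.pem -H tcp://192.168.1.71:2376 info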
Open Docker required Ports with FirewallD
• firewall-cmd --add-port=2375/tcp --permanent   (Daemon over TCP)
• firewall-cmd --add-port=2376/tcp --permanent   (Daemon over TCP with TLS)
• firewall-cmd --reload
Or with IPTables
• iptables -A INPUT -p tcp --dport 2375 -j ACCEPT
• iptables -A INPUT -p tcp --dport 2376 -j ACCEPT
• /sbin/service iptables save
Common Docker CLI commands
Note: docker commands are CaSe sensitive

Command                          Function
docker info                      Show information on the docker engine
docker version                   Show the version of docker running
docker run                       Start / deploy new containers
docker ps (and ps -a)            Show running (or stopped) containers
docker stats <containerid>       Show performance info for a container
docker logs <containerid>        Show logs for a container
docker rename                    Rename a deployed container (because we always forget to name our containers)
docker start, docker stop        Stop and start an existing container
docker kill                      Hard stop (kill) a running container
docker rm (and rmi)              Delete a container or image
docker network                   Work with docker networks
docker volume                    Work with docker volumes
docker exec / docker attach      Interact with a running container
docker system prune              Delete all “unused” containers/volumes/images
Common docker run options
Command                                            Function
--name                                             Give a friendly name to the container (note double dashes)
-d                                                 Run the container in the background (detached)
-i -t                                              Start the container interactive, and with tty support
--restart=always/on-failure/unless-stopped/no      Restart the container automatically when the condition is met (note double dashes; this is not a cluster restart policy, only on a single host)
-v                                                 Map a volume to a container
-p ip:port:port / -P                               Publish ports externally (hostport:containerport)
                                                   • -P means publish all ports defined in the dockerfile
                                                   • Using -p :80 means randomly assign a host port
                                                   • Using -p 192.168.1.20:80:80 means expose as port 80 on the host's IP address 192.168.1.20 (if the host had more than 1 IP)
-e                                                 Pass an environment variable into the container
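Putting several of these options together, a hedged example (the name, port, variable and volume below are illustrative, not part of the lab):

docker run -d --name myweb --restart=unless-stopped -p 8080:80 -e TZ=Asia/Jakarta -v webdata:/usr/share/nginx/html nginx:latest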
Deploying and managing stateless containers via CLI

Definition
Stateless Container: the container does not retain any personalisation/configuration or content beyond its temporary life.
Deploy a simple “stateless” container
• docker run -d -p 80:80 nginx
  • -d: run “detached”
  • -p 80:80: map port 80 on the host (left) to port 80 in the container (right)
  • nginx: the name of the image
Checking container logs
• docker ps (see the container id)
• docker logs <containerid>
Docker Attach
• Get your container id again
• docker attach <containerid>
• Note you just get a flashing cursor: you are inside the container, however there is no shell.
• Press Ctrl+C to exit
• Now do a docker ps
• What happened to your container? It got killed, as you sent the kill command whilst attached to the container
• Do docker ps -a to see stopped containers
• Do a docker start <containerid> to restart it
Docker Exec
• docker run -d -it alpine   (-it allows interaction)
• docker ps to get the container id
• docker exec -it <containerid> "/bin/sh"
• Type “exit” and then press enter
• docker ps… note the container is still running
Showing temporary data persistence
• Create a container
• Manipulate its file contents
• Stop it, start it
• File contents remain
• Now, delete and recreate the container… file contents gone
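One possible way to run this exercise from the CLI (the container name and file path are just examples):

docker run -d -it --name statetest alpine
docker exec statetest sh -c 'echo hello > /tmp/test.txt'
docker stop statetest && docker start statetest
docker exec statetest cat /tmp/test.txt      # file is still there
docker rm -f statetest
docker run -d -it --name statetest alpine
docker exec statetest cat /tmp/test.txt      # file is gone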
Storage and Stateful containers

Container Persistent Data Storage
• Containers are only ever READ ONLY; for data persistence, a Persistent Volume is needed
• Persistent Volumes are separate named reusable entities with data persistence
• A volume presents as a directory inside the container
• Unless a directory is declared as a persistent volume, it is deemed non-persistent (so be sure you know where your app writes its data)
[Diagram: container layers (Base Image: /, Lib: /opt, My App: /app) with a Docker Persistent Volume mounted at DB: /data through a volume plug-in on the Docker Engine / Host OS]
Storage Mount Options
• Bind Mounts
  • “Punch a hole from the host filesystem to the container”
  • Considered insecure
  • Locked to a single host, so no good for Swarm Cluster deployments
• Local Driver
  • Create a named volume redirect (a path inside a container is redirected to an external mount point), akin to mount or subst
  • Supports NFS for cluster-wide persistent storage
• External Drivers (vendor supplied)
  • Support Block Storage, Object Storage, NAS Storage

Note that whilst persistent volumes can be assigned to more than one container concurrently, you need to manage data change collisions/conflicts as there is no locking mechanism in Docker.
External Storage Drivers
• REX-Ray
  • AWS EBS, EFS, S3
  • Dell EMC ScaleIO, Isilon, ECS
  • Azure Blob
  • OpenStack Cinder
  • Redhat Ceph
  • VirtualBox
• VMware vSphere
  • VMDK on VMFS or NFS
  • VSAN
• Azure (built by Docker for MS)
  • CloudStor for Azure Files
• Pure Storage
  • iSCSI
• Nimble Storage
  • iSCSI
• NetApp Storage
  • iSCSI, NFS
• HP Storage
  • 3Par
• S3FS
  • S3 Storage
Persistence with a bind mount
• docker run -d -p 80:80 -p 443:443 -v /html:/usr/share/nginx/html --name mynginx nginx:latest
• Open a browser to the nginx container, see no HTML content
• Go into the /html directory on your host
• Create a simple index.html (e.g. containing “this is a test”)
• Go back to your browser and refresh; see it now displays
• Delete the container, and then recreate it; note that the “this is a test” page remains
Persistence with a named volume
• docker volume create mycriticaldata
• docker run -d -it -v mycriticaldata:/mydata alpine
• docker exec -it <containerid> /bin/sh
• cd /mydata
• touch testfile.txt
• exit
• Delete the container, and recreate it using the same command as above
• Exec in, go into /mydata and see your files are still there
• Optional: go into /var/lib/docker/volumes/mycriticaldata on your host and look around
Persistence with an NFS volume
• yum install nfs-utils
• docker volume create --driver local --opt type=nfs --opt o=addr=x.x.x.x,rw --opt device=:/export/container --name nfsvol
• Create a container using that volume, console into the container and create a file in that path
• Delete the container and recreate it, see the files remain
• Run the mount command to see NFS is mounted
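To verify, a sketch of the steps above (assuming the nfsvol volume was created successfully and the NFS export allows the mount; container and file names are examples):

docker run -d -it --name nfstest -v nfsvol:/mydata alpine
docker exec nfstest sh -c 'touch /mydata/hello.txt && mount | grep nfs'
docker rm -f nfstest
docker run -d -it --name nfstest -v nfsvol:/mydata alpine
docker exec nfstest ls /mydata      # hello.txt should still be there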
Images
What is a Docker image
• TAR (archive) file containing all the binary and configuration
files needed to run a container
• Made up of image layers for auditability
• Hosted in an image repository
• Are built of layers, with each layer containing one change
Image Layers
• Each dockerfile instruction generates a new image layer
  • FROM busybox:latest        → layer 8c2e06607696
  • MAINTAINER Neil Cresswell  → layer 5bd9073989ff
  • COPY myfiles.tar           → layer 0437ee5cf42c
  • CMD ["/bin/sh"]            → layer 350e4f999b25
Example image layers
• A single image (mysql), with twelve layers (note zero-byte changes do not generate a filesystem layer).
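You can inspect the layers of any local image yourself, for example (assuming the mysql image has been pulled):

docker image history mysql:latest      # lists each layer, the instruction that created it, and its size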
Image Linking
Generic chain:
• IMAGE 3, FROM IMAGE 2 (+ changes)
• IMAGE 2, FROM IMAGE 1 (+ changes)
• IMAGE 1, FROM SCRATCH (+ changes)
• SCRATCH
Example:
• MYIMAGE, from APACHE (adds my HTML files)
• APACHE, from CENTOS (adds the Apache binaries)
• CENTOS, from SCRATCH (adds the CentOS binaries)
• SCRATCH
Single Instance Storage
• Reuses image components that are in common, meaning they are only stored on disk once.
• This is why it's important to get image linking correct, and not to build images from scratch.
[Diagram: MYAPP1/MYAPP2/MYAPP3 built on APACHE and NGINX, MYDB1/MYDB2/MYDB3 built on MYSQL, all sharing the CENTOS base image built FROM SCRATCH]
Building Docker images
• On your host, create a directory called “/myimage”
• In that directory, create a simple html file called index.html
• Create a file called Dockerfile and populate it with the below:
  • FROM nginx
  • COPY index.html /usr/share/nginx/html/index.html
• Go back to the root directory and type the command:
  • docker build -t myimage /myimage
  (-t means TAG, or image name)
Deploy your image
• docker run -d -p 80:80 myimage
• Open your browser and go to the IP of your docker host; see your embedded HTML displayed
• Go back into your myimage directory, change the html, then rebuild the image, this time using:
  • docker build -t myimage:new /myimage
• Now deploy the new version as a container using:
  • docker run -d -p 81:80 myimage:new
• The tag is used as a version indicator; no tag defaults to a version of “latest”
Dockerfile commands
Command                                      Function
FROM                                         The base image to build your image off (mandatory)
RUN                                          Run this command in the container being built
COPY (supports COPY --chown=user:group)      Copy this/these files from the host into the container
ADD (also supports ADD --chown=user:group)   Add this/these files into the container (ADD supports URLs)
CMD                                          The command the container should run on starting (if no CMD is given, the container will simply start and then stop, unless -it is specified or a command is passed to the container at deployment)
MAINTAINER                                   Clear text to list the creator of the image
EXPOSE                                       Declare which ports to automatically expose when the -P flag is used on container creation
VOLUME                                       Declare a mountpoint inside the container that should be created as a persistent volume during container creation
USER                                         Sets the user that the container runs as (default is root)
More complex images
• On your host, create a directory called “/mysecondimage”
• In that directory create a Dockerfile containing:
  • FROM ubuntu:16.04
  • RUN apt-get update && apt-get install -y openssh-server
  • RUN mkdir /var/run/sshd
  • RUN echo 'root:training' | chpasswd
  • RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
  • # SSH login fix. Otherwise user is kicked off after login
  • RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
  • ENV NOTVISIBLE "in users profile"
  • RUN echo "export VISIBLE=now" >> /etc/profile
  • EXPOSE 22
  • CMD ["/usr/sbin/sshd", "-D"]
• Go back to the root directory, and run: docker build -t mysshcontainer /mysecondimage
• Now run your image: docker run -d -P mysshcontainer
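Because -P publishes the EXPOSEd port on a random host port, you can look up the mapping and then SSH in; a quick sketch (the container id and the resulting port will differ):

docker ps                               # note the random host port mapped to 22/tcp
docker port <containerid> 22            # shows e.g. 0.0.0.0:32768
ssh root@<docker-host-ip> -p 32768      # password: training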
Local Repository
• First up, make the daemon.json changes to accommodate an insecure registry.
  Edit /etc/docker/daemon.json and add:
  {
    "insecure-registries": ["localhost:5000"]
  }
• Restart the docker daemon (systemctl restart docker)
• Then deploy the docker registry container:
  • docker run -d -p 5000:5000 registry
Tag an image with a repository ID and push to the repo
• docker image ls, get the name of your image
• Type (one line):
  • docker image tag myimage localhost:5000/myimage
• Then push that image:
  • docker push localhost:5000/myimage
• Now, if you want to deploy from that image:
  • docker run -d -p 8080:80 localhost:5000/myimage
Cluster Orchestration
• Docker Swarm
  • Default orchestrator, built into the Docker engine, enabled through a single command
  • Opinionated for simplicity of use and management (which means fewer configuration options)
  • Stacks, Services, Containers, Secrets, Configs, manual scaling, Persistent Volumes, Overlay Networking, Ingress Routing, Service Discovery, Windows and Linux mixed clusters, x86 and other mixed clusters
• Kubernetes
  • Runs on top of the Docker Engine
  • Can either deploy “vanilla” Kubernetes, or bundled solutions such as OpenShift, Docker EE, Rancher, IBM Cloud Private
  • Significantly higher level of complexity than Swarm, designed for deploying thousands of containers; ultimate control of every element in the environment
  • Namespaces, Deployments, Services, Pods, auto-scaling, Configurations, Secrets, Reverse Proxy Ingress (NGINX), Overlay Networking, Persistent Volumes/Claims, currently Linux only (Windows due in v1.13)
Docker Swarm
Open Swarm required Ports with FirewallD
Before we can create a cluster, we must allow all the ports needed via the firewall
• firewall-cmd --add-port=2377/tcp --permanent   (Swarm management port)
• firewall-cmd --add-port=7946/tcp --permanent   (Swarm cluster comms)
• firewall-cmd --add-port=7946/udp --permanent   (Swarm cluster comms)
• firewall-cmd --add-port=4789/udp --permanent   (VXLAN port)
• firewall-cmd --reload
Or with IPTables
• iptables -A INPUT -p tcp --dport 2377 -j ACCEPT
• iptables -A INPUT -p tcp --dport 7946 -j ACCEPT
• iptables -A INPUT -p udp --dport 7946 -j ACCEPT
• iptables -A INPUT -p udp --dport 4789 -j ACCEPT
• /sbin/service iptables save
Create a single node cluster (please pair up now)
• On one of your VMs, type the following:
  • docker swarm init
• This is all that's needed to turn a standalone node into a single node cluster!
• Type “docker swarm join-token manager” and copy down the result
Check the ports…
• Ensure that the ports are open on the master and ready for the secondary node to join:
  • TCP:2377, TCP:7946
  • UDP:7946, UDP:4789
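One way to verify this from the shell (output will vary per host, and 4789/udp is handled in the kernel so it will not show as a listening socket):

firewall-cmd --list-ports                  # should include 2377/tcp, 7946/tcp, 7946/udp, 4789/udp
ss -tulpn | grep -E '2377|7946'            # confirms dockerd is listening on the swarm ports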
Create a second node in the cluster
• On the OTHER node in your cluster, type the following
(modified to suit your “join token”):
• docker swarm join --token SWMTKN-1-
5ei4wh5tthtow6el6gdujqub8bmkigglfksrdcd9rtp5ulhk8r-
cxn5a8p2ygz1ih8z2xc424wuo 192.168.1.71:2377
• This adds the node as a secondary manager to the cluster
Swarm Status
• You can run “docker node ls” to see a list of all nodes in the cluster
New CLI commands available in swarm
Command          Function
docker swarm     Commands related to swarm management (join, leave, lock)
docker node      Commands related to swarm node management (demote, promote, delete, list, update)
docker service   Deploy and manage Swarm Services (clustered container deployments)
docker stack     Deploy and manage Swarm Stacks
docker secret    Centralised management of container privileged information
docker config    Centralised container configuration management
Deploy a container as a clustered service
• Create a single container service:
  • docker service create --replicas 1 --name nginx --publish 80:80 nginx
  • docker service ls
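To see which node the task landed on and the service details, these standard commands can be used:

docker service ps nginx                 # lists the service's tasks and the nodes they run on
docker service inspect --pretty nginx   # shows the service configuration in readable form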
Docker Management UIs

Portainer UI
• Open Source offering
• Portainer support is $120-$240 per Docker Host per year (minimum 10 hosts)
• Optionally, add Docker EE Basic (support for the Docker Engine) for $900-1800 per Host per year
• Supports Docker Engine, Docker Swarm, and Azure ACI
• Supports Windows and Linux hosts
• Limited Role Based Access Control (Admin/User)
• Support for Teams
• 100% UI driven
• Includes Portainer-specific functions that deliver RBAC, App Templates, Registry Management, and Multi-Cluster management
Docker Enterprise Edition
• Docker commercial offering
• USD$1800-$4200 per Docker Host per month (production minimum 5 hosts, but Docker recommended minimum is 7)
• Supports Docker Swarm and Kubernetes orchestrators
• Supports Windows and Linux hosts
• UI based deployments for Swarm
• YAML based deployments for Kubernetes
• Full Role Based Access Control, including Groups
• Includes HTTP Routing Mesh (L7 load balancer)
• Includes Docker Trusted Registry (DTR)
  • Image Security Scanning
  • Content Trust and Verification (image signing)
  • Image Replication
Sizing: 3 Hosts = 1 UCP Manager, 1 DTR Node, 1 Worker (not recommended)
        5 Hosts = 3 UCP Managers, 1 DTR Node, 1 Worker
        7 Hosts = 3 UCP Managers, 3 DTR Nodes, 1 Worker
Managing the Swarm Cluster from a UI
• We will now switch to using Portainer as the management UI for all further swarm exercises…
• On the docker host, run the following commands:
  • docker volume create portainer
  • docker run -d -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer:/data portainer/portainer
• Go to your browser, enter the IP of your host:9000
• Set a username/password combo
• Select the “local” endpoint type
Networking
• Overlay Networking
  • VXLAN based (UDP)
  • Swarm Managers are the VXLAN controllers
  • Only available with Swarm
  • Includes DDNS for Service Discovery
  • Recommended to enable Jumbo Frames between hosts (1526 packet size)
  • Requires UDP ports open between hosts
  • Containers have a private IP address on the overlay network, and are exposed externally via ingress routing
  • No firewalling between containers on the same overlay network (they are on the same virtual subnet)
• Bridge Networking (aka NAT)
  • 1:1 association between Host IP and Container
  • Container has a private IP address, only accessible via the host's IP/NAT
  • Not Swarm aware
  • Must manage port conflicts
• MACVLAN
  • 1:1 association between Host NIC card and Container
  • Container has a real IP address on the network with a real MAC address
  • All ports that the container requests are exposed (no port publishing)
  • Firewalling within the container is recommended
Swarm Network Fundamentals
Swarm Networking Deeper
Kubernetes Networking Fundamentals
Kubernetes Networking Deeper
Overlay Network
• Using Portainer, let's create a new overlay network on your cluster
• Click on Networks, then click Add
• Give the new network a name, and then change the driver to “overlay”
• Enter subnet details for this INTERNAL network
• Click Create
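For reference, the CLI equivalent would look roughly like this (the network name and subnet are examples only):

docker network create --driver overlay --subnet 10.10.10.0/24 mynet
docker network ls      # the new overlay network is listed with scope "swarm"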
Deploy service on new overlay
• Click on Services and create a new service using busybox as the image, enter “ping google.com” as the command, click on Network and attach to your new network, then click Create
• Click on Containers, find your busybox container task, click on it, open a console and set the command to /bin/sh
• Type “ip address” and see you have an IP from the pool
• Now create a second container, and see they can ping each other using that private network
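Again, a rough CLI equivalent (the service names are assumed; service discovery should resolve the other service by name on the shared overlay network):

docker service create --name ping1 --network mynet busybox ping google.com
docker service create --name ping2 --network mynet busybox ping google.com
docker ps                                   # find the local task container for ping1
docker exec -it <containerid> ip address    # shows the address from the overlay subnet
docker exec -it <containerid> ping ping2    # reaches the other service across the overlay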
MACVLAN network
• Using Portainer, let's create a new macvlan network on your cluster
• Click on Networks, then click Add
• Give the new network a temp name, and then change the driver to “macvlan”
• Enter the device id for your linux host NIC card (ens160)
• Enter subnet details for the REAL network
• Click Create
• Create a new network, give it a real name, and then change the driver to “macvlan”
• You can now see “create” is available; select this, and then select your temp config from the dropdown list
• Click Create
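The CLI equivalent uses a config-only network on each node plus a swarm-scoped network that references it; the subnet, gateway and names below are examples and must match your real network:

docker network create --config-only --subnet 192.168.1.0/24 --gateway 192.168.1.1 -o parent=ens160 macvlan-config   # run on each node
docker network create -d macvlan --scope swarm --config-from macvlan-config macvlan-net                             # run on a manager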
Deploy service on new macvlan
• Click on Services and create a new service using busybox as the image, enter “ping google.com” as the command, click on Network and attach to your new macvlan network, then click Create
• Click on Containers, find your busybox container task, click on it, open a console and set the command to /bin/sh
• Type “ip address” and see you have an IP from the real network
Load Balancing
• Swarm Ingress Router
  • Layer 4 routing engine
  • Single instance per Swarm cluster
  • Must manage port conflicts, as all swarm services from all overlay networks share the same external IP address scope (the host IPs)
  • Load sharing via simple round robin; no awareness of real load
  • No healthchecks: if the container is up, traffic is routed to it
• HTTP Routing Mesh (Docker EE)
  • Licensed feature (enabled with UCP)
  • Layer 7 routing engine
  • Managed from the UCP UI
  • HTTP/HTTPS reverse proxy
  • Leverages the Ingress Router
• HTTP Routing Mesh (Kubernetes)
  • NGINX Ingress Controller (external)
  • Full NGINX capabilities
  • Deployed and managed via YAML files
Deploy a container as a clustered service with ingress load balancer
• Create a single container service:
  • docker service create --replicas 1 --name dockerdemo --publish 8080:8080 ehazlett/docker-demo
  • docker service ls
• Open a browser and go to your host IP on port 8080
• Scale to 4 replicas:
  • docker service scale dockerdemo=4
• Note that when scaling, swarm will distribute containers across all nodes in the cluster and load balance across them
Ingress limitations
• Try to deploy another cluster service using port 8080…
• What happens?
• Why?
Docker Services
• Docker Services are how you deploy containers into a Swarm enabled cluster
• Services define how the container is to be deployed, how it is to be monitored (and restarted), how it is to be upgraded, how it is to be scaled, and how it is to be load balanced
• Services can be used to deploy either a single container, or a load balanced cluster of containers
How to Deploy a Service
• The docker service create command has many mandatory variables, and most use the -- notation
• Almost always deployed using Docker Stacks / stack files (swarm aware compose files)
• Services support a scheduling mode: either a statically defined number of load balanced replicas, or global mode (one instance on every node in the cluster)
• Example:
  • docker network create --driver overlay mynet
  • docker service create --name mysql --replicas 3 -p 3306:3306 --network mynet --env MYSQL_ROOT_PASSWORD=mypassword --env DISCOVERY_SERVICE=10.0.0.2:2379 --env BACKUP_PASSWORD=mypassword --env CLUSTER_NAME=mycluster mysql
Or with Portainer
Docker Stacks
• Docker provides a simple means to deploy a number of swarm services as a single entity… known as a stack.
• A stack is a descriptor of HOW to deploy the application landscape
• A stack file (below) is the descriptor in YAML format.
• In this case, the stack file is declaring to deploy a single service (called blue); that service will deploy containers from the image ehazlett/docker-demo:latest, and will deploy a single replica. The service will be restarted if it fails for any reason. The service will be exposed on the ingress network on port 80. When updating the service, update one container at a time, and wait 10 seconds between updates.

version: "3.3"
services:
  blue:
    image: ehazlett/docker-demo:latest
    deploy:
      replicas: 1
      restart_policy:
        condition: any
      update_config:
        parallelism: 1
        delay: 10s
    environment:
      - Title=Blue
    ports:
      - 80:8080
Docker Stacks
• Here is a stack that deploys two services, blue and green.
• Blue will deploy 5 replicas and listen on port 80
• Green will deploy 2 replicas and listen on port 81

version: "3.3"
services:
  blue:
    image: ehazlett/docker-demo:latest
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      update_config:
        parallelism: 1
        delay: 10s
    environment:
      - Title=Blue
    ports:
      - 80:8080
  green:
    image: ehazlett/docker-demo:latest
    deploy:
      replicas: 2
      restart_policy:
        condition: any
      update_config:
        parallelism: 1
        delay: 10s
    environment:
      - Title=Green
    ports:
      - 81:8080
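To deploy a stack file like the one above from the CLI, the standard commands are (the file name mystack.yml and stack name demo are assumptions):

docker stack deploy -c mystack.yml demo     # creates/updates the services defined in the file
docker stack services demo                  # lists the services in the stack and their replica counts
docker stack rm demo                        # removes the whole stack again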
Docker Configs and Secrets
• Configs: centralised configuration files, stored within the Swarm DB
• Secrets: centralised content, encrypted (one way), and stored within the Swarm DB
• Once created, they cannot be changed (must be replaced)
• Are presented into a container (deployed as a swarm service) as a file in the container filesystem, not as an environment variable
• Common use cases are SSL certificates, server configuration files, and configuration files that must include plain text passwords
Using Configs
• Using Portainer…
• Click on Configs, then click New Config
• Give the config a name, and then type a configuration in the box (the format depends on what you are using it for; a config is free text)
• Click on Services and create a new service using busybox as the image, enter “ping google.com” as the command, click on Configs, select your config and assign a file path in the container, then click Create
• Open a console into your container, and cat the /myconfig.cfg file
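For reference, a rough CLI equivalent (the config name, content and target path are examples):

echo "setting=value" | docker config create myconfig -
docker service create --name cfgtest --config source=myconfig,target=/myconfig.cfg busybox ping google.com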
Note: if you want a single line config/secret value (e.g. a password) as a variable, you will need to run “read variable < configfile” as part of the container startup script, and then call $variable

Using Secrets

• Using Portainer…
• Click on Secrets, then click New Secret
• Give the secret a name, and then type the secret details in the box (the format depends on what you are using it for; a secret is free text)
• Ensure “encode secret” is enabled
• Click on Services and create a new service using busybox as the image, enter “ping google.com” as the command, click on Secrets, select your secret, and either leave the target location at the default (/run/secrets/$secret_name) or assign a file path in the container, then click Create
• Open a console into your container, and cat the /myverysecretcert file
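A rough CLI equivalent (the secret name and value are examples; by default the secret appears under /run/secrets/ inside the task container):

echo "SuperSecretValue" | docker secret create mysecret -
docker service create --name secrettest --secret mysecret busybox ping google.com
# inside the task container: cat /run/secrets/mysecret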
TERIMA KASIH (Thank you)
Useful troubleshooting tool: ctop
• docker run -it -v /var/run/docker.sock:/var/run/docker.sock quay.io/vektorlab/ctop:latest
• Mounting the docker socket is a special kind of volume that allows control of Docker
Kubernetes Ingress
[Diagram: numbered request flow for http://www.testorg.com through Kubernetes Ingress]