
Welcome to Docker Administration and Operations Training
Introduction to Containers

Containers have a long history in computing.

Unlike hypervisor virtualization, where one or more independent machines run virtually on physical hardware via an intermediation layer, containers instead run user space on top of an operating system's kernel.

As a result, container virtualization is often called operating system-level virtualization.

Container technology allows multiple isolated user space instances to be run on a single host.

As a result of their status as guests of the operating system, containers are sometimes seen as less flexible: they can generally only run the same or a similar guest operating system as the underlying host.

For example, you can run Red Hat Enterprise Linux on an Ubuntu server, but you can't run Microsoft Windows on top of an Ubuntu server.

Containers have also been seen as less secure than the full isolation of hypervisor virtualization.

Countering this argument is that lightweight containers lack the larger attack surface of the full operating system needed by a virtual machine, combined with the potential exposures of the hypervisor layer itself.

Despite these limitations, containers have been deployed in a variety of use cases. They are popular for hyperscale deployments of multi-tenant services, for lightweight sandboxing, and, despite concerns about their security, as process isolation environments.

Indeed, one of the more common examples of a container is a chroot jail, which creates an isolated directory environment for running processes. Attackers who breach the running process in the jail find themselves trapped in this environment and unable to further compromise the host.
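A chroot jail can be sketched in a few commands. This is a minimal, hand-rolled illustration, not Docker tooling; the jail path is an arbitrary example, it assumes a statically linked busybox binary, and it requires root privileges:

```shell
# Build a tiny root filesystem for the jail
mkdir -p /srv/jail/bin
cp /bin/busybox /srv/jail/bin/
ln -s busybox /srv/jail/bin/sh

# Enter the jail: "/" now refers to /srv/jail,
# hiding the rest of the host filesystem from the process
chroot /srv/jail /bin/sh
```

A process started this way can only see files placed inside /srv/jail, which is the isolation property described above.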

More recent container technologies include OpenVZ, Solaris Zones, and Linux Containers (LXC). Using these more recent technologies, containers can now look like full-blown hosts in their own right rather than just execution environments.

In Docker's case, modern Linux kernel features such as control groups and namespaces mean that containers can have strong isolation, their own network and storage stacks, and resource management capabilities that allow friendly co-existence of multiple containers on a host.
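As a sketch of those cgroup-backed resource management capabilities, limits can be set with flags of docker run. The limit values below are arbitrary examples, and --cpus assumes Docker 1.13 or later:

```shell
# Cap the container at 256 MB of RAM and half a CPU core
docker run -d --name limited --memory 256m --cpus 0.5 ubuntu \
    /bin/sh -c "while true; do sleep 1; done"

# Confirm the memory limit the engine recorded (in bytes)
docker inspect --format '{{ .HostConfig.Memory }}' limited
```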

Containers are generally considered a lean technology because they require limited overhead. Unlike traditional virtualization or paravirtualization technologies, they do not require an emulation layer or a hypervisor layer to run and instead use the operating system's normal system call interface. This reduces the overhead required to run containers and can allow a greater density of containers to run on a host.
Introducing Docker

Docker is an open-source engine that automates the deployment of applications into containers.

It was written by the team at Docker, Inc. (formerly dotCloud Inc., an early player in the Platform-as-a-Service (PaaS) market) and released by them under the Apache 2.0 license.

Docker adds an application deployment engine on top of a virtualized container execution environment. It is designed to provide a lightweight and fast environment in which to run your code, as well as an efficient workflow to get that code from your laptop to your test environment and then into production.

Docker is incredibly simple. Indeed, you can get started with Docker on a minimal host running nothing but a compatible Linux kernel and a Docker binary.
Docker components
Docker

Docker borrows the concept of the standard shipping container, used to transport goods globally, as a model for its containers. But instead of shipping goods, Docker containers ship software.
Installing Docker

Installing Docker is quick and easy.

Docker is currently supported on a wide variety of Linux platforms, including shipping as part of Ubuntu and Red Hat Enterprise Linux (RHEL). Also supported are various derivative and related distributions like Debian, CentOS, Fedora, Oracle Linux, and many others. Using a virtual environment, you can install and run Docker on OS X and Microsoft Windows.
Installing Docker

Requirements:

- Be running a 64-bit architecture
- Be running a Linux 3.8 or later kernel
- The kernel must support an appropriate storage driver, for example:
  - Device Mapper
  - AUFS
  - VFS
  - BTRFS
- The kernel must support the cgroups and namespaces features
Installing Docker

Checking for prerequisites:

$ uname -a
$ ls -l /sys/class/misc/device-mapper
$ sudo grep device-mapper /proc/devices

Installing Docker CE on CentOS:

# wget https://download.docker.com/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# yum install docker-ce -y
# systemctl start docker
# systemctl enable docker
# systemctl status docker
Installing Docker

After Docker installation:

# docker version
# docker info
# docker images
# docker ps
Getting Started with Docker

Building our first container

Now let's try to launch our first container with Docker. We're going to use the docker run command to create a container. The docker run command provides all of the "launch" capabilities for Docker. We'll be using it a lot to create new containers.

# docker run hello-world
# docker run -i -t ubuntu /bin/bash

Container naming

Docker will automatically generate a name at random for each container we create. If we want to specify a particular container name in place of the automatically generated name, we can do so using the --name flag.

# docker run --name bob_the_container -i -t ubuntu /bin/bash

Starting a stopped container

# docker ps
# docker ps -a
# docker start bob_the_container
# docker start aa3f365f0f4e

Attaching to a container

# docker attach bob_the_container
# docker attach aa3f365f0f4e

Creating daemonized containers

# docker run --name daemon_dave -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

Seeing what's happening inside our container

# docker logs daemon_dave
# docker logs -f daemon_dave

Inspecting the container's processes

# docker top daemon_dave

Stopping a daemonized container

# docker stop daemon_dave
# docker stop c2c4e57c12c4

Finding out more about our container

# docker inspect daemon_dave

The docker inspect command will interrogate our container and return its configuration information, including names, commands, networking configuration, and a wide variety of other useful data. We can also selectively query the inspect results hash using the -f or --format flag.

# docker inspect --format='{{ .State.Running }}' daemon_dave
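The same --format mechanism can pull out other fields of the inspect output; for instance:

```shell
# Show the container's IP address on the default bridge network
docker inspect --format '{{ .NetworkSettings.IPAddress }}' daemon_dave

# Show the command and arguments the container was started with
docker inspect --format '{{ .Path }} {{ .Args }}' daemon_dave
```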

Deleting a container

# docker rm 80430f8d0921
# docker rm `docker ps -a -q`
Working with Docker images and repositories

What is a Docker image?

A Docker image is made up of filesystems layered over each other. At the base is a boot filesystem, bootfs, which resembles the typical Linux/Unix boot filesystem. A Docker user will probably never interact with the boot filesystem. Indeed, when a container has booted, it is moved into memory, and the boot filesystem is unmounted to free up the RAM used by the initrd disk image.
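The layers that make up an image can be listed with docker history, which shows them from newest to oldest along with the step that created each one (the exact output varies by image version):

```shell
# Pull an image and list the layers it is built from
docker pull ubuntu
docker history ubuntu
```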

# docker images
# docker pull fedora
# docker images fedora
# docker search puppet
# docker pull jamtur01/puppetmaster
Working with Docker images and repositories

Building our own images

There are two ways to create a Docker image:

1. Via the docker commit command
2. Via the docker build command with a Dockerfile

The docker commit method is not currently recommended, as building with a Dockerfile is far more flexible and powerful.
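A minimal sketch of both approaches; the container and image names below are illustrative placeholders:

```shell
# Approach 1: modify a container interactively, then commit it as an image
docker run -i -t --name web_base ubuntu /bin/bash
# ... make changes inside the container, then exit ...
docker commit web_base youruser/web_base

# Approach 2: build from a Dockerfile
# (run in a directory containing a Dockerfile)
docker build -t youruser/web_base .
```

The commit approach captures whatever state the container happens to be in, while the Dockerfile records every step, which is why the build approach is preferred.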
Dockerfile instructions

We've already seen some of the available Dockerfile instructions, like RUN and EXPOSE. But there are also a variety of other instructions we can put in our Dockerfile. These include CMD, ENTRYPOINT, ADD, COPY, VOLUME, WORKDIR, USER, ONBUILD, and ENV.
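As an illustrative sketch, a Dockerfile exercising several of these instructions might look like the following; the base image, packages, and paths are hypothetical examples, not a recommended production setup:

```dockerfile
# Build a simple Nginx-based image
FROM ubuntu
RUN apt-get update && apt-get install -y nginx
ENV APP_ENV production
WORKDIR /var/www/html
COPY index.html .
VOLUME ["/var/log/nginx"]
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Here CMD supplies the default command run at container start; ENTRYPOINT (not shown) would instead fix the executable while letting CMD provide its default arguments.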
Pushing images to the Docker Hub

Once we've got an image, we can upload it to the Docker Hub. This allows us to make it available for others to use. For example, we could share it with others in our organization or make it publicly available.
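A typical push workflow, assuming a Docker Hub account; the username and image name below are placeholders:

```shell
# Log in to the Docker Hub
docker login

# Tag the image under your Docker Hub username, then push it
docker tag web_base youruser/web_base
docker push youruser/web_base
```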
Running your own Docker registry
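As a minimal sketch using the official registry image, a private registry can be run locally and images pushed to it by tagging them with the registry's host and port:

```shell
# Run a local registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# Tag an image for the local registry and push it
docker tag ubuntu localhost:5000/ubuntu
docker push localhost:5000/ubuntu
```

By default this registry is unauthenticated and unencrypted, so in practice it is usually placed behind TLS.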
Using Docker to test a static website

One of the simplest use cases for Docker is as a local web development environment: an environment that allows you to replicate your production environment and ensure what you develop will run in the environment to which you want to deploy it. We're going to start with installing an Nginx container to run a simple website.
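A minimal sketch of such a setup, serving a local directory through the official nginx image; the host directory path and port are arbitrary examples:

```shell
# Serve a local directory as a static site on port 8080
docker run -d --name static_web -p 8080:80 \
    -v /home/user/website:/usr/share/nginx/html:ro nginx

# Check that the site responds
curl http://localhost:8080/
```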
Sharing Data with Containers

The container typically lives as long as the application lives. However, this has some negative implications for the application data. Applications naturally go through a variety of changes to accommodate both business and technical changes, even in their production environments. There are other causes, such as application malfunction, version changes, and application maintenance, for applications to be updated and upgraded consistently.

In the case of a general-purpose computing model, even when an application dies for any reason, the persistent data associated with this application would be preserved in the filesystem.

The data volume

The data volume is the fundamental building block of data sharing in the Docker environment. A data volume can be declared in a Docker image using the VOLUME instruction of the Dockerfile. It can also be specified during the launch of a container using the -v option of the docker run subcommand.

Sharing host data

Docker exposes the host directory or file mounting facility through the -v option of the docker run subcommand. The -v option has three different formats, enumerated as follows:

1. -v <container mount path>
2. -v <host path>:<container mount path>
3. -v <host path>:<container mount path>:<read write mode>

Here, <host path> is an absolute path in the Docker host, <container mount path> is an absolute path in the container filesystem, and <read write mode> can be either read-only (ro) or read-write (rw) mode.
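Concrete examples of the three formats; the host paths are arbitrary:

```shell
# Format 1: let Docker manage the host-side location of the volume
docker run -v /var/lib/mysql ubuntu

# Format 2: mount a host directory read-write (the default mode)
docker run -v /data/web:/var/www ubuntu

# Format 3: mount a host directory read-only
docker run -v /data/web:/var/www:ro ubuntu
```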

Running MySQL Database with Persistent Volume

# docker search mysql
# docker pull mysql
# mkdir -p /data/docker/mysql
# mkdir -p /data/docker/mysql/database
# docker run -d --name mysql -v /data/docker/mysql/database:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=password -e MYSQL_USER=suresh -e MYSQL_PASSWORD=password2 -e MYSQL_DATABASE=koenig mysql
# ls -l /data/docker/mysql/database/
# docker ps
# docker inspect mysql | grep -i ipaddress
# yum whatprovides "*bin/mysql"
# yum install mariadb-5.5.56-2.el7.x86_64 -y
# mysql -uroot -ppassword -h172.17.0.2

Note that the official mysql image uses the MYSQL_USER environment variable (not MYSQL_USERNAME) to create an additional user.


Sharing data between containers

Data-only containers

Mounting data volumes from other containers

The Docker engine provides a nifty interface to mount (share) the data volume from one container to another. Docker makes this interface available through the --volumes-from option of the docker run subcommand. The --volumes-from option takes a container name or container ID as its input and automatically mounts all the data volumes available on the specified container. Docker allows you to mount data volumes from multiple containers by using the --volumes-from option multiple times.
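A minimal sketch; the container names and volume path are illustrative:

```shell
# Create a container that owns a data volume
docker run -v /shared_data --name datastore ubuntu true

# Mount that container's volumes into a second container;
# inside it, /shared_data is now available
docker run -i -t --volumes-from datastore --name client ubuntu /bin/bash
```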
The practicality of data sharing between containers
Networking and Docker

Each Docker container has its own network stack; this is due to the Linux kernel NET namespace, where a new NET namespace for each container is instantiated and cannot be seen from outside the container or from other containers.

Docker networking is powered by the following network components and services:

Linux bridges
Open vSwitch
NAT
iptables

The docker0 bridge

The docker0 bridge is the heart of default networking. When the Docker service is started, a Linux bridge is created on the host machine. The interfaces on the containers talk to the bridge, and the bridge proxies to the external world. Multiple containers on the same host can talk to each other through the Linux bridge.

docker0 can be configured via the --net flag and has, in general, four modes:

--net default
--net=none
--net=container:$container2
--net=host

The --net default mode

In this mode, the default bridge is used as the bridge for containers to connect to each other.

The --net=none mode

With this mode, the container created is truly isolated and cannot
connect to the network.

The --net=container:$container2 mode

With this flag, the container created shares its network namespace with the container called $container2.

The --net=host mode

With this mode, the container created shares its network namespace with the host.
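The four modes as concrete commands; the container name in the third example is illustrative:

```shell
# Default bridge networking (no flag needed)
docker run -i -t ubuntu /bin/bash

# No networking at all
docker run -i -t --net=none ubuntu /bin/bash

# Share the network namespace of an existing container
docker run -i -t --net=container:redis1 ubuntu /bin/bash

# Share the host's network namespace
docker run -i -t --net=host ubuntu /bin/bash
```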
Port mapping in Docker containers

When the first container is created, a new network namespace is created for the container. A vEthernet link is created between the container and the Linux bridge. Traffic sent from eth0 of the container reaches the bridge through the vEthernet interface and gets switched thereafter. The following command can be used to show a list of Linux bridges:

# brctl show
Docker OVS
Linking Docker containers

# docker pull redis
# docker run -d --name redis1 redis
# docker ps

Notice that it has started on port 6379.

# docker run -it --link redis1:redis --name redisclient1 busybox
/ # cat /etc/hosts
/ # ping -c 3 redis
/ # set | grep REDIS
/ # exit
Creating a new bridge network and connecting a container

# docker network ls
# docker network create --driver=bridge --subnet=192.168.0.0/24 my-bridge1
# docker network ls
# ip route add 192.168.0.0/24 via 192.168.191.2
# docker network ls

# ifconfig | more
# brctl show
# docker run -d -it --name my-ubuntu --network my-bridge1 ubuntu /bin/bash
# brctl show

# docker exec -it my-ubuntu /bin/bash
root@41bee90acebf:/# apt-get update && apt-get install net-tools -y
root@41bee90acebf:/# ifconfig
