
Introduction to Kubernetes

Workshop

2015/11/11
Amy Unruh, Jeff Mendoza, Brian Dorsey, Ian Lewis, Sarah Novotny,
Eli Bixby
All code in this presentation is licensed under Apache License Version 2.0.

See https://github.com/kubernetes/kubernetes/ for the “guestbook” example.
Welcome and Logistics

● The workshop instructions document:

https://goo.gl/YcUVaJ
You can add comments to this doc. Please feel free to do that if you find something that’s
wrong or confusing, or have some suggestions.

● Please group up if you want to!


What is Kubernetes?

kubernetes.io
github.com/kubernetes
So, what are containers?
Containers
Old Way: Shared Machines

• No isolation
• No namespacing
• Highly coupled apps and OS libs

[Diagram: several apps sharing common libs directly on one kernel]

#kubernetes @kubernetesio
Old Way: Virtual Machines

• Some isolation
• Inefficient
• Still highly coupled to the guest OS
• Hard to manage

[Diagram: apps and libs stacked on separate guest kernels inside two VMs]
New Way: Containers

[Diagram: apps, each with their own libs, running as containers on one shared kernel]
But what ARE they?
• Containers share the same operating system kernel

• Container images are stateless and contain all dependencies


• static, portable binaries
• constructed from layered filesystems
• Containers provide isolation (from each other and from the host)
• Resources (CPU, RAM, Disk, etc.)
• Users
• Filesystem
• Network

@briandorsey
Why containers?
• Performance
• Repeatability
• Isolation
• Quality of service
• Accounting
• Visibility
• Portability

A fundamentally different way of managing applications

Images by Connie Zhou
Now that we have containers...
Isolation: Keep jobs from interfering with each other
Scheduling: Where should my job be run?
Lifecycle: Keep my job running
Discovery: Where is my job now?
Constituency: Who is part of my job?
Scale-up: Making my jobs bigger or smaller
Auth{n,z}: Who can do things to my job?
Monitoring: What’s happening with my job?
Health: How is my job feeling?
Back to Kubernetes
Greek for “Helmsman”; also the root of
the word “Governor”
• Container orchestrator
• Runs containers
• Supports multiple cloud and
bare-metal environments
• Inspired and informed by Google’s
experiences and internal systems
• Open source, written in Go

Manage applications, not machines


Design principles
Declarative > imperative: State your desired results, let the system actuate
Control loops: Observe, rectify, repeat
Simple > Complex: Try to do as little as possible
Modularity: Components, interfaces, & plugins
Legacy compatible: Requiring apps to change is a non-starter
Network-centric: IP addresses are cheap
No grouping: Labels are the only groups
Bulk > hand-crafted: Manage your workload in bulk
Open > Closed: Open Source, standards, REST, JSON, etc.

Primary Kubernetes concepts...
Node: physical or virtual machine running Kubernetes, onto which pods can
be scheduled

Container: A sealed application package (Docker)


Pod: A co-located group of Containers and Volumes
example: content syncer & web server

...Primary Kubernetes concepts

Controller: A loop that drives current state towards desired state


example: replication controller

Service: A set of running pods that work together


example: load-balanced backends

Labels: Identifying metadata attached to other objects


example: phase=canary vs. phase=prod
Selector: A query against labels, producing a set result
example: all pods where label phase == prod

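The concepts above can be sketched as a single manifest. A minimal pod definition, with illustrative names and image (not from the workshop materials), might look like:

```yaml
# Minimal Pod: one container, labeled so that selectors
# and services can find it. Name, labels, and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: nifty-web
  labels:
    App: Nifty
    Phase: prod
spec:
  containers:
  - name: web
    image: nginx:1.9
    ports:
    - containerPort: 80
```

Creating it with `kubectl create -f pod.yaml` schedules the pod onto a node; `kubectl get pods -l App=Nifty` then finds it by label.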
Let’s start up a cluster!
http://cloud.google.com/console
Google Container Engine

Kubernetes Cluster

[Diagram: the Kubernetes Master (API Server, Scheduler, Controller Manager) coordinating several Kubernetes Nodes, each running a Kubelet and a Proxy and hosting pods of containers]
Pods

Small group of containers & volumes

Tightly coupled
• scheduled onto the same node

The atom of cluster scheduling & placement

Each pod has its own IP address
• shared namespace: containers share the IP address & localhost

Ephemeral
• can die and be replaced

Example: data puller & web server

[Diagram: a pod containing a File Puller and a Web Server sharing a Volume, pulling from a Content Manager and serving Consumers]
Volumes

Pod-scoped

Often share pod’s lifetime & fate

Various types of volumes:
• Empty directory (default)
• Host file/directory
• Git repository
• GCE Persistent Disk
• NFS
• AWS ElasticBlockStore
• ...and more

[Diagram: two containers in a pod mounting Empty, Git, Host, and GCE PD volumes]
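A sketch of the default volume type above: two containers in one pod share an emptyDir volume (the container images here are hypothetical placeholders).

```yaml
# Two containers share a pod-scoped emptyDir volume:
# the puller writes into it, the web server reads from it.
apiVersion: v1
kind: Pod
metadata:
  name: content-server
spec:
  volumes:
  - name: content
    emptyDir: {}        # default type; removed when the pod dies
  containers:
  - name: puller
    image: example/file-puller   # hypothetical image
    volumeMounts:
    - name: content
      mountPath: /data
  - name: web
    image: nginx:1.9
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
```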
Pod lifecycle

• Once scheduled to a node, pods do not move
• You can set a pod’s RestartPolicy, which applies to its containers
• Pod phases are: pending, running, succeeded, failed, or unknown
• Pods do not reschedule themselves if they fail: pod replication and rollout is handled by a replication controller (which we will introduce soon)
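A RestartPolicy sketch (pod name and command are illustrative):

```yaml
# RestartPolicy controls what the kubelet does when a
# container exits: Always (the default), OnFailure, or Never.
apiVersion: v1
kind: Pod
metadata:
  name: one-shot
spec:
  restartPolicy: OnFailure
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "echo done"]
```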
Labels

Arbitrary metadata

Attached to any API object

Generally represent identity

Queryable by selectors
• think SQL ‘select ... where ...’

The only grouping mechanism
• pods under a ReplicationController
• pods in a Service
• capabilities of a node

Example: “phase: canary”

[Diagram: four pods labeled App: Nifty, with Phase: Dev or Test and Role: FE or BE]
Selectors

[Diagram sequence: four pods labeled {App: Nifty, Phase: Dev, Role: FE}, {App: Nifty, Phase: Dev, Role: BE}, {App: Nifty, Phase: Test, Role: FE}, and {App: Nifty, Phase: Test, Role: BE}; successive selectors narrow the match]

• App == Nifty → all four pods
• App == Nifty, Role == FE → the two FE pods
• App == Nifty, Role == BE → the two BE pods
• App == Nifty, Phase == Dev → the two Dev pods
• App == Nifty, Phase == Test → the two Test pods
Replication Controllers

Control loops
• drive current state -> desired state
• act independently
• use APIs - no shortcuts or back doors
• observed state is truth

A recurring pattern in the system

Example: ReplicationController

[Diagram: the control loop cycle - observe, diff, act]
Replication Controllers

Replication Controller
- Name = “nifty-rc”
- Selector = {“App”: “Nifty”, "Phase": "Dev", "Role": "FE"}
- PodTemplate = { ... }
- NumReplicas = 4
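The fields on the slide map onto a v1 manifest roughly as follows; since the slide elides the PodTemplate body, it is filled in here with an illustrative container.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nifty-rc
spec:
  replicas: 4                 # NumReplicas
  selector:                   # which pods this controller owns
    App: Nifty
    Phase: Dev
    Role: FE
  template:                   # PodTemplate: stamped out per replica
    metadata:
      labels:                 # must match the selector
        App: Nifty
        Phase: Dev
        Role: FE
    spec:
      containers:
      - name: fe
        image: example/nifty-fe   # hypothetical image
```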
Replication Controllers

[Diagram sequence: the controller reports Desired = 4, Current = 4, with pods b0111, f0118, d9376, and a1209 running on nodes 1-4. When node 2 is lost, Current drops to 3; the controller starts a replacement pod c9bad (here on node 3), restoring Current = 4]
The first appearance of the
‘guestbook’ app
Services

A group of pods that act as one
• group == selector

Defines access policy
• only “load balanced” for now

Gets a stable virtual IP and port
• called the service portal
• also a DNS name

VIP is captured by kube-proxy
• watches the service constituency
• updates when backends change

Hide complexity - ideal for non-native apps

[Diagram: a Client reaching the pod group through a Portal (VIP)]
Services

Service
- Name = “nifty-svc”
- Selector = {“App”: “Nifty”}
- Port = 9376
- targetPort = 8080

[Diagram: a Client sends TCP/UDP traffic to the assigned portal IP, 10.0.0.1:9376; iptables DNAT rules, programmed by kube-proxy (which watches the apiserver), redirect it to backend pods at 10.240.1.1:8080, 10.240.2.2:8080, and 10.240.3.3:8080]
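The service on the slide, written out as a manifest (a sketch; field values are taken from the slide):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nifty-svc
spec:
  selector:
    App: Nifty          # pods backing this service
  ports:
  - protocol: TCP
    port: 9376          # the portal (virtual IP) port
    targetPort: 8080    # the port the pods listen on
```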
back to the ‘guestbook’ app...
...let’s add a frontend!
Inspecting your cluster and apps: kubectl, and the dashboard UI
Cluster services
Logging, Monitoring, DNS, etc.

All run as pods in the cluster - no special treatment, no back doors

Open-source solutions for everything


• cadvisor + influxdb + heapster == cluster monitoring
• fluentd + elasticsearch + kibana == cluster logging
• skydns + kube2sky == cluster DNS

Can be easily replaced by custom solutions


• Modular clusters to fit your needs

Rolling updates, rollbacks, and
canaries
A Kubernetes Cluster, redux

[Diagram: the Kubernetes Master (API Server, Scheduler, Controller Manager, and a backing store) coordinating several Kubernetes Nodes, each running a Kubelet and a Proxy and hosting pods of containers]

The API server is the front-end for the Kubernetes control plane (scales horizontally)
Canary Example

[Diagram: two Replication Controllers create frontend pods labeled type = FE - one with version = v1 and #pods = 2, one with version = v2 and #pods = 1. A single Service (VIP) with label selector type = FE matches both versions, so a fraction of traffic reaches the v2 canary]
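The key to the canary setup is the service’s selector: it matches on type only, omitting version, so it spans both replication controllers. A sketch (service name and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    type: FE       # no version key: matches v1 and v2 pods alike
  ports:
  - port: 80
    targetPort: 8080
```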
Pod Patterns

http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html
Sidecar Pattern

Sidecar containers extend and enhance the "main" container.

[Diagram: a Git Synchronizer container and a Node.js App container share a Volume inside a pod; the synchronizer pulls from Github and the app serves Consumers]
Ambassador Pattern

Ambassador containers proxy a local connection to the world.

[Diagram: a PHP App talks over localhost to a Redis Proxy container in the same pod, which forwards to Redis Shards; the pod serves Consumers]
Adapter Pattern

Adapter containers standardize and normalize output.

[Diagram: a Redis container and a Redis Exporter container communicate over localhost or a shared Volume; the exporter presents normalized metrics to a Monitoring System]
New in 1.1

http://blog.kubernetes.io/2015/11/Kubernetes-1-1-Performance-upgrades-improved-tooling-and-a-growing-community.html
Kubernetes 1.1

• HTTP Load Balancing
• Batch Jobs
• Autoscaling
• Resource Overcommit
• IP Tables Kube Proxy
• New kubectl tools
• 1M QPS, 1000+ nodes*

+ Daemon Sets, Deployments, and much more!
Ingress for HTTP Load Balancing [Beta]

[Diagram: instead of separate load-balanced services (Service-foo: 24.1.2.3, Service-bar: 24.4.5.6), one Ingress routes http://k8s.io/foo to fooSvc and http://k8s.io/bar to barSvc]

Ingress API:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: k8s.io
    http:
      paths:
      - path: /foo
        backend:
          serviceName: fooSvc
          servicePort: 80
      - path: /bar
        backend:
          serviceName: barSvc
          servicePort: 80
Horizontal Pod Autoscaling [Beta]

apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleRef:
    kind: ReplicationController
    name: php-apache
    namespace: default
  minReplicas: 1
  maxReplicas: 10
  cpuUtilization:
    targetPercentage: 50

Image: https://www.flickr.com/photos/davedehetre/4440211085
Kubernetes is Open Source
We want your help!

http://kubernetes.io
https://github.com/kubernetes/kubernetes/
Kubernetes Slack Community: http://slack.kubernetes.io
@kubernetesio

● Cloud Native Computing Foundation:
https://cncf.io/
● Open Container Initiative:
https://www.opencontainers.org/

end
