
A Technical Seminar on

“DOCKER”
Submitted in Partial Fulfillment of the Requirements
For the award of the Degree of

Bachelor of Technology in
Science and Humanities Department (S&H)

By
DACHEPALLY MANJUNATH
18311A1973

Under the Guidance of


Mr. DAYAKAR
(ASSISTANT PROFESSOR)

Department of Science and Humanities


Sreenidhi Institute of Science & Technology (Autonomous)
(An Autonomous Institution affiliated to JNTUH)
Yamnampet, Ghatkesar, Hyderabad - 501 301.

APRIL 2019

DEPARTMENT OF SCIENCE AND HUMANITIES

SREENIDHI INSTITUTE OF SCIENCE AND TECHNOLOGY

Yamnampet, Ghatkesar, Hyderabad - 501 301

CERTIFICATE

This is to certify that the Technical Seminar entitled “DOCKER”, submitted by DACHEPALLY MANJUNATH, bearing Roll No. 18311A1973, towards partial fulfillment of the requirements for the award of the Bachelor's Degree in Electronics & Computer Engineering from Sreenidhi Institute of Science & Technology, Ghatkesar, Hyderabad, is a record of bonafide work done by him. The results embodied in this work have not been submitted to any other University or Institute for the award of any degree or diploma.

MR. DAYAKAR                                                        DR. VENKAT REDDY


Assistant Professor                                                 Professor & HOD

DECLARATION

This is to certify that the work reported in the present seminar titled “DOCKER” is a record of work done by me in the Department of Electronics and Computer Engineering, Sreenidhi Institute of Science and Technology, Yamnampet, Ghatkesar.

The report is based on the seminar work done entirely by me and not copied from any other
source.

DACHEPALLY MANJUNATH

18311A1973

ACKNOWLEDGMENT

We convey our sincere thanks to Dr. Shivareddy, Principal, Sreenidhi Institute of Science and Technology, Ghatkesar, for providing the resources to complete this seminar. We are very thankful to Dr. Venkatreddy, Head of the ECM Department, Sreenidhi Institute of Science and Technology, Ghatkesar, for initiating this seminar, for giving valuable and timely suggestions on our seminar work, and for their kind co-operation in the completion of the seminar.

We convey our sincere thanks to all the faculty of the ECM department, Sreenidhi Institute of Science and Technology, for their continuous help, co-operation and support in completing this seminar.

Finally, we extend our gratitude to the Almighty, our parents, all our friends, and the teaching and non-teaching staff, who directly or indirectly helped us in this endeavor.

LIST OF FIGURES:

PAGE NUMBERS
FIGURE 1: INTRODUCTION 08

FIGURE 2: HISTORY 09
FIGURE 3: OPERATIONS 11
FIGURE 4: POPULARITY OF DOCKER 12
FIGURE 5: POPULARITY OF DOCKER 14
FIGURE 6: DISADVANTAGES 15

CONTENTS

1. ABSTRACT 7
2. INTRODUCTION 8
3. HISTORY 8
4. TECHNOLOGY 9
5. OPERATIONS 11
6. POPULARITY 11
7. DISADVANTAGES 14
8. CONCLUSION 16


ABSTRACT
Docker is an open source tool that simplifies managing Linux containers. A container is a sandbox environment that runs a collection of processes. Containers behave like lightweight VMs but share the kernel of the host OS. Docker adds conveniences on top of Linux containers, such as AUFS layered storage, image versioning, and the Docker registry (repository). This seminar serves as an introduction to working with Docker tools: it covers the basic concepts behind Docker, explains the difference between a Docker container and a VM, and shows how easy it is to create a Docker image and launch a container from it.

Docker is an open platform for developing, shipping and running applications in a faster way. Docker enables applications to run separately from the host infrastructure and lets you treat the infrastructure like a managed application. Docker also helps to ship code faster, test faster, deploy faster, and shorten the cycle between writing code and running code. Docker does this by combining a lightweight container virtualization platform with workflows and tooling that help manage and deploy applications. Docker uses resource isolation features of the Linux kernel that limit, account for and isolate resource use.

Docker also implements a high-level API that provides lightweight containers running processes in isolation. Using these isolated containers, services can be restricted to specific resources and provisioned with a private view of the operating system, including their own process space.

INTRODUCTION

Docker is the leader in the containerization market, combining an enterprise-grade container platform with world-class services to give developers and IT alike the freedom to build, manage and secure applications without the fear of technology or infrastructure lock-in. Today's organizations are under pressure to digitally transform their business but are constrained by a diverse portfolio of applications, clouds and premises-based infrastructures.

Docker unlocks the potential of every organization with a container platform that brings
traditional applications and microservices built on Windows, Linux and mainframe into an
automated and secure supply chain, advancing dev to ops collaboration.
As a result, organizations report a 300 percent improvement in time to market, while
reducing operational costs by 50 percent. Inspired by open source innovation and a rich
ecosystem of technology and go-to-market partners, Docker’s container platform and
services are used by millions of developers and more than 650 Global 10K commercial
customers including ADP, GE, MetLife, PayPal and Societe Generale.

FIGURE 1

HISTORY
Solomon Hykes started Docker in France as an internal project within dotCloud, a platform-as-a-service company, with initial contributions by other dotCloud engineers including Andrea Luzzardi and Francois-Xavier Bourlet. Jeff Lindsay also became involved as an independent collaborator. Docker represents an evolution of dotCloud's proprietary technology, which is itself built on earlier open-source projects such as Cloudlets. The software debuted to the public in Santa Clara at PyCon in 2013 and was released as open source in March 2013. On March 13, 2014, with the release of version 0.9, Docker dropped LXC as the default execution environment and replaced it with its own libcontainer library written in the Go programming language.
Adoption

 On September 19, 2013, Red Hat and Docker announced a collaboration around Fedora, Red Hat Enterprise Linux, and OpenShift.
 In November 2014 Docker container services were announced for the Amazon
Elastic Compute Cloud (EC2).
 On November 10, 2014, Docker announced a partnership with Stratoscale.
 On December 4, 2014, IBM announced a strategic partnership with Docker that
enables Docker to integrate more closely with the IBM Cloud.
 On June 22, 2015, Docker and several other companies announced that they are
working on a new vendor and operating-system-independent standard for software
containers.
 As of October 24, 2015, the project had over 25,600 GitHub stars (making it the 20th
most-starred GitHub project), over 6,800 forks, and nearly 1,100 contributors.
 In April 2016, Windocks, an independent ISV, released a port of Docker's open source project to Windows, supporting Windows Server 2012 R2 and Server 2016, with all editions of SQL Server 2008 onward.
 A May 2016 analysis showed the following organizations as main contributors to
Docker: The Docker team, Cisco, Google, Huawei, IBM, Microsoft, and Red Hat.
 On October 4, 2016, Solomon Hykes announced InfraKit as a new self-healing
container infrastructure effort for Docker container environments.
 A January 2017 analysis of LinkedIn profile mentions showed Docker presence
grew by 160% in 2016.

FIGURE 2

TECHNOLOGY
 Docker is developed primarily for Linux, where it uses the resource isolation features of the Linux kernel, such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS, to allow independent containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines (VMs). The Linux kernel's support for namespaces mostly isolates an application's view of the operating environment, including process trees, network, user IDs and mounted file systems, while the kernel's cgroups provide resource limiting for memory and CPU. Since version 0.9, Docker includes the libcontainer library as its own way to directly use virtualization facilities provided by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC and systemd-nspawn.

 Building on top of facilities provided by the Linux kernel (primarily cgroups and namespaces), a Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the kernel's functionality and uses resource isolation for CPU and memory, and separate namespaces to isolate the application's view of the operating system. Docker accesses the Linux kernel's virtualization features either directly using the libcontainer library, which is available as of Docker 0.9, or indirectly via libvirt, LXC (Linux Containers) or systemd-nspawn (see the sketch after this list).
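The kernel features described above surface to users as simple options on a container. The following is only a minimal sketch, using the Docker SDK for Python (the docker package), which this report does not otherwise cover; the image tag and the limit values are arbitrary assumptions.

```python
import docker  # Docker SDK for Python: pip install docker (assumed available)

client = docker.from_env()  # talk to the local Docker daemon

# Run a short-lived container with cgroup-backed limits:
# mem_limit maps to the memory cgroup, nano_cpus to CPU bandwidth control,
# while namespaces give the process its own PID/network/mount view.
container = client.containers.run(
    "alpine:3.18",            # hypothetical small image
    command="sleep 30",
    detach=True,
    mem_limit="128m",         # memory cgroup limit
    nano_cpus=500_000_000,    # roughly half a CPU
)

print(container.name, container.status)
container.stop()
container.remove()
```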

Tools

 Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to configure the application's services and performs the creation and start-up of all the containers with a single command. The docker-compose CLI utility allows users to run commands on multiple containers at once, for example building images, scaling containers, running containers that were stopped, and more. Commands related to image manipulation, or user-interactive options, are not relevant in Docker Compose because they address a single container. The docker-compose.yml file is used to define an application's services and includes various configuration options: for example, the build option defines configuration options such as the Dockerfile path, the command option allows one to override default Docker commands, and more (a multi-container sketch follows this list). The first public version of Docker Compose (version 0.0.1) was released on December 21, 2013. The first production-ready version (1.0) was made available on October 16, 2014.
 Docker Swarm provides native clustering functionality for Docker containers, which turns a group of Docker Engines into a single virtual Docker Engine. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. The swarm CLI utility allows users to run Swarm containers, create discovery tokens, list nodes in the cluster, and more. The docker node CLI utility allows users to run various commands to manage nodes in a swarm, for example listing the nodes in a swarm, updating nodes, and removing nodes from the swarm. Docker manages swarms using the Raft consensus algorithm: for an update to be performed, the majority of Swarm nodes need to agree on it (a small Swarm sketch also follows this list).
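Since the report describes Compose and Swarm only in prose, the sketch below (again the Docker SDK for Python, not the docker-compose or swarm CLIs themselves) mimics what a small two-service compose file does and shows a single-node swarm running a replicated service. All names, images and ports are illustrative assumptions.

```python
import docker

client = docker.from_env()

# Compose-style idea: two cooperating services on one user-defined network.
client.networks.create("demo_net", driver="bridge")      # hypothetical network name
db = client.containers.run("redis:7-alpine", name="demo_db",
                           network="demo_net", detach=True)
web = client.containers.run("nginx:alpine", name="demo_web",
                            network="demo_net",
                            ports={"80/tcp": 8080}, detach=True)
print([c.name for c in client.containers.list()])

# Swarm-style idea: initialize a one-node swarm and run a replicated service;
# the desired state (3 replicas) is maintained by the Raft-managed manager quorum.
client.swarm.init(advertise_addr="127.0.0.1")
service = client.services.create(
    "nginx:alpine",
    name="demo_service",
    mode=docker.types.ServiceMode("replicated", replicas=3),
)
print(service.name)
```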
OPERATIONS

FIGURE 3

Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server. This helps enable flexibility and portability in where the application can run, whether on premises, in a public cloud, in a private cloud, on bare metal, etc.
Because Docker containers are lightweight, a single server or virtual machine can run
several containers simultaneously. A 2016 analysis found that a typical Docker use
case involves running five containers per host, but that many organizations run 10 or
more.
Using containers may simplify the creation of highly distributed systems by allowing
multiple applications, worker tasks and other processes to run autonomously on a single
physical machine or across multiple virtual machines. This allows the deployment of
nodes to be performed as the resources become available or when more nodes are
needed, allowing a platform as a service (PaaS)-style of deployment and scaling for
systems such as Apache Cassandra, MongoDB and Riak.
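To make "package an application and its dependencies" concrete, here is a minimal sketch with the Docker SDK for Python. It assumes a hypothetical ./app directory containing a Dockerfile; the tag and port mapping are illustrative.

```python
import docker

client = docker.from_env()

# Build an image from a (hypothetical) ./app directory that contains a Dockerfile.
image, build_logs = client.images.build(path="./app", tag="demo-app:1.0")
for chunk in build_logs:                      # stream the build output
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Launch a container from the freshly built image on any Docker-capable host.
container = client.containers.run(
    "demo-app:1.0", detach=True, ports={"8000/tcp": 8000}, name="demo-app"
)
print("running:", container.short_id)
```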

Popularity & benefits of using Docker


1. Return on investment & cost savings

The first advantage of using Docker is the ROI. The biggest driver of most management decisions when selecting a new product is the return on investment. The more a solution can drive down costs while raising profits, the better the solution is, especially for large, established companies that need to generate steady revenue over the long term. In this sense, Docker can help facilitate this type of savings by dramatically reducing infrastructure resources. The nature of Docker is that fewer resources are necessary to run the same application. Because of Docker's reduced infrastructure requirements, organizations are able to save on everything from server costs to the employees needed to maintain them. Docker allows engineering teams to be smaller and more effective.

2. Standardization & productivity

Docker containers ensure consistency across multiple development and release cycles, standardising your environment. One of the biggest advantages of a Docker-based architecture is actually standardization. Docker provides repeatable development, build, test, and production environments. Standardizing service infrastructure across the entire pipeline allows every team member to work in a production-parity environment. By doing this, engineers are better equipped to efficiently analyze and fix bugs within the application. This reduces the amount of time wasted on defects and increases the amount of time available for feature development.

As mentioned, Docker containers allow you to commit changes to your Docker images and version control them. For example, if you perform a component upgrade that breaks your whole environment, it is very easy to roll back to a previous version of your Docker image. This whole process can be tested in a few minutes. Docker is fast, allowing you to quickly make replications and achieve redundancy. Also, launching Docker images is as fast as running a machine process.
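A minimal sketch of this commit-and-rollback idea with the Docker SDK for Python; the repository names and tags (demo-app:v1, v2, stable) are hypothetical and assume a v1 image already exists locally.

```python
import docker

client = docker.from_env()

# Change something inside a running container, then commit it as a new image version.
container = client.containers.run("ubuntu:22.04", command="sleep 60", detach=True)
container.exec_run("bash -c 'echo upgraded > /opt/marker.txt'")  # simulated component upgrade
container.commit(repository="demo-app", tag="v2")                # new, versioned image

# If v2 turns out to be broken, rolling back is just re-pointing a tag
# at the previous image (assumes demo-app:v1 already exists locally).
previous = client.images.get("demo-app:v1")
previous.tag("demo-app", tag="stable")

container.stop()
container.remove()
```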

FIGURE 4

3. CI efficiency

Docker enables you to build a container image and use that same image across every step of the deployment process. A huge benefit of this is the ability to separate non-dependent steps and run them in parallel. The time it takes to go from build to production can thus be shortened notably.

4. Compatibility & maintainability

Eliminate the “it works on my machine” problem once and for all. One of the benefits
that the entire team will appreciate is parity. Parity, in terms of Docker, means that your
images run the same no matter which server or whose laptop they are running on. For
your developers, this means less time spent setting up environments, debugging
environment-specific issues, and a more portable and easy-to-set-up codebase. Parity
also means your production infrastructure will be more reliable and easier to maintain.
5. Simplicity & faster configurations

One of the key benefits of Docker is the way it simplifies matters. Users can take their
own configuration, put it into code and deploy it without any problems. As Docker can
be used in a wide variety of environments, the requirements of the infrastructure are no
longer linked with the environment of the application.

6. Rapid Deployment

Docker manages to reduce deployment time to seconds. This is because it creates a container for every process and does not boot an OS. Data can be created and destroyed without worrying that the cost to bring it back up would be higher than what is affordable.

7. Continuous Deployment & Testing

Docker ensures consistent environments from development to production. Docker containers are configured to maintain all configurations and dependencies internally, so you can use the same container from development to production, making sure there are no discrepancies or manual intervention.
If you need to perform an upgrade during a product’s release cycle, you can easily
make the necessary changes to Docker containers, test them, and implement the same
changes to your existing containers. This sort of flexibility is another key advantage of
using Docker. Docker really allows you to build, test and release images that can be
deployed across multiple servers. Even if a new security patch is available, the process
remains the same. You can apply the patch, test it and release it to production.

8. Multi-Cloud Platforms

This is possibly one of Docker’s greatest benefits. Over the last few years, all major
cloud computing providers, including Amazon Web Services (AWS) and Google Cloud Platform (GCP), have embraced Docker's availability and added individual
support. Docker containers can be run inside an Amazon EC2 instance, Google
Compute Engine instance, Rackspace server or VirtualBox, provided that the host OS
supports Docker. If this is the case, a container running on an Amazon EC2 instance
can easily be ported between environments, for example to VirtualBox, achieving similar
consistency and functionality. Also, Docker works very well with other providers like

Microsoft Azure and OpenStack, and can be used with various configuration managers like Chef, Puppet, Ansible, etc.

FIGURE 5

9. Isolation

Docker makes sure each container has its own resources that are isolated from other containers. You can have various containers for separate applications running completely different stacks. Docker also helps you ensure clean app removal, since each application runs in its own container: if you no longer need an application, you can simply delete its container, and it won't leave any temporary or configuration files on your host OS.
On top of these benefits, Docker also ensures that each application only uses the resources that have been assigned to it, so a particular application won't consume all of your available resources, which would otherwise lead to performance degradation or complete downtime for other applications.
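A small sketch of this per-container isolation with the Docker SDK for Python: each container reports only its own processes and resource usage, and removing it leaves nothing behind. Images and names are illustrative.

```python
import docker

client = docker.from_env()

app_a = client.containers.run("nginx:alpine", name="stack_a", detach=True)
app_b = client.containers.run("redis:7-alpine", name="stack_b", detach=True)

# Each container sees only its own process table ...
print(app_a.top()["Processes"])

# ... and reports its own resource usage, independent of the other container.
usage = app_b.stats(stream=False)["memory_stats"].get("usage")
print("stack_b memory usage (bytes):", usage)

# Clean removal: deleting a container leaves no stray files on the host OS.
for c in (app_a, app_b):
    c.stop()
    c.remove()
```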

10. Security

The last benefit of using Docker is security. From a security point of view, Docker ensures that applications running in containers are completely segregated and isolated from each other, granting you complete control over traffic flow and management. No Docker container can look into processes running inside another container. From an architectural point of view, each container gets its own set of resources, ranging from processing to network stacks.

DISADVANTAGES

 Containers don’t run at bare-metal speeds – Containers use resources more efficiently than virtual machines, but they are still subject to performance overhead due to overlay networking, interfacing between containers and the host system, and so on. If you want 100% bare-metal performance, you need to use bare metal, not containers.
 The container ecosystem is split – Although the core Docker platform is open source, some container products don’t work with others.
 Persistent data storage is complicated – By design, all of the data inside a container disappears forever when the container shuts down, unless you save it somewhere else first. There are ways to store data persistently in Docker, such as Docker data volumes, but this is arguably a challenge that has yet to be addressed in a seamless manner.
 Graphical applications do not work well – Docker was created as a solution for deploying server applications that don’t need a graphical interface. While there are some creative approaches one can use to run a GUI app inside a container, these solutions are clunky at best.
 Not all applications benefit from containers – In general, applications that are designed to run as a set of discrete microservices stand to gain the most from containers. Otherwise, Docker’s main benefit is that it can simplify application delivery by providing an easy packaging mechanism.
Those who are planning to migrate to Docker should keep these advantages and disadvantages in mind. Docker is not always the best choice for application deployment; in some cases, traditional virtual machines or bare-metal servers are better solutions, and the Docker hype should not obscure this reality. Areas like networking, storage and version management (for the contents of containers) are currently underserved by the present Docker ecosystem and create opportunities for both startups and incumbents.
Over time, it is likely that the difference between virtual machines and containers will become less important, which will shift attention to the ‘build’ and ‘ship’ aspects. The differences here will make the question of ‘What happens to Docker?’ less significant than ‘What happens to the IT industry as a result of Docker?’

FIGURE 6

CONCLUSION
Containerization offers a container-based solution for virtualization. It encapsulates the application and its dependencies into a system- and language-independent package, where your code runs in isolation from other containers but shares the host's resources. Therefore you do not need a VM with an entire guest OS and the overhead of its unused resources; your container alone is enough to run your app.
Docker is exactly that kind of lightweight, container-based solution for virtualization. Comparing the start and stop times of Docker to VMs, we see a significant difference: VMs need 30-45 s to start and 5-10 s to stop, while Docker needs only about 50 ms for both, making it roughly 600 times faster to start and about 100 times faster to stop.
Docker's technology is based on features of the Linux kernel, namely namespaces and cgroups. It provides Linux Containers (LXC) or its own implementation, libcontainer, as the container format, and is thereby able to provide lightweight operating-system-level virtualization.

The already mentioned Docker Engine is basically all of the virtualization technology and utilities together. Docker uses a client-server architecture, in which the user interacts with the Docker daemon (through the command-line interface on the client). The daemon is responsible for building images, running containers and distributing them.
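As a closing illustration of this client-server model, the short sketch below uses the Docker SDK for Python as the client side; every call here is answered by the Docker daemon over its default local socket.

```python
import docker

client = docker.from_env()                 # client end of the client-server architecture

print(client.ping())                       # True if the daemon answered
print(client.version()["Version"])         # engine version reported by the daemon
print(len(client.containers.list()))       # containers the daemon is currently running
```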
