
Highly Paid Skills

Amazon Web Services (AWS) describes both a technology and a company. The company AWS is
a subsidiary of Amazon.com and provides on-demand cloud computing platforms to individuals,
companies, and governments on a paid subscription basis, with a free-tier option available for 12
months. The technology allows subscribers to have at their disposal a full-fledged virtual cluster of
computers, available 24/7/365, through the internet. AWS's virtual computers have most of
the attributes of a real computer, including hardware (CPUs and GPUs for processing, local/RAM
memory, and hard-disk/SSD storage), a choice of operating systems, networking, and pre-loaded
application software such as web servers, databases, and CRM systems. Each AWS system
also virtualizes its console I/O (keyboard, display, and mouse), allowing AWS subscribers to connect
to their AWS system using a modern browser. The browser acts as a window into the virtual
computer, letting subscribers log in, configure, and use their virtual systems just as they would a real
physical computer. They can choose to deploy their AWS systems to provide internet-based
services for their own and their customers' benefit.
The AWS technology is implemented at server farms throughout the world, and maintained by the
Amazon subsidiary. Fees are based on a combination of usage, the
hardware/OS/software/networking features chosen by the subscriber,
required availability, redundancy, security, and service options. Based on what the subscriber needs
and pays for, they can reserve a single virtual AWS computer, a cluster of virtual computers, a
physical (real) computer dedicated for their exclusive use, or even a cluster of dedicated physical
computers. As part of the subscription agreement,[3] Amazon manages, upgrades, and provides
industry-standard security to each subscriber's system. AWS services operate from many global
geographical regions including 6 in North America.[4]
In 2016, AWS comprised more than 70 services spanning a wide range
including compute, storage, networking, database, analytics, application services, deployment,
management, mobile, developer tools, and tools for the Internet of Things. The most popular
include Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3). Most
services are not exposed directly to end users, but instead offer functionality through APIs for
developers to use in their applications. Amazon Web Services offerings are accessed over HTTP,
using the REST architectural style and the SOAP protocol.
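As a concrete illustration of this API-driven model, here is a minimal sketch that lists the S3 buckets in an account through the boto3 SDK for Python, which signs and sends the underlying REST calls. It assumes AWS credentials are already configured in the environment; the bucket contents shown are whatever the account happens to hold.

    import boto3

    # Minimal sketch: calling the S3 REST API through the boto3 SDK.
    # Assumes credentials are configured (environment variables or ~/.aws/credentials).
    s3 = boto3.client("s3")          # signs and sends HTTPS/REST requests for us
    response = s3.list_buckets()     # a GET against the S3 endpoint
    for bucket in response["Buckets"]:
        print(bucket["Name"], bucket["CreationDate"])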
Amazon markets AWS to subscribers as a way of obtaining large scale computing capacity more
quickly and cheaply than building an actual physical server farm.[5] All services are billed based on
usage, but each service measures usage in varying ways.

Contents

1 Availability and Topology
  1.1 Region and region names table
2 History
  2.1 Growth and Profitability
  2.2 Customer base
  2.3 Significant service outages
3 List of products
  3.1 Compute
  3.2 Networking
  3.3 Content delivery
  3.4 Storage and content delivery
  3.5 Database
  3.6 Deployment
  3.7 Management
  3.8 Application services
  3.9 Analytics
  3.10 Miscellaneous
4 Pop-up lofts
5 Charitable work
6 Key People
7 References
8 External links

Availability and Topology

Map showing Amazon Web Services' availability zones within geographic regions around the world.

As of 2017, AWS has distinct operations in the following 16 geographical "regions":[4]

North America (6 regions)


US East (Northern Virginia), where the majority of AWS servers are based[6]
US East (Ohio)
US West (Oregon)
US West (Northern California)
AWS GovCloud (US), based in the Northwestern United States, provided for U.S.
government customers, complementing existing government agencies already using the US
East Region[7]
Canada (Central)
South America (1 region)
Brazil (São Paulo)
Europe / Middle East / Africa (3 regions)
EU (Ireland)
EU (Frankfurt), Germany
EU (London), United Kingdom
Asia Pacific (6 regions)
Asia Pacific (Tokyo), Japan
Asia Pacific (Seoul), South Korea
Asia Pacific (Singapore)
Asia Pacific (Mumbai), India
Asia Pacific (Sydney), Australia
Asia Pacific (Beijing), China
AWS has announced that 3 new regions (having 7 Availability Zones) will come online throughout
2017 in China, India, and the United Kingdom.[4]
Each region is wholly contained within a single country and all of its data and services stay within the
designated region.[3] Each region has multiple "Availability Zones",[8] which consist of one or more
discrete data centers, each with redundant power, networking and connectivity, housed in separate
facilities. Availability Zones do not automatically provide additional scalability or redundancy within a
region, since they are intentionally isolated from each other to prevent outages from spreading
between Zones. Several services can operate across Availability Zones (e.g., S3, DynamoDB) while
others can be configured to replicate across Zones to spread demand and avoid downtime from
failures.
As of December 2014, Amazon Web Services operated an estimated 1.4 million servers across 28
availability zones.[9] The global network of AWS Edge locations consists of 54 points of presence
worldwide, including locations in the United States, Europe, Asia, Australia, and South America.[10]
In 2014, AWS committed to achieving 100% renewable energy usage.[11] In the United States, AWS'
partnerships with renewable energy providers include:

Community Energy of Virginia, a solar farm coming online in 2016, to support the US East
region.[12]
Pattern Development, in January 2015, to construct and operate Amazon Wind Farm Fowler
Ridge.
Iberdrola Renewables, LLC, in July 2015, to construct and operate Amazon Wind Farm US East.
EDP Renewables, in November 2015, to construct and operate Amazon Wind Farm US
Central.[13]
Tesla Motors, to apply battery storage technology to address power needs in the US West
(Northern California) region.[12]
Region and region names table[14]
Region Name                Region
US East (N. Virginia)      us-east-1
US East (Ohio)             us-east-2
US West (N. California)    us-west-1
US West (Oregon)           us-west-2
Canada (Central)           ca-central-1
China (Beijing)            cn-north-1
Asia Pacific (Mumbai)      ap-south-1
Asia Pacific (Seoul)       ap-northeast-2
Asia Pacific (Singapore)   ap-southeast-1
Asia Pacific (Sydney)      ap-southeast-2
Asia Pacific (Tokyo)       ap-northeast-1
EU (Frankfurt)             eu-central-1
EU (Ireland)               eu-west-1
EU (London)                eu-west-2
South America (São Paulo)  sa-east-1
AWS GovCloud (US)          us-gov-west-1
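The region codes in the table above are what API clients and SDKs use to select an endpoint. A minimal sketch, assuming the boto3 SDK and existing credentials; the regions chosen here are arbitrary examples:

    import boto3

    # region_name takes one of the codes from the table above; the same code
    # also appears in the service endpoint, e.g. ec2.eu-west-2.amazonaws.com
    # for EU (London).
    ec2_london = boto3.client("ec2", region_name="eu-west-2")
    ec2_sydney = boto3.client("ec2", region_name="ap-southeast-2")
    print(ec2_london.meta.region_name, ec2_sydney.meta.region_name)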

History
Further information: Timeline of Amazon Web Services
AWS Summit 2013 event in NYC.

The AWS platform was launched in July 2002 to "expose technology and product data from Amazon
and its affiliates, enabling developers to build innovative and entrepreneurial applications on their
own."[2] In the beginning, the platform consisted of only a few disparate tools and services. Then in
late 2003, the AWS concept was publicly reformulated when Chris Pinkham and Benjamin Black
presented a paper describing a vision for Amazon's retail computing infrastructure that was
completely standardized, completely automated, and would rely extensively on web services for
services such as storage and would draw on internal work already underway. Near the end of their
paper, they mentioned the possibility of selling access to virtual servers as a service, proposing the
company could generate revenue from the new infrastructure investment.[15] In November 2004, the
first AWS service launched for public usage: Simple Queue Service (SQS).[16] Thereafter Pinkham
and lead developer Christopher Brown developed the Amazon EC2 service, with a team in Cape
Town, South Africa.[17]
Amazon Web Services was officially re-launched on March 14, 2006,[2] combining the three initial
service offerings of Amazon S3 cloud storage, SQS, and EC2. The AWS platform finally provided an
integrated suite of core online services, as Chris Pinkham and Benjamin Black had proposed back in
2003,[15] as a service offered to other developers, web sites, client-side applications, and
companies.[1] Andy Jassy, AWS founder and vice president in 2006, said at the time that Amazon S3
(one of the first and most scalable elements of AWS) "helps free developers from worrying about
where they are going to store data, whether it will be safe and secure, if it will be available when they
need it, the costs associated with server maintenance, or whether they have enough storage
available. Amazon S3 enables developers to focus on innovating with data, rather than figuring out
how to store it."[2] His quote marks a milestone in the Internet's history, when massive managed
resources became available to developers worldwide, allowing them to offer new scalable web-
enabled technologies. In 2016 Jassy was promoted to CEO of the division.[18] Reflecting the success
of AWS, his annual compensation in 2017 hit nearly $36 million.[19]
To support industry-wide training and skills standardization, AWS began offering a certification
program for computer engineers, on April 30, 2013, to highlight expertise in cloud computing.[20]
James Hamilton, an AWS engineer, wrote a retrospective article in 2016 to highlight the ten-year
history of the online service from 2006 to 2016. As an early fan and outspoken proponent of the
technology, he had joined the AWS engineering team in 2008.[21]

Growth and Profitability


In November 2010, it was reported that all of Amazon.com's retail sites had been completely moved
under the AWS umbrella.[22] Prior to 2012, AWS was considered a part of Amazon.com and so its
revenue was not delineated in Amazon financial statements. In that year industry watchers for the
first time estimated AWS revenue to be over $1.5 billion.[23]
In April 2015, Amazon.com reported AWS was profitable, with sales of $1.57 billion in the first
quarter of the year and $265 million of operating income. Founder Jeff Bezos described it as a fast-
growing $5 billion business; analysts described it as "surprisingly more profitable than forecast".[24] In
October 2015, Amazon.com said in its Q3 earnings report that AWS's operating income was $521
million, with operating margins at 25 percent. AWS's 2015 Q3 revenue was $2.1 billion, a 78%
increase from 2014's Q3 revenue of $1.17 billion.[25] 2015 Q4 revenue for the AWS segment
increased 69.5% y/y to $2.4 billion with 28.5% operating margin, giving AWS a $9.6 billion run rate.
In 2015, Gartner estimated that AWS customers are deploying 10x more infrastructure on AWS than
the combined adoption of the next 14 providers.[26]
In 2016 Q1, revenue was $2.57 billion with net income of $604 million, a 64% increase over 2015 Q1
that resulted in AWS being more profitable than Amazon's North American retail business for the first
time.[27] In the first quarter of 2016, Amazon experienced a 42% rise in stock value as a result of
increased earnings, of which AWS contributed 56% to corporate profits.[28][29]
With a 50% increase in revenues the past few years, AWS is predicted to have $13 billion in revenue
in 2017.[30]

Customer base
AWS adoption has increased since launch in 2002.

On March 14, 2006, Amazon said in a press release:[2] "More than 150,000 developers have
signed up to use Amazon Web Services since its inception."

In June 2007, Amazon claimed that more than 180,000 developers had signed up to use
Amazon Web Services.[31]

In November 2012, AWS hosted its first customer event in Las Vegas.[32]

On May 13, 2013, AWS was awarded an Agency Authority to Operate (ATO) from the U.S.
Department of Health and Human Services under the Federal Risk and Authorization
Management Program.[33]

In October 2013, it was revealed that AWS was awarded a $600M contract with the CIA.[34]

During August 2014, AWS received Department of Defense-Wide provisional authorization for
all U.S. Regions.[35]

During the 2015 re:Invent keynote, AWS disclosed that they have more than a million active
customers every month in 190 countries, including nearly 2,000 government agencies, 5,000
education institutions and more than 17,500 nonprofits.

On April 5, 2017, AWS and DXC Technology (formed from a merger of CSC and HPE)
announced an expanded alliance to increase access of AWS features for enterprise clients in
existing data centers.[36]
Notable customers include NASA,[37] the Obama presidential campaign of 2012,[38] Kempinski
Hotels,[39] and Netflix.[40]

Significant service outages


On April 20, 2011, AWS suffered a major outage. Parts of the Elastic Block Store (EBS) service
became "stuck" and could not fulfill read/write requests. It took at least two days for service to be
fully restored.[41]
On June 29, 2012, several websites that rely on Amazon Web Services were taken offline due
to a severe storm in Northern Virginia, where AWS' largest data center cluster is located.[42]

On October 22, 2012, a major outage occurred, affecting many sites such
as Reddit, Foursquare, Pinterest, and others. The cause was a memory leak bug in an
operational data collection agent.[43]

On December 24, 2012, AWS suffered another outage causing websites such as Netflix to be
unavailable for customers in the Northeastern United States.[44] AWS cited their Elastic Load
Balancing (ELB) service as the cause.[45]

On February 28, 2017, AWS experienced a massive outage of S3 services in its Northern
Virginia data center. A majority of websites that relied on AWS S3 either hung or stalled, and
Amazon reported within five hours that AWS was fully online again.[46] No data was reported
to have been lost. The outage was caused by a human error made during debugging that
removed more server capacity than intended, triggering a domino effect of outages.[47]

List of products

Compute
Amazon Elastic Compute Cloud (EC2) is an IaaS service providing virtual servers controllable by
an API, based on the Xen hypervisor. Equivalent remote services include Microsoft
Azure, Google Compute Engine and Rackspace; and on-premises equivalents such
as OpenStack or Eucalyptus.
Amazon Elastic Beanstalk provides a PaaS service for hosting applications; equivalent services
include Google App Engine, Heroku, or OpenShift for on-premises use.
Amazon Lambda (AWS Lambda) runs code in response to AWS-internal or external events such
as HTTP requests, transparently provisioning the resources required (a minimal handler sketch
follows this list).[48] Lambda is tightly integrated with AWS, but similar services such as Google
Cloud Functions and open solutions such as OpenWhisk are becoming competitors.
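The handler sketch referenced in the Lambda entry above: a minimal Python function of the kind Lambda invokes. The event fields used here are illustrative assumptions; real event shapes depend on the configured trigger (API Gateway, S3, and so on).

    import json

    # Minimal AWS Lambda handler sketch. Lambda calls this function with an
    # event (a trigger-specific dict) and a context object; the return shape
    # below is the one commonly used behind an HTTP front end.
    def lambda_handler(event, context):
        name = event.get("name", "world")   # illustrative field, not guaranteed
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}"}),
        }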
Networking
Amazon Route 53 provides a scalable managed Domain Name System (DNS) service.
Amazon Virtual Private Cloud (VPC) creates a logically isolated set of AWS resources which can
be connected using a VPN connection. This competes against on-premises solutions such
as OpenStack or HPE Helion Eucalyptus used in conjunction with PaaS software.
AWS Direct Connect provides dedicated network connections into AWS data centers.
Amazon Elastic Load Balancing (ELB) automatically distributes incoming traffic across
multiple Amazon EC2 instances.
AWS Elastic Network Adapter (ENA) provides up to 20 Gbit/s of network bandwidth to
an Amazon EC2 instance.[49]
Content delivery
Amazon CloudFront, a content delivery network (CDN) for distributing objects to so-called "edge
locations" near the requester.
Storage and content delivery

Amazon Simple Storage Service (S3) provides scalable object storage accessible from a web
service interface (see the upload/download sketch after this list). Applicable use cases include
backup/archiving, file (including media) storage and hosting, static website hosting, application
data hosting, and more.
Amazon Glacier provides long-term storage options (compared to S3), with high redundancy and
availability but infrequent, slow retrieval times. It is intended for archiving data.
AWS Storage Gateway, an iSCSI block storage virtual appliance with cloud-based backup.
Amazon Elastic Block Store (EBS) provides persistent block-level storage volumes for EC2.
AWS Import/Export, accelerates moving large amounts of data into and out of AWS using
portable storage devices for transport.
Amazon Elastic File System (EFS) is a file storage service for Amazon Elastic Compute Cloud
(Amazon EC2) instances.
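The upload/download sketch referenced in the S3 entry above, using the boto3 SDK. The bucket and key names are hypothetical and the bucket is assumed to exist already.

    import boto3

    s3 = boto3.client("s3")

    # Upload a local file as an object (bucket/key names are placeholders).
    with open("summary.csv", "rb") as f:
        s3.put_object(Bucket="example-backup-bucket",
                      Key="reports/2017/summary.csv",
                      Body=f)

    # Read the object back.
    obj = s3.get_object(Bucket="example-backup-bucket",
                        Key="reports/2017/summary.csv")
    data = obj["Body"].read()
    print(len(data), "bytes retrieved")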
Database
Amazon DynamoDB provides a scalable, low-latency NoSQL online database service backed
by SSDs (see the sketch after this list).
Amazon ElastiCache provides in-memory caching for web applications.[50] This is Amazon's
implementation of Memcached and Redis.[51]
Amazon Relational Database Service (RDS) provides scalable database servers
with MySQL, Oracle, SQL Server, and PostgreSQL support.[52]
Amazon Redshift provides petabyte-scale data warehousing with column-based storage and
multi-node compute.
Amazon SimpleDB allows developers to run queries on structured data. It operates in concert
with EC2 and S3.
AWS Data Pipeline provides a reliable service for data transfer between different AWS compute
and storage services (e.g., Amazon S3, Amazon RDS, Amazon DynamoDB, Amazon EMR). In
other words, it is a data-driven workload management system that provides a management API
for managing and monitoring data-driven workloads in cloud applications.[53]
Amazon Aurora provides a MySQL-compatible relational database engine created specifically
for the AWS infrastructure, which is claimed to offer faster speeds and lower costs, particularly
for larger databases.
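The sketch referenced in the DynamoDB entry above: writing and reading one item with the boto3 resource API. The table name and its key schema ("id" as partition key) are assumptions for illustration, and the table must already exist.

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("example-users")   # hypothetical, pre-created table

    # Write an item (the "id" partition key is an assumed schema).
    table.put_item(Item={"id": "u-123", "name": "Alice", "plan": "free-tier"})

    # Read it back by key.
    result = table.get_item(Key={"id": "u-123"})
    print(result.get("Item"))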
Deployment

AWS CloudFormation provides a declarative, template-based Infrastructure as Code model for
configuring AWS (a minimal sketch of creating a stack follows this list).[54]
AWS Elastic Beanstalk provides deployment and management of applications in the cloud.
AWS OpsWorks provides configuration of EC2 services using Chef.
AWS CodeDeploy provides automated code deployment to EC2 instances.
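The stack-creation sketch referenced in the CloudFormation entry above. The template body is a deliberately tiny, hypothetical example (a single S3 bucket); real templates are usually far larger and kept in version control.

    import json
    import boto3

    # Hypothetical minimal template: one S3 bucket declared as code.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DemoBucket": {"Type": "AWS::S3::Bucket"}
        },
    }

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(
        StackName="demo-stack",              # placeholder stack name
        TemplateBody=json.dumps(template),
    )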
Management

AWS Identity and Access Management (IAM) is an implicit service, providing the
authentication infrastructure used to authenticate access to the various services.
AWS Directory Service a managed service that allows connection to AWS resources with an
existing on-premises Microsoft Active Directory or to set up a new, stand-alone directory in the
AWS Cloud.
Amazon CloudWatch provides monitoring for AWS cloud resources and applications, starting
with EC2 (see the metric-publishing sketch after this list).
AWS Management Console (AWS Console) is a web-based point-and-click interface to manage
and monitor the Amazon infrastructure suite, including (but not limited
to) EC2, EBS, S3, SQS, Amazon Elastic MapReduce, and Amazon CloudFront. A mobile
application for Android supports some of the management features of the console.
Amazon CloudHSM (AWS CloudHSM) helps meet corporate, contractual, and regulatory
compliance requirements for data security by using dedicated Hardware Security Module (HSM)
appliances within the AWS cloud.
AWS Key Management Service (KMS) is a managed service to create and control encryption keys.
Amazon EC2 Container Service (ECS) is a highly scalable and fast container management service
using Docker containers.
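The metric-publishing sketch referenced in the CloudWatch entry above, pushing one custom data point via boto3; the namespace and metric name are illustrative placeholders.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish a single custom metric data point (names are placeholders).
    cloudwatch.put_metric_data(
        Namespace="ExampleApp",
        MetricData=[{
            "MetricName": "PageViews",
            "Value": 1.0,
            "Unit": "Count",
        }],
    )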
Application services
Amazon API Gateway is a service for publishing, maintaining and securing web service APIs.
Amazon CloudSearch provides basic full-text search and indexing of textual content.
Amazon DevPay, currently in limited beta version, is a billing and account management system
for applications that developers have built atop Amazon Web Services.
Amazon Elastic Transcoder (ETS) provides video transcoding of S3 hosted videos, marketed
primarily as a way to convert source files into mobile-ready versions.
Amazon Simple Email Service (SES) provides bulk and transactional email sending.
Amazon Simple Queue Service (SQS) provides a hosted message queue for web applications
(see the send/receive sketch after this list).
Amazon Simple Notification Service (SNS) provides hosted multi-protocol "push" messaging
for applications.
Amazon Simple Workflow (SWF) is a workflow service for building scalable, resilient
applications.
Amazon Cognito is a user identity and data synchronization service that securely manages and
synchronizes app data for users across their mobile devices.[55]
Amazon AppStream 2.0 is a low-latency service that streams resource-intensive applications
and games from the cloud, using NICE DCV technology.[56]
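The send/receive sketch referenced in the SQS entry above; the queue is assumed to exist, and its URL is looked up by name.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.get_queue_url(QueueName="example-orders")["QueueUrl"]  # hypothetical queue

    # Producer side: enqueue a message.
    sqs.send_message(QueueUrl=queue_url, MessageBody="order-42 created")

    # Consumer side: poll, process, then delete the message.
    messages = sqs.receive_message(QueueUrl=queue_url,
                                   MaxNumberOfMessages=1,
                                   WaitTimeSeconds=5).get("Messages", [])
    for msg in messages:
        print("processing:", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])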
Analytics
Amazon Athena is a query service launched in November 2016. It allows serverless
querying of S3 content using standard SQL (see the query sketch after this list).[57]
Amazon Elastic MapReduce (EMR) provides a PaaS service delivering the Hadoop framework for
running MapReduce jobs on the web-scale infrastructure of EC2 and Amazon S3.
Amazon Machine Learning is a service that assists developers of all skill levels in using machine
learning technology.
Amazon Kinesis is a cloud-based service for real-time data processing over large, distributed
data streams. It streams data in real time with the ability to process thousands of data streams
on a per-second basis. The service, designed for real-time apps, allows developers to pull any
amount of data, from any number of sources, scaling up or down as needed. It has some
similarities in functionality to Apache Kafka.[58]
Amazon Elasticsearch Service provides fully managed Elasticsearch and Kibana services.[59]
Amazon QuickSight is a business intelligence, analytics, and visualization tool launched in
November 2016.[60] It provides ad-hoc services by connecting to AWS or non-AWS data sources.
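The query sketch referenced in the Athena entry above: submitting a SQL statement over data already stored in S3 and polling for completion. The database, table, and output-location names are hypothetical.

    import time
    import boto3

    athena = boto3.client("athena")

    # Start a serverless SQL query over data already stored in S3.
    start = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",  # hypothetical table
        QueryExecutionContext={"Database": "example_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = start["QueryExecutionId"]

    # Poll until the query finishes, then fetch the result rows.
    while True:
        state = athena.get_query_execution(
            QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
        print(len(rows), "rows returned (including header)")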
Miscellaneous
Amazon Marketplace Web Service (MWS) allows sellers to manage the complete shipment
process, from creating listings to downloading shipping labels, using an API.
Amazon Fulfillment Web Service provides a programmatic web service for sellers to ship items
to and from Amazon using Fulfillment by Amazon. This service is no longer supported by
Amazon; its functionality has been transferred to Amazon Marketplace Web Service.
Amazon Historical Pricing provides access to Amazon's historical sales data from its affiliates. (It
appears that this service has been discontinued.)
Amazon Mechanical Turk (Mturk) manages small units of work distributed among many persons.
Amazon Product Advertising API formerly known as Amazon Associates Web Service (A2S) and
Amazon E-Commerce Service (ECS), provides access to Amazon's product data and electronic
commerce functionality.
Amazon Gift Code On Demand (AGCOD) for Corporate Customers[61] enables companies to
distribute Amazon gift cards (gift codes) instantly in any denomination, integrating Amazon's gift-
card technology into customer loyalty, employee incentive and payment disbursement platforms.
AWS Partner Network (APN) provides technology partners and consulting partners with the
technical information and sales and marketing support to increase business opportunities
through AWS and with businesses using AWS. Launched in April 2012, the APN is made up of
Technology Partners including Independent Software Vendors (ISVs), tool providers, platform
providers, and others.[62][63][64] Consulting Partners include System Integrators (SIs), agencies,
consultancies, Managed Service Providers (MSPs), and others. Potential Technology and
Consulting Partners must meet technical and non-technical training requirements set by AWS.[65]
Amazon Lumberyard is a freeware triple-A game engine that is integrated with AWS.[66]
Amazon Chime is an enterprise collaboration service that organizations can use for voice, video
conferencing, and instant messaging.[67]

Pop-up lofts
In June 2014 AWS opened their first temporary Pop-up Loft, in San Francisco, to help businesses
discover their services.[68] In May 2015 they expanded to New York City,[69][70] and in September 2015
expanded to Berlin.[71] AWS opened their fourth location, in Tel Aviv, from March 1 to March 22,
2016.[72] A Pop-up Loft was open in London from September 10 to October 29, 2015.[73]
Charitable work
In 2017 AWS launched a program in the United Kingdom to help young adults and military veterans
retrain in technology-related skills. In partnership with the Prince's Trust and the Ministry of Defence
(MoD), AWS will help to provide re-training opportunities for young people from disadvantaged
backgrounds and former soldiers who have left the military. AWS is working alongside a number of
partner companies including Cloudreach, Sage, EDF Energy and Tesco Bank.[74]
Microsoft Azure (/ˈæʒər/) is a cloud computing service created by Microsoft for building, deploying,
and managing applications and services through a global network of Microsoft-managed data
centers. It provides software as a service (SaaS), platform as a service (PaaS) and infrastructure as
a service (IaaS), and supports many different programming languages, tools and frameworks,
including both Microsoft-specific and third-party software and systems.
Azure was announced in October 2008 and released on February 1, 2010 as Windows Azure,
before being renamed to Microsoft Azure on March 25, 2014.[1][2]

Contents

1 Services
  1.1 Compute
  1.2 Mobile services
  1.3 Storage services
  1.4 Data management
  1.5 Messaging
  1.6 Media services
  1.7 CDN
  1.8 Developer
  1.9 Management
  1.10 Machine Learning
2 Regions
3 Design
  3.1 Deployment models
4 Timeline
5 Privacy
6 Significant outages
7 Certifications
8 See also
9 References
10 Further reading
11 External links

Services
Microsoft lists over 600 Azure services,[3] of which some are covered below:

Compute
Virtual machines, infrastructure as a service (IaaS) allowing users to launch general-
purpose Microsoft Windows and Linux virtual machines, as well as preconfigured machine
images for popular software packages.[4]
App services, platform as a service (PaaS) environment letting developers easily publish and
manage Web sites.
Websites, high density hosting of websites allows developers to build sites
using ASP.NET, PHP, Node.js, or Python and can be deployed using FTP, Git, Mercurial, Team
Foundation Server or uploaded through the user portal. This feature was announced in preview
form in June 2012 at the Meet Microsoft Azure event.[5] Customers can create websites in PHP,
ASP.NET, Node.js, or Python, or select from several open source applications from a gallery to
deploy. This comprises one aspect of the platform as a service (PaaS) offerings for the Microsoft
Azure Platform. It was renamed to Web Apps in April 2015.[1][6]
WebJobs, applications that can be deployed to a Web App to implement background
processing; they can be invoked on a schedule, invoked on demand, or run continuously. The Blob,
Table and Queue services can be used to communicate between Web Apps and WebJobs and
to provide state.[citation needed]
Mobile services
Mobile Engagement collects real-time analytics that highlight users' behavior. It also provides
push notifications to mobile devices.[7]
HockeyApp can be used to develop, distribute, and beta-test mobile apps.[8]
Storage services
Storage Services provides REST and SDK APIs for storing and accessing data on the cloud.
Table Service lets programs store structured text in partitioned collections of entities that are
accessed by partition key and primary key. It's a NoSQL non-relational database.
Blob Service allows programs to store unstructured text and binary data as blobs that can be
accessed by an HTTP(S) path (see the upload sketch after this list). Blob service also provides
security mechanisms to control access to data.
Queue Service lets programs communicate asynchronously by message using queues.
File Service allows storing and access of data on the cloud using the REST APIs or the SMB
protocol.[9]
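The upload sketch referenced in the Blob Service entry above, using the azure-storage-blob package for Python (v12-style client). The connection string, container, and blob names are assumptions, and the container is assumed to exist.

    from azure.storage.blob import BlobServiceClient

    # The connection string comes from the storage account's access keys (placeholder here).
    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    blob = service.get_blob_client(container="images", blob="logo.png")

    # Upload local bytes as a blob, then read them back over HTTPS.
    with open("logo.png", "rb") as f:
        blob.upload_blob(f, overwrite=True)
    data = blob.download_blob().readall()
    print(len(data), "bytes downloaded")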
Data management
Azure Search provides text search and a subset of OData's structured filters using REST or
SDK APIs.
DocumentDB is a NoSQL database service that implements a subset of the SQL SELECT
statement on JSON documents.
Redis Cache is a managed implementation of Redis.
StorSimple manages storage tasks between on-premises devices and cloud storage.[10]
SQL Database, formerly known as SQL Azure Database, works to create, scale and extend
applications into the cloud using Microsoft SQL Server technology. It also integrates with Active
Directory and Microsoft System Center and Hadoop.[11]
SQL Data Warehouse is a data warehousing service designed to handle computational and
data-intensive queries on datasets exceeding 1 TB.
Messaging
The Microsoft Azure Service Bus allows applications running on Azure or on off-premises
devices to communicate with Azure. This helps to build scalable and reliable applications in
a service-oriented architecture (SOA). The Azure Service Bus supports four different types of
communication mechanisms:[citation needed]
Event Hubs, which provide event and telemetry ingress to the cloud at massive scale, with low
latency and high reliability. For example, an event hub can be used to track data from cell
phones, such as GPS location coordinates, in real time.[citation needed]
Queues, which allow one-directional communication. A sender application sends a message to
the Service Bus queue, and a receiver reads from the queue (see the queue sketch after this
list). Though there can be multiple readers for the queue, only one processes a single message.
Topics, which provide one-directional communication using a subscriber pattern. A topic is similar
to a queue; however, each subscriber receives a copy of the message sent to the topic. Optionally,
subscribers can filter out messages based on specific criteria that they define.
Relays, which provide bi-directional communication. Unlike queues and topics, a relay doesn't
store in-flight messages in its own memory. Instead, it just passes them on to the destination
application.
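The queue sketch referenced in the Queues item above, using the azure-servicebus package for Python (v7-style API). It assumes an existing namespace and queue; the connection string and queue name are placeholders.

    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    CONN_STR = "<service-bus-connection-string>"   # placeholder
    QUEUE = "orders"                               # hypothetical queue name

    client = ServiceBusClient.from_connection_string(CONN_STR)

    # Sender application: one-directional send into the queue.
    with client.get_queue_sender(queue_name=QUEUE) as sender:
        sender.send_messages(ServiceBusMessage("order-42 created"))

    # Receiver application: read and settle the message.
    with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            print("processing:", str(msg))
            receiver.complete_message(msg)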
Media services
A PaaS offering that can be used for encoding, content protection, streaming, or analytics.[citation needed]

CDN
A global content delivery network (CDN) for audio, video, applications, images, and other static files.
Can be used to cache static assets of websites geographically closer to users to increase
performance. The network can be managed by a REST based HTTP API.[citation needed]
Azure has 30 points of presence worldwide (also known as Edge locations) as of December
2016.[12]

Developer
Application Insights[citation needed]
Visual Studio Team Services[citation needed]
Management
Azure Automation, provides a way for users to automate the manual, long-running, error-prone,
and frequently repeated tasks that are commonly performed in a cloud and enterprise
environment. It saves time and increases the reliability of regular administrative tasks and even
schedules them to be automatically performed at regular intervals. You can automate processes
using runbooks or automate configuration management using Desired State Configuration.[1]
Microsoft Service Management Automation (SMA)
Machine Learning
Microsoft Azure Machine Learning (Azure ML) is part of the Cortana Intelligence Suite and
enables predictive analytics and interaction with data using natural language and speech
through Cortana.[13]

Regions
Azure is generally available in 38 regions around the world.[14]

Design
Microsoft Azure uses a specialized operating system, also called Microsoft Azure, to run its "fabric
layer":[citation needed] a cluster hosted at Microsoft's data centers that manages computing and storage
resources of the computers and provisions the resources (or a subset of them) to applications
running on top of Microsoft Azure. Microsoft Azure has been described as a "cloud layer" on top of a
number of Windows Server systems, which use Windows Server 2008 and a customized version
of Hyper-V, known as the Microsoft Azure Hypervisor, to provide virtualization of services.[citation needed]
Scaling and reliability are controlled by the Microsoft Azure Fabric Controller,[citation needed] which
prevents the services and environment from failing if one of the servers within the Microsoft data
center fails, and which also manages the user's web application, handling matters such as memory
resources and load balancing.[citation needed]
Azure provides an API built on REST, HTTP, and XML that allows a developer to interact with the
services provided by Microsoft Azure. Microsoft also provides a client-side managed class library
that encapsulates the functions of interacting with the services. It also integrates with Microsoft
Visual Studio, Git, and Eclipse.[citation needed]
In addition to interacting with services via the API, users can manage Azure services using the web-
based Azure Portal, which reached general availability in December 2015.[15] The portal allows users
to browse active resources, modify settings, launch new resources, and view basic monitoring data
from active virtual machines and services. More advanced Azure management services are also
available.[16]

Deployment models
Microsoft Azure offers two deployment models for cloud resources: the "classic" deployment model
and the Azure Resource Manager.[17] In the classic model, each Azure resource (virtual machine,
SQL database, etc.) was managed individually. The Azure Resource Manager, introduced in
2014,[17] enables users to create groups of related services so that closely coupled resources can be
deployed, managed, and monitored together.[18]

Timeline

Ray Ozzie announcing Windows Azure at PDC 2008, October 27

October 2008 (PDC LA): Announced the Windows Azure Platform
March 2009: Announced SQL Azure Relational Database
November 2009: Updated Windows Azure CTP, enabled full trust, PHP, Java, CDN CTP and more
February 2010: Windows Azure Platform commercially available
June 2010: Windows Azure update, .NET Framework 4, OS versioning, CDN, SQL Azure update[19]
October 2010 (PDC): Platform enhancements, Windows Azure Connect, improved Dev / IT Pro experience
December 2011: Traffic Manager, SQL Azure Reporting, HPC Scheduler
June 2012: Websites, virtual machines for Windows and Linux, Python SDK, new portal, locally redundant storage
April 2014: Windows Azure renamed to Microsoft Azure[1]
July 2014: Azure Machine Learning public preview[20]
November 2014: Outage affecting major websites including MSN.com[21]
September 2015: Azure Cloud Switch introduced as a cross-platform Linux distribution[22]

Privacy
Microsoft has stated that, per the USA Patriot Act, the US government could have access to the data
even if the hosted company is not American and the data resides outside the USA.[23] However,
Microsoft Azure is compliant with the E.U. Data Protection Directive (95/46/EC).[24][25] To
manage privacy and security-related concerns, Microsoft has created the Microsoft Azure Trust
Center,[26] and several Microsoft Azure services are compliant with compliance
programs including ISO 27001:2005 and HIPAA. A full and current listing can be found on the
Microsoft Azure Trust Center Compliance page.[27] Of special note, Microsoft Azure has been granted
JAB Provisional Authority to Operate (P-ATO) from the U.S. government in accordance with
guidelines spelled out under the Federal Risk and Authorization Management Program (FedRAMP),
a U.S. government program that provides a standardized approach to security assessment,
authorization, and continuous monitoring for cloud services used by the federal government.[28]

Significant outages
Documented Microsoft Azure outages and service disruptions.

Date        Cause                                                                      Notes
2012-02-29  Incorrect code for calculating leap day dates[29]
2012-07-26  Misconfigured network device[30][31]
2013-02-22  Expiry of an SSL certificate[32]                                           Xbox Live, Xbox Music and Video also affected[33]
2013-10-30  Worldwide partial compute outage[34]
2014-11-18  Azure storage upgrade caused reduced capacity across several regions[35]   Xbox Live, Windows Store, MSN, Search, Visual Studio Online among others were affected[36]

As of December 4, 2015, Azure has been available for 99.9936% of the past year.[37]
Certifications
Microsoft Azure certifications
Business analytics (BA) refers to the skills, technologies, and practices for continuous iterative
exploration and investigation of past business performance to gain insight and drive business
planning.[1] Business analytics focuses on developing new insights and understanding of business
performance based on data and statistical methods. In contrast, business intelligence traditionally
focuses on using a consistent set of metrics to both measure past performance and guide business
planning, which is also based on data and statistical methods.[citation needed]
Business analytics makes extensive use of statistical analysis, including explanatory and predictive
modeling,[2] and fact-based management to drive decision making. It is therefore closely related
to management science. Analytics may be used as input for human decisions or may drive fully
automated decisions. Business intelligence is querying, reporting, online analytical
processing (OLAP), and "alerts".
In other words, querying, reporting, OLAP, and alert tools can answer questions such as what
happened, how many, how often, where the problem is, and what actions are needed. Business
analytics can answer questions like why is this happening, what if these trends continue, what will
happen next (that is, predict), what is the best that can happen (that is, optimize).[3]

Contents

1 Examples of application
2 Types of analytics
3 Basic domains within analytics
4 History
5 Challenges
6 Competing on analytics
7 See also
8 References
9 Further reading

Examples of application
Banks, such as Capital One, use data analysis (or analytics, as it is also called in the business
setting), to differentiate among customers based on credit risk, usage and other characteristics and
then to match customer characteristics with appropriate product offerings. Harrah's, the gaming firm,
uses analytics in its customer loyalty programs. E & J Gallo Winery quantitatively analyses and
predicts the appeal of its wines. Between 2002 and 2005, Deere & Company saved more than $1
billion by employing a new analytical tool to better optimise inventory.[3] A telecoms company that
pursues efficient call center usage over customer service may save money.

Types of analytics
Decision Analytics: supports human decisions with visual analytics that the user models to
reflect reasoning.[4]
Descriptive Analytics: gains insight from historical data with reporting, scorecards, clustering etc.
Predictive Analytics: employs predictive modelling using statistical and machine
learning techniques (see the sketch after this list)
Prescriptive Analytics: recommends decisions using optimisation, simulation, etc.
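As a toy illustration of the difference between descriptive and predictive analytics (see the Predictive Analytics item above), the sketch below summarizes a historical sales series and then fits a simple model to project the next period. It assumes pandas and scikit-learn are available and uses made-up numbers.

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Made-up monthly sales history.
    sales = pd.DataFrame({"month": [1, 2, 3, 4, 5, 6],
                          "revenue": [100, 110, 125, 135, 150, 160]})

    # Descriptive analytics: what happened?
    print(sales["revenue"].describe())

    # Predictive analytics: what is likely to happen next?
    model = LinearRegression().fit(sales[["month"]], sales["revenue"])
    print("forecast for month 7:", model.predict([[7]])[0])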

Basic domains within analytics


Behavioral analytics
Cohort Analysis
Collections analytics
Contextual data modeling - supports the human reasoning that occurs after viewing "executive
dashboards" or any other visual analytics
Cyber analytics
Enterprise Optimization
Financial services analytics
Fraud analytics
Health care analytics
Marketing analytics
Pricing analytics
Retail sales analytics
Risk & Credit analytics
Supply Chain analytics
Talent analytics
Telecommunications
Transportation analytics

History
Analytics have been used in business since the management exercises were put into place
by Frederick Winslow Taylor in the late 19th century. Henry Ford measured the time of each
component in his newly established assembly line. But analytics began to command more attention
in the late 1960s when computers were used in decision support systems. Since then, analytics
have changed and formed with the development of enterprise resource planning (ERP)
systems, data warehouses, and a large number of other software tools and processes.[3]
In later years, business analytics exploded with the widespread introduction of computers. This
change has brought analytics to a whole new level and opened up endless possibilities. Given how
far analytics has come, and what the field of analytics is today, many people would never think that
analytics started in the early 1900s with Ford himself.

Challenges
Business analytics depends on sufficient volumes of high quality data. The difficulty in ensuring data
quality is integrating and reconciling data across different systems, and then deciding what subsets
of data to make available.[3]
Previously, analytics was considered a type of after-the-fact method of forecasting consumer
behavior by examining the number of units sold in the last quarter or the last year. This type of data
warehousing required a lot more storage space than it did speed. Now business analytics is
becoming a tool that can influence the outcome of customer interactions.[5] When a specific customer
type is considering a purchase, an analytics-enabled enterprise can modify the sales pitch to appeal
to that consumer. This means the storage space for all that data must react extremely fast to provide
the necessary data in real-time.

Competing on analytics
Thomas Davenport, professor of information technology and management at Babson College, argues
that businesses can optimize a distinct business capability via analytics and thus better compete. He
identifies these characteristics of an organization that is apt to compete on analytics:[3]

One or more senior executives who strongly advocate fact-based decision making and,
specifically, analytics
Widespread use of not only descriptive statistics, but also predictive modeling and
complex optimization techniques
Substantial use of analytics across multiple business functions or processes
Movement toward an enterprise level approach to managing analytical tools, data, and
organizational skills and capabilities

See also
Analytics
Business analysis
Business analyst
Business intelligence
Business process discovery
Customer dynamics
Data mining
OLAP
Statistics
Test and learn

DevOps


DevOps (a clipped compound of "software DEVelopment" and "information technology OPerationS")
is a term used to refer to a set of practices that emphasize the collaboration and communication of
both software developers and information technology (IT) professionals while automating the
process of software delivery and infrastructure changes.[1][2] It aims at establishing a culture and
environment where building, testing, and releasing software can happen rapidly, frequently, and
more reliably.[3][4][5]

Contents

1 History
2 Overview
3 DevOps toolchain
4 Relationship to agile and continuous delivery
  4.1 Agile
  4.2 Continuous delivery
5 Goals
  5.1 Benefits of DevOps
6 Cultural change
  6.1 Building a DevOps culture
7 Deployment
8 DevOps and architecture
9 Scope of adoption
  9.1 Incremental adoption
    9.1.1 The first way: systems thinking
    9.1.2 The second way: amplify feedback loops
    9.1.3 The third way: culture of continual experimentation and learning
10 References
11 Further reading

History
At the Agile 2008 conference, Andrew Clay Shafer and Patrick Debois discussed "Agile
Infrastructure".[6] The term DevOps was popularized through a series of "devopsdays" starting in
2009 in Belgium.[7] Since then, there have been devopsdays conferences, held in many countries,
worldwide.[8]
The popularity of DevOps has grown in recent years, inspiring many other tangential movements
including OpsDev and WinOps.[9] WinOps embodies the same set of practices and emphasis on
culture as DevOps, but is specific to a Microsoft-centric view.[10]

Overview

Venn diagram showing DevOps as the intersection of development (software engineering),
operations and quality assurance (QA)

In traditional, functionally separated organizations, there is rarely a cross-departmental integration of
these functions with IT operations. DevOps promotes a set of processes and methods for
thinking about communication and collaboration between the departments of development, QA (quality
assurance), and IT operations.[11] In some organizations, this collaboration involves embedding IT
operations specialists within software development teams, thus forming a cross-functional team;
this may also be combined with matrix management.
DevOps toolchain

Illustration showing stages in a DevOps toolchain

See also: DevOps toolchain

Because DevOps is a cultural shift and collaboration (between development, operations and testing),
there is no single "DevOps tool": it is rather a set (or "DevOps toolchain"), consisting of multiple
tools.[12] Generally, DevOps tools fit into one or more of these categories, which is reflective of key
aspects of the software development and delivery process:[13][14]

1. Code: code development and review, version control tools, code merging;
2. Build: continuous integration tools, build status;
3. Test: continuous testing tools that provide feedback on business risks;
4. Package: artifact repository, application pre-deployment staging;
5. Release: change management, release approvals, release automation;
6. Configure: infrastructure configuration and management, Infrastructure as Code tools;
7. Monitor: application performance monitoring, end-user experience.
Though there are many tools available, certain categories of them are essential in the DevOps
toolchain setup for use in an organization. Some attempts to identify those basic tools can be found
in the existing literature.[15]
Tools such as Docker (containerization), Jenkins (continuous integration), Puppet (Infrastructure as
Code) and Vagrant (virtualization platform), among many others, are often used and frequently
referenced in DevOps tooling discussions.[16]

Relationship to agile and continuous delivery


Agile
Agile and DevOps are similar, but, while agile software development represents a change in thinking
and practice (that should lead to organizational change), DevOps places more emphasis on
implementing organizational change to achieve its goals.[17]
The need for DevOps was born from the increasing popularity of agile software development, as that
tends to lead to an increased number of releases.
One goal of DevOps is to establish an environment where releasing more reliable applications,
faster and more frequently, can occur. Release managers are beginning to utilize tools (such
as application release automation and continuous integration tools) to help advance this goal,
doing so through the continuous delivery approach.[18]
Continuous delivery
Continuous delivery and DevOps are similar in their meanings (and are often conflated), but they are
two different concepts:[19]
DevOps has a broader scope, and centers around:
Organizational change: specifically, to support greater collaboration between the various
types of worker involved in software delivery:
Developers;
Operations;
Quality Assurance;
Management;
System Administration;
Database Administration;
Coordinators
Automating the processes in software delivery.[20]
Continuous delivery, on the other hand, is an approach to automate the delivery aspect, and
focuses on:
Bringing together different processes;
Executing them more quickly and more frequently.
They have common end goals and are often used in conjunction to achieve them.[20] DevOps
and continuous delivery share a background in agile methods and lean thinking: small, quick
changes with focused value to the end customer. Good communication and collaboration within
the organization helps achieve quick time to market with reduced risks.

Goals
The specific goals of DevOps span the entire delivery pipeline. They include improved deployment
frequency, which can lead to:

Faster time to market;


Lower failure rate of new releases;
Shortened lead time between fixes;
Faster mean time to recovery (in the event of a new release crashing or otherwise disabling the
current system).
Simple processes become increasingly programmable and dynamic, using a DevOps
approach.[21] DevOps aims to maximize the predictability, efficiency, security, and maintainability of
operational processes. Very often, automation supports this objective.
DevOps integration targets product delivery, continuous testing, quality testing, feature development,
and maintenance releases in order to improve reliability and security and provide
faster development and deployment cycles. Many of the ideas (and people) involved in DevOps
came from the enterprise systems management and agile software development movements.[22]
DevOps aids in software application release management for an organization, by standardizing
development environments. Events can be more easily tracked, as well as resolving documented
process control and granular reporting issues. The DevOps approach grants developers more
control of the environment, giving infrastructure more application-centric understanding.[clarification needed]
Benefits of DevOps
Companies that practice DevOps have reported significant benefits, including: significantly
shorter time to market, improved customer satisfaction, better product quality, more reliable
releases, improved productivity and efficiency, and the increased ability to build the right product by
fast experimentation.[23]
However, a study released in January 2017 by F5 of almost 2,200 IT executives and industry
professionals found that only one in five of those surveyed think DevOps has had a strategic impact
on their organization, despite a rise in usage. The same study found that only 17% identified DevOps
as key, well below software as a service (42%), big data (41%) and public cloud infrastructure as a
service (39%).[24]

Cultural change
DevOps is more than just a tool or a process change; it inherently requires an organizational culture
shift.[25] This cultural change is especially difficult, because of the conflicting nature of departmental
roles:

Operations seeks organizational stability;


Developers seek change;
Testers seek risk reduction.[26]
Getting these groups to work cohesively is a critical challenge in enterprise DevOps adoption.[27]
Building a DevOps culture

DevOps T-shirt worn at a computer conference.

DevOps principles demand strong interdepartmental communication. Team-building and
other employee engagement activities are often used to create an environment that fosters this
communication and cultural change within an organization.[28] Team-building activities can
include board games, trust activities, and employee engagement seminars.[29]

Deployment
Companies with very frequent releases may require a DevOps awareness or orientation program.
For example, the company that operates the image hosting website Flickr developed a DevOps
approach, to support a business requirement of ten deployments per day;[30] this daily deployment
cycle would be much higher at organizations producing multi-focus or multi-function applications.
This is referred to as continuous deployment[31] or continuous delivery[32] and has been associated
with the lean startup methodology.[33] Working groups, professional associations and blogs have
formed on the topic since 2009.[5][34][35]

DevOps and architecture


To practice DevOps effectively, software applications have to meet a set of architecturally significant
requirements (ASRs; such as: deployability, modifiability, testability, and monitorability).[36] These
ASRs require a high priority and cannot be traded off lightly.
Although in principle it is possible to practice DevOps with any architectural style,
the microservices architectural style is becoming the standard for building continuously deployed
systems. Because the size of each service is small, it allows the architecture of an individual service
to emerge through continuous refactoring,[37] hence reducing the need for a big upfront
design[citation needed] and allowing the software to be released early[citation needed] and continuously.

Scope of adoption
Some articles in the DevOps literature assume, or recommend, significant participation in DevOps
initiatives from outside an organization's IT department, e.g.: "DevOps is just the agile principle,
taken to the full enterprise."[38]
According to a survey published in January 2016 by the SaaS cloud-computing company RightScale,
DevOps adoption increased from 66 percent in 2015 to 74 percent in 2016, and among larger
enterprise organizations, DevOps adoption is even higher, at 81 percent.[39]
Adoption of DevOps is being driven by many factors including:

1. Use of agile and other development processes and methods;
2. Demand for an increased rate of production releases from application and business
unit stakeholders;
3. Wide availability of virtualized[40] and cloud infrastructure from internal and external
providers;
4. Increased usage of data center automation[41] and configuration management tools;
5. Increased focus on test automation[42] and continuous integration methods;
6. A critical mass of publicly available best practices.
Incremental adoption
The theory of constraints applies to the adoption of DevOps: the limiting constraint is often the
ingrained aversion to change from departments within the enterprise.[43] Incremental adoption
embodies the methodology that the Theory of Constraints provides for combating any constraint,
known as the five focusing steps.[44]
The incremental approach centers around the idea of minimizing the risk and cost of a DevOps
adoption whilst building the necessary in-house skillset and momentum needed for widespread,
successful implementation across the enterprise.[45] Gene Kim's "Three Ways Principles" essentially
establish different ways of incremental DevOps adoption:[46]
The first way: systems thinking
The emphasis is on the performance of the entire system, as opposed to the performance of a
specific or single department or individual. Focus is also on all business value streams that are
enabled by IT. A key aspect of this approach is ensuring that known defects are never passed along,
e.g., from QA to operations.[47]
The second way: amplify feedback loops
The emphasis is on increasing feedback and understanding across all teams involved. The outcomes
of this will be: increased communication with and responsiveness to all customers (internal and
external), shortening and amplifying all feedback loops, and embedding knowledge where and with
whom it's needed.[47]
The third way: culture of continual experimentation and learning
Two things are equally important: experimentation and practice. Embedding this in the working
culture, where learning from taking risks, repetition, and practice are encouraged, is key to
mastery. Risk taking and experimentation, promoting improvement and mastery, provide the
skills required to revert any mistakes.[47]
Jira (software)


Jira

Jira logo

Developer(s) Atlassian, Inc.

Initial release 2002; 15 years ago[1]

Stable release 7.3.6 / 27 April 2017; 38 days ago[2]

Written in Java

Operating system Cross-platform

Type Bug tracking system, project management software

License Proprietary; free for use by official non-profit organizations, charities, and open-source projects, but not governmental, academic or religious organizations [3][4]

Website atlassian.com/software/jira

Jira (/ˈdʒiːrə/ JEE-rah)[5] (stylized JIRA) is a proprietary issue tracking product, developed
by Atlassian. It provides bug tracking, issue tracking, and project management functions. Although
normally styled JIRA, the product name is not an acronym, but a truncation of Gojira, the Japanese
name for Godzilla,[6] itself a reference to Jira's main competitor, Bugzilla. It has been developed since
2002.[1] According to one ranking method, as of June 2017, Jira is the most popular issue
management tool.[7]

Contents

1Description
2License
3Security
4See also
5References
6External links

Description[edit]
According to Atlassian, Jira is used for issue tracking and project management by over 25,000
customers in 122 countries around the globe.[8] Some of the organizations that have used Jira at
some point in time for bug-tracking and project management include Fedora
Commons,[9] Hibernate,[10] JBoss,[11] Skype Technologies,[12] Spring Framework,[13] and The Apache
Software Foundation, which uses both Jira and Bugzilla.[14] Jira includes tools allowing migration from
competitor Bugzilla.[15]
Jira is offered in three packages:[citation needed]

Jira Core includes the base software.


Jira Software is intended for use by software development teams and includes Jira Core and
Jira Agile.
Jira Service Desk is intended for use by IT or business service desks.
Jira is written in Java and uses the Pico inversion of control container, Apache OFBiz entity engine,
and WebWork 1 technology stack. For remote procedure calls (RPC), Jira supports REST, SOAP,
and XML-RPC.[16] Jira integrates with source control programs such as Clearcase, Concurrent
Versions System (CVS), Git, Mercurial, Perforce,[17] Subversion,[18] and Team Foundation Server. It
ships with various translations including English, French, German, Japanese, and Spanish.[19]
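As a minimal illustration of the REST interface mentioned above, the following Scala sketch fetches a single issue over HTTP. The host jira.example.com, the issue key DEMO-1, and the user:password credentials are hypothetical placeholders, and real deployments may require an API token or different authentication; this shows the general request shape, not Atlassian's official client library.

import java.net.{HttpURLConnection, URL}
import java.util.Base64
import scala.io.Source

object JiraRestSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical Jira instance and issue key; replace with real values.
    val endpoint = new URL("https://jira.example.com/rest/api/2/issue/DEMO-1")
    val credentials = Base64.getEncoder.encodeToString("user:password".getBytes("UTF-8"))

    val conn = endpoint.openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("GET")
    conn.setRequestProperty("Authorization", s"Basic $credentials")
    conn.setRequestProperty("Accept", "application/json")

    // Read the raw JSON describing the issue (fields, status, assignee, and so on).
    val body = Source.fromInputStream(conn.getInputStream).mkString
    println(body)
    conn.disconnect()
  }
}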
The main features of Jira for agile software development are the functionality to plan development
iterations, the iteration reports and the bug tracking functionality.
Jira supports the Networked Help Desk API for sharing customer support tickets with other issue
tracking systems.[20]

License[edit]
Jira is a commercial software product that can be licensed for running on-premises or available as a
hosted application. Pricing depends on the maximum number of users.[21]
Atlassian provides Jira for free to open source projects meeting certain criteria, and to organizations
that are non-academic, non-commercial, non-governmental, non-political, non-profit, and secular.
For academic and commercial customers, the full source code is available under a developer source
license.[21]

Security[edit]
In April 2010 a cross-site scripting vulnerability in Jira led to the compromise of two Apache Software
Foundation servers. The Jira password database was compromised. The database
contained unsalted password hashes, which are vulnerable to dictionary lookups and cracking tools.
Apache advised users to change their passwords.[22] Atlassian themselves were also targeted as part
of the same attack and admitted that a legacy database with passwords stored in plain text had been
compromised.[23]

Machine learning
From Wikipedia, the free encyclopedia

For the journal, see Machine Learning (journal).


Machine learning is the subfield of computer science that, according to Arthur Samuel in 1959,
gives "computers the ability to learn without being explicitly programmed."[1] Evolved from the study
of pattern recognition and computational learning theory in artificial intelligence,[2] machine learning
explores the study and construction of algorithms that can learn from and make predictions
on data.[3] Such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions,[4]:2 through building a model from sample inputs. Machine learning is
employed in a range of computing tasks where designing and programming explicit algorithms with
good performance is difficult or infeasible; example applications include email filtering, detection of
network intruders or malicious insiders working towards a data breach,[5] optical character
recognition (OCR),[6] learning to rank and computer vision.
Machine learning is closely related to (and often overlaps with) computational statistics, which also
focuses on prediction-making through the use of computers. It has strong ties to mathematical
optimization, which delivers methods, theory and application domains to the field. Machine learning
is sometimes conflated with data mining,[7] where the latter subfield focuses more on exploratory data
analysis and is known as unsupervised learning.[4]:vii[8] Machine learning can also be
unsupervised[9] and be used to learn and establish baseline behavioral profiles for various
entities[10] and then used to find meaningful anomalies.
Within the field of data analytics, machine learning is a method used to devise complex models and
algorithms that lend themselves to prediction; in commercial use, this is known as predictive
analytics. These analytical models allow researchers, data scientists, engineers, and analysts to
"produce reliable, repeatable decisions and results" and uncover "hidden insights" through learning
from historical relationships and trends in the data.[11]
As of 2016, machine learning is a buzzword and, according to the Gartner hype cycle of 2016, is at its peak of inflated expectations.[12] Because finding patterns is hard, often not enough training data is available, and because of the high expectations, it often fails to deliver.[13][14]

Contents
1Overview
o 1.1Types of problems and tasks
2History and relationships to other fields
o 2.1Relation to statistics
3Theory
4Approaches
o 4.1Decision tree learning
o 4.2Association rule learning
o 4.3Artificial neural networks
o 4.4Deep learning
o 4.5Inductive logic programming
o 4.6Support vector machines
o 4.7Clustering
o 4.8Bayesian networks
o 4.9Reinforcement learning
o 4.10Representation learning
o 4.11Similarity and metric learning
o 4.12Sparse dictionary learning
o 4.13Genetic algorithms
o 4.14Rule-based machine learning
o 4.15Learning classifier systems
5Applications
6Model assessments
7Ethics
8Software
o 8.1Free and open-source software
o 8.2Proprietary software with free and open-source editions
o 8.3Proprietary software
9Journals
10Conferences
11See also
12References
13Further reading
14External links

Overview[edit]
Tom M. Mitchell provided a widely quoted, more formal definition: "A computer program is said to
learn from experience E with respect to some class of tasks T and performance measure P if its
performance at tasks in T, as measured by P, improves with experience E."[15] This definition is
notable for defining machine learning in fundamentally operational rather than cognitive terms,
thus following Alan Turing's proposal in his paper "Computing Machinery and Intelligence", that the
question "Can machines think?" be replaced with the question "Can machines do what we (as
thinking entities) can do?".[16] In the proposal he explores the various characteristics that could be
possessed by a thinking machine and the various implications in constructing one.
Types of problems and tasks[edit]
Machine learning tasks are typically classified into three broad categories, depending on the nature
of the learning "signal" or "feedback" available to a learning system. These are[17]
Supervised learning: The computer is presented with example inputs and their desired outputs,
given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find
structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns
in data) or a means towards an end (feature learning).
Reinforcement learning: A computer program interacts with a dynamic environment in which it
must perform a certain goal (such as driving a vehicle or playing a game against an
opponent[4]:3). The program is provided feedback in terms of rewards and punishments as it
navigates its problem space.
Between supervised and unsupervised learning is semi-supervised learning, where the teacher
gives an incomplete training signal: a training set with some (often many) of the target outputs
missing. Transduction is a special case of this principle where the entire set of problem instances is
known at learning time, except that part of the targets are missing.

A support vector machine is a classifier that divides its input space into two regions, separated by a linear
boundary. Here, it has learned to distinguish black and white circles.

Among other categories of machine learning problems, learning to learn learns its own inductive
bias based on previous experience. Developmental learning, elaborated for robot learning,
generates its own sequences (also called curriculum) of learning situations to cumulatively acquire
repertoires of novel skills through autonomous self-exploration and social interaction with human
teachers and using guidance mechanisms such as active learning, maturation, motor synergies, and
imitation.
Another categorization of machine learning tasks arises when one considers the desired output of a
machine-learned system:[4]:3

In classification, inputs are divided into two or more classes, and the learner must produce a
model that assigns unseen inputs to one or more (multi-label classification) of these classes.
This is typically tackled in a supervised way. Spam filtering is an example of classification, where
the inputs are email (or other) messages and the classes are "spam" and "not spam".
In regression, also a supervised problem, the outputs are continuous rather than discrete.
In clustering, a set of inputs is to be divided into groups. Unlike in classification, the groups are
not known beforehand, making this typically an unsupervised task.
Density estimation finds the distribution of inputs in some space.
Dimensionality reduction simplifies inputs by mapping them into a lower-dimensional
space. Topic modeling is a related problem, where a program is given a list of human
language documents and is tasked to find out which documents cover similar topics.

History and relationships to other fields[edit]


See also: Timeline of machine learning

As a scientific endeavour, machine learning grew out of the quest for artificial intelligence. Already in
the early days of AI as an academic discipline, some researchers were interested in having
machines learn from data. They attempted to approach the problem with various symbolic methods,
as well as what were then termed "neural networks"; these were mostly perceptrons and other
models that were later found to be reinventions of the generalized linear models of
statistics.[18] Probabilistic reasoning was also employed, especially in automated medical
diagnosis.[17]:488
However, an increasing emphasis on the logical, knowledge-based approach caused a rift between
AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems
of data acquisition and representation.[17]:488 By 1980, expert systems had come to dominate AI, and
statistics was out of favor.[19] Work on symbolic/knowledge-based learning did continue within AI,
leading to inductive logic programming, but the more statistical line of research was now outside the
field of AI proper, in pattern recognition and information retrieval.[17]:708–710, 755 Neural networks research
had been abandoned by AI and computer science around the same time. This line, too, was
continued outside the AI/CS field, as "connectionism", by researchers from other disciplines
including Hopfield, Rumelhart and Hinton. Their main success came in the mid-1980s with the
reinvention of backpropagation.[17]:25
Machine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed
its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It
shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and
models borrowed from statistics and probability theory.[19] It also benefited from the increasing
availability of digitized information, and the possibility to distribute that via the Internet.
Machine learning and data mining often employ the same methods and overlap significantly, but
while machine learning focuses on prediction, based on known properties learned from the training
data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the
analysis step of Knowledge Discovery in Databases). Data mining uses many machine learning
methods, but with different goals; on the other hand, machine learning also employs data mining
methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much
of the confusion between these two research communities (which do often have separate
conferences and separate journals, ECML PKDD being a major exception) comes from the basic
assumptions they work with: in machine learning, performance is usually evaluated with respect to
the ability to reproduce known knowledge, while in Knowledge Discovery and Data Mining (KDD) the
key task is the discovery of previously unknown knowledge. Evaluated with respect to known
knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised
methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability
of training data.
Machine learning also has intimate ties to optimization: many learning problems are formulated as
minimization of some loss function on a training set of examples. Loss functions express the
discrepancy between the predictions of the model being trained and the actual problem instances
(for example, in classification, one wants to assign a label to instances, and models are trained to
correctly predict the pre-assigned labels of a set of examples). The difference between the two fields
arises from the goal of generalization: while optimization algorithms can minimize the loss on a
training set, machine learning is concerned with minimizing the loss on unseen samples.[20]
Relation to statistics[edit]
Machine learning and statistics are closely related fields. According to Michael I. Jordan, the ideas of
machine learning, from methodological principles to theoretical tools, have had a long pre-history in
statistics.[21] He also suggested the term data science as a placeholder to call the overall field.[21]
Leo Breiman distinguished two statistical modelling paradigms: data model and algorithmic
model,[22] wherein 'algorithmic model' means more or less the machine learning algorithms
like Random forest.
Some statisticians have adopted methods from machine learning, leading to a combined field that
they call statistical learning.[23]

Theory[edit]
Main article: Computational learning theory

A core objective of a learner is to generalize from its experience.[24][25] Generalization in this context is
the ability of a learning machine to perform accurately on new, unseen examples/tasks after having
experienced a learning data set. The training examples come from some generally unknown
probability distribution (considered representative of the space of occurrences) and the learner has
to build a general model about this space that enables it to produce sufficiently accurate predictions
in new cases.
The computational analysis of machine learning algorithms and their performance is a branch
of theoretical computer science known as computational learning theory. Because training sets are
finite and the future is uncertain, learning theory usually does not yield guarantees of the
performance of algorithms. Instead, probabilistic bounds on the performance are quite common.
The bias-variance decomposition is one way to quantify generalization error.
For the best performance in the context of generalization, the complexity of the hypothesis should
match the complexity of the function underlying the data. If the hypothesis is less complex than the
function, then the model has underfit the data. If the complexity of the model is increased in
response, then the training error decreases. But if the hypothesis is too complex, then the model is
subject to overfitting and generalization will be poorer.[26]
In addition to performance bounds, computational learning theorists study the time complexity and
feasibility of learning. In computational learning theory, a computation is considered feasible if it can
be done in polynomial time. There are two kinds of time complexity results. Positive results show
that a certain class of functions can be learned in polynomial time. Negative results show that certain
classes cannot be learned in polynomial time.

Approaches[edit]
Main article: List of machine learning algorithms

Decision tree learning[edit]


Main article: Decision tree learning

Decision tree learning uses a decision tree as a predictive model, which maps observations about an
item to conclusions about the item's target value.
Association rule learning[edit]
Main article: Association rule learning

Association rule learning is a method for discovering interesting relations between variables in large
databases.
Artificial neural networks[edit]
Main article: Artificial neural network

An artificial neural network (ANN) learning algorithm, usually called "neural network" (NN), is a
learning algorithm that is inspired by the structure and functional aspects of biological neural
networks. Computations are structured in terms of an interconnected group of artificial neurons,
processing information using a connectionist approach to computation. Modern neural networks
are non-linear statistical data modeling tools. They are usually used to model complex relationships
between inputs and outputs, to find patterns in data, or to capture the statistical structure in an
unknown joint probability distribution between observed variables.
Deep learning[edit]
Main article: Deep learning

Falling hardware prices and the development of GPUs for personal use in the last few years have
contributed to the development of the concept of deep learning, which consists of multiple hidden
layers in an artificial neural network. This approach tries to model the way the human brain
processes light and sound into vision and hearing. Some successful applications of deep learning
are computer vision and speech recognition.[27]
Inductive logic programming[edit]
Main article: Inductive logic programming

Inductive logic programming (ILP) is an approach to rule learning using logic programming as a
uniform representation for input examples, background knowledge, and hypotheses. Given an
encoding of the known background knowledge and a set of examples represented as a logical
database of facts, an ILP system will derive a hypothesized logic program that entails all positive and
no negative examples. Inductive programming is a related field that considers any kind of
programming languages for representing hypotheses (and not only logic programming), such
as functional programs.
Support vector machines[edit]
Main article: Support vector machines

Support vector machines (SVMs) are a set of related supervised learning methods used
for classification and regression. Given a set of training examples, each marked as belonging to one
of two categories, an SVM training algorithm builds a model that predicts whether a new example
falls into one category or the other.
Clustering[edit]
Main article: Cluster analysis

Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that
observations within the same cluster are similar according to some predesignated criterion or
criteria, while observations drawn from different clusters are dissimilar. Different clustering
techniques make different assumptions on the structure of the data, often defined by some similarity
metric and evaluated for example by internal compactness (similarity between members of the same
cluster) and separation between different clusters. Other methods are based on estimated
density and graph connectivity. Clustering is a method of unsupervised learning, and a common
technique for statistical data analysis.
Bayesian networks[edit]
Main article: Bayesian network
A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical
model that represents a set of random variables and their conditional independencies via a directed
acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic
relationships between diseases and symptoms. Given symptoms, the network can be used to
compute the probabilities of the presence of various diseases. Efficient algorithms exist that
perform inference and learning.
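As a minimal worked sketch of this kind of inference, the following Scala snippet applies Bayes' rule to a single disease node and a single symptom node; the probabilities are assumed, illustrative numbers rather than clinical data.

object BayesSketch {
  def main(args: Array[String]): Unit = {
    // Assumed illustrative probabilities (not real data).
    val pDisease = 0.01                // prior P(disease)
    val pSymptomGivenDisease = 0.90    // P(symptom | disease)
    val pSymptomGivenHealthy = 0.05    // P(symptom | no disease)

    // Bayes' rule: P(disease | symptom) = P(symptom | disease) * P(disease) / P(symptom)
    val pSymptom = pSymptomGivenDisease * pDisease + pSymptomGivenHealthy * (1.0 - pDisease)
    val posterior = pSymptomGivenDisease * pDisease / pSymptom

    println(f"P(disease | symptom) = $posterior%.3f") // prints roughly 0.154
  }
}

With these numbers the observed symptom raises the probability of the disease from 1% to about 15%; a full Bayesian network performs this kind of computation over many interdependent variables at once.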
Reinforcement learning[edit]
Main article: Reinforcement learning

Reinforcement learning is concerned with how an agent ought to take actions in an environment so
as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find
a policy that maps states of the world to the actions the agent ought to take in those states.
Reinforcement learning differs from the supervised learning problem in that correct input/output pairs
are never presented, nor sub-optimal actions explicitly corrected.
Representation learning[edit]
Main article: Representation learning

Several learning algorithms, mostly unsupervised learning algorithms, aim at discovering better
representations of the inputs provided during training. Classical examples include principal
components analysis and cluster analysis. Representation learning algorithms often attempt to
preserve the information in their input but transform it in a way that makes it useful, often as a pre-
processing step before performing classification or predictions, allowing reconstruction of the inputs
coming from the unknown data generating distribution, while not being necessarily faithful for
configurations that are implausible under that distribution.
Manifold learning algorithms attempt to do so under the constraint that the learned representation is
low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned
representation is sparse (has many zeros). Multilinear subspace learning algorithms aim to learn
low-dimensional representations directly from tensor representations for multidimensional data,
without reshaping them into (high-dimensional) vectors.[28] Deep learning algorithms discover multiple
levels of representation, or a hierarchy of features, with higher-level, more abstract features defined
in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one
that learns a representation that disentangles the underlying factors of variation that explain the
observed data.[29]
Similarity and metric learning[edit]
Main article: Similarity learning

In this problem, the learning machine is given pairs of examples that are considered similar and
pairs of less similar objects. It then needs to learn a similarity function (or a distance metric function)
that can predict if new objects are similar. It is sometimes used in Recommendation systems.
Sparse dictionary learning[edit]
Main article: Sparse dictionary learning

In this method, a datum is represented as a linear combination of basis functions, and the
coefficients are assumed to be sparse. Let x be a d-dimensional datum, D be a d by n matrix, where
each column of D represents a basis function. r is the coefficient to represent x using D.

Mathematically, sparse dictionary learning means solving min over D and r of ‖x − D r‖² subject to r being sparse (for example, having at most k non-zero entries). Generally speaking, n is assumed to be larger than d to allow the freedom for a sparse representation.
Learning a dictionary along with sparse representations is strongly NP-hard and also difficult to solve
approximately.[30] A popular heuristic method for sparse dictionary learning is K-SVD.
Sparse dictionary learning has been applied in several contexts. In classification, the problem is to
determine which classes a previously unseen datum belongs to. Suppose a dictionary for each class
has already been built. Then a new datum is associated with the class such that it's best sparsely
represented by the corresponding dictionary. Sparse dictionary learning has also been applied
in image de-noising. The key idea is that a clean image patch can be sparsely represented by an
image dictionary, but the noise cannot.[31]
Genetic algorithms[edit]
Main article: Genetic algorithm

A genetic algorithm (GA) is a search heuristic that mimics the process of natural selection, and uses
methods such as mutation and crossover to generate new genotype in the hope of finding good
solutions to a given problem. In machine learning, genetic algorithms found some uses in the 1980s
and 1990s.[32][33] Vice versa, machine learning techniques have been used to improve the
performance of genetic and evolutionary algorithms.[34]
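As a minimal sketch of the idea (not any particular library), the following toy genetic algorithm in Scala evolves bit strings toward the all-ones string (the classic "OneMax" problem) using tournament selection, single-point crossover, and bit-flip mutation; the population size, mutation rate, and generation count are arbitrary illustrative choices.

import scala.util.Random

object OneMaxGA {
  val rng = new Random(42)
  val genomeLength = 20
  val populationSize = 30
  val mutationRate = 0.02

  def fitness(genome: Vector[Int]): Int = genome.sum  // number of ones

  def randomGenome(): Vector[Int] = Vector.fill(genomeLength)(rng.nextInt(2))

  // Tournament selection: keep the fitter of two randomly chosen individuals.
  def select(pop: Vector[Vector[Int]]): Vector[Int] = {
    val a = pop(rng.nextInt(pop.size))
    val b = pop(rng.nextInt(pop.size))
    if (fitness(a) >= fitness(b)) a else b
  }

  // Single-point crossover followed by per-bit mutation.
  def offspring(p1: Vector[Int], p2: Vector[Int]): Vector[Int] = {
    val cut = rng.nextInt(genomeLength)
    val child = p1.take(cut) ++ p2.drop(cut)
    child.map(bit => if (rng.nextDouble() < mutationRate) 1 - bit else bit)
  }

  def main(args: Array[String]): Unit = {
    var population = Vector.fill(populationSize)(randomGenome())
    for (generation <- 1 to 50) {
      population = Vector.fill(populationSize)(offspring(select(population), select(population)))
      println(s"generation $generation: best fitness = ${population.map(fitness).max}")
    }
  }
}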
Rule-based machine learning[edit]
Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves rules to store, manipulate or apply knowledge. The defining characteristic of a
rule-based machine learner is the identification and utilization of a set of relational rules that
collectively represent the knowledge captured by the system. This is in contrast to other machine
learners that commonly identify a singular model that can be universally applied to any instance in
order to make a prediction.[35] Rule-based machine learning approaches include learning classifier
systems, association rule learning, and artificial immune systems.
Learning classifier systems[edit]
Main article: Learning classifier system

Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that
combine a discovery component (e.g. typically a genetic algorithm) with a learning component
(performing either supervised learning, reinforcement learning, or unsupervised learning). They seek
to identify a set of context-dependent rules that collectively store and apply knowledge in
a piecewise manner in order to make predictions.[36]

Applications[edit]
Applications for machine learning include:

Adaptive websites
Affective computing
Bioinformatics
Brain-machine interfaces
Cheminformatics
Classifying DNA sequences
Computational anatomy
Computer vision, including object recognition
Detecting credit card fraud
Game playing
Information retrieval
Internet fraud detection
Marketing
Machine learning control
Machine perception
Medical diagnosis
Economics
Natural language processing
Natural language understanding
Optimization and metaheuristic
Online advertising
Recommender systems
Robot locomotion
Search engines
Sentiment analysis (or opinion mining)
Sequence mining
Software engineering
Speech and handwriting recognition
Financial market analysis
Structural health monitoring
Syntactic pattern recognition
User behavior analytics
Translation[37]
In 2006, the online movie company Netflix held the first "Netflix Prize" competition to find a program
to better predict user preferences and improve the accuracy on its existing Cinematch movie
recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-
Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble
model to win the Grand Prize in 2009 for $1 million.[38] Shortly after the prize was awarded, Netflix
realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a
recommendation") and they changed their recommendation engine accordingly.[39]
In 2012, co-founder of Sun Microsystems Vinod Khosla predicted that 80% of medical doctors' jobs
would be lost in the next two decades to automated machine learning medical diagnostic software.[40]
In 2014, it has been reported that a machine learning algorithm has been applied in Art History to
study fine art paintings, and that it may have revealed previously unrecognized influences between
artists.[41]

Model assessments[edit]
Classification machine learning models can be validated by accuracy estimation techniques like
the Holdout method, which splits the data in a training and test set (conventionally 2/3 training set
and 1/3 test set designation) and evaluates the performance of the training model on the test set. In
comparison, the N-fold-cross-validation method randomly splits the data in k subsets where the k-1
instances of the data are used to train the model while the kth instance is used to test the predictive
ability of the training model. In addition to the holdout and cross-validation methods, bootstrap, which
samples n instances with replacement from the dataset, can be used to assess model accuracy.[42]
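A minimal sketch of the holdout method in plain Scala follows; the "model" here is a trivial majority-class classifier standing in for a real learner, and the labeled data are made up purely to show the split-train-evaluate shape of the procedure.

import scala.util.Random

object HoldoutSketch {
  def main(args: Array[String]): Unit = {
    val rng = new Random(0)
    // Made-up labeled data: (featureValue, label).
    val data = Vector.tabulate(300)(i => (rng.nextDouble(), if (i % 3 == 0) 1 else 0))

    // Holdout split: conventionally 2/3 training, 1/3 test.
    val shuffled = rng.shuffle(data)
    val (train, test) = shuffled.splitAt((shuffled.size * 2) / 3)

    // "Training": a trivial majority-class model standing in for a real learner.
    val majorityLabel = train.groupBy(_._2).maxBy(_._2.size)._1
    def predict(feature: Double): Int = majorityLabel

    // Evaluation on the held-out test set.
    val accuracy = test.count { case (x, y) => predict(x) == y }.toDouble / test.size
    println(f"holdout accuracy = $accuracy%.3f")
  }
}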
In addition to overall accuracy, investigators frequently report sensitivity and specificity, that is, the True Positive Rate (TPR) and True Negative Rate (TNR) respectively. Similarly, investigators sometimes report the False Positive Rate (FPR) as well as the False Negative Rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The Total Operating Characteristic (TOC) is an effective method to express a model's diagnostic ability: TOC shows the numerators and denominators of the previously mentioned rates, so it provides more information than the commonly used Receiver Operating Characteristic (ROC) and the ROC's associated Area Under the Curve (AUC).

Ethics[edit]
Machine Learning poses a host of ethical questions. Systems which are trained on datasets
collected with biases may exhibit these biases upon use, thus digitizing cultural
prejudices.[43] Responsible collection of data thus is a critical part of machine learning.
Because language contains biases, machines trained on language corpora will necessarily also
learn bias.[44]
See Machine ethics for additional information.

Software[edit]
Software suites containing a variety of machine learning algorithms include the following:
Free and open-source software[edit]

Deeplearning4j
dlib
ELKI
GNU Octave
H2O
Mahout
Mallet
mlpy
MLPACK
MOA (Massive Online Analysis)
MXNet
ND4J: ND arrays for Java
NuPIC
OpenAI Gym
OpenAI Universe
OpenNN
Orange
R
scikit-learn
Shogun
SMILE
TensorFlow
Torch
Yooreeka
Weka
Proprietary software with free and open-source editions[edit]

KNIME
RapidMiner
Proprietary software[edit]

Amazon Machine Learning


Angoss KnowledgeSTUDIO
Ayasdi
Google Prediction API
IBM SPSS Modeler
KXEN Modeler
LIONsolver
Mathematica
MATLAB
Microsoft Azure Machine Learning
Neural Designer
NeuroSolutions
Oracle Data Mining
RCASE
SAS Enterprise Miner
SequenceL
Skymind
Splunk
STATISTICA Data Miner

Journals[edit]
Journal of Machine Learning Research
Machine Learning
Neural Computation

Conferences[edit]
Conference on Neural Information Processing Systems
International Conference on Machine Learning

See also[edit]

Artificial intelligence portal

Machine learning portal

Automatic reasoning
Big data
Computational intelligence
Computational neuroscience
Data science
Ethics of artificial intelligence
Existential risk from advanced artificial intelligence
Explanation-based learning
Quantum machine learning
Important publications in machine learning
List of machine learning algorithms
List of datasets for machine learning research
Similarity learning
Machine Learning Applications in Bioinformatics
Key Skills - Skilling up for Salesforce
The return on investment for mastering Salesforce applications can prove to
be very attractive for IT professionals.
John Kittle, Eurostaff Technology, November 23, 2012


What is the skill? We've seen demand for professionals who can implement Salesforce applications
nearly triple in the past couple of years. Salesforce, which is the enterprise cloud computing leader, offers
customer relationship management and sales applications. Salesforce can be divided into four distinct job
types that cover the different implementation stages of products: Salesforce
Administrator, Force.com Developer, Implementation Expert and Architects.
The administrators keep Salesforce working on a day-to-day basis after the implementation team puts the
right solutions in place. The architects and developers design and build the new applications that make
businesses grow and stay at the forefront of technological advances.
Many existing developers and IT professionals will have the skills required to be able to perform in a
Salesforce type role, but for experts to really achieve success (and the highest salaries) they need to
become cloud certified. This consists of taking part in classroom training and/or self-study, as well as
exams.

As well as CRM, the top related IT skills that will help you are:

Apex Code
Java
SaaS
.NET
Visualforce
Where did it come from? Salesforce was founded 13 years ago by former Oracle executive Marc
Benioff and is best known for its customer relationship software. In the last three years, Salesforce has
made over 15 acquisitions as it expands into the social enterprise arena, which has since created the six distinct products below.
What is it for? In its current form, Salesforce can be split into six pre-defined products (mainly driven by
acquisitions). The first, and the original area, is The Sales Cloud which aims to boost revenues and
productivity within the business. The Service Cloud is about connecting with customers from a service
point of view via social media communities. Marketing Cloud aligns the sales, service and marketing
functions of a business by listening to social chat and connecting with customers. Salesforce Platform is used by a wide range of companies to build real-time apps for their customers. Chatter acts much like a social intranet for businesses, allowing colleagues across divisions, offices and countries to collaborate in real time, in context, from anywhere. Finally, Work.com is an internal sales performance management platform; it is designed to motivate the sales team and drive performance.

What is unique about it? The intuitive, easy-to-use and well-designed aspects of the Salesforce platform mean there are far more options from a development point of view. The Sales Cloud also offers features that are unique in the CRM sector. What is also unique about Salesforce is that implementation is fast and can be achieved with a relatively small team: typically a medium-sized implementation could take around a year and require only half a dozen consultants. Candidates with experience in implementing Siebel will be well suited to Salesforce roll-outs, although our clients that have requested Siebel roll-outs are usually large and the implementation takes a couple of years, whereas our clients requesting Salesforce are sometimes small or medium sized and looking for a shorter-term contract. This offers variety, but not necessarily long-term stability, for candidates.
Where is it used? Salesforce is not ring-fenced into any particular industry and is common across
verticals such as Communications (Telefonica), Financial Services (ING), Government, Healthcare &
Life Sciences, Hi-Tech (Dell & Cisco), Manufacturing (Toyota) and Retail (Burberry).
How do I acquire the skill, is it difficult to master? Salesforce offers a range of training programmes at different prices; for example, the Administration Essentials for New Admins virtual class costs £3,125 for 25 hours. There are also free online resources and a range of books, but you will need to be motivated to teach yourself and invest the time (and sometimes money) to become certified and/or at least be able to demonstrate your expertise in the Salesforce platform. We also work closely with training providers and often put candidates in touch with consultancies to improve their skillset.
A knowledge of development/architecture is needed for the more advanced Salesforce courses, and the
best approach for career development is to specialise.

The return on investment on your Salesforce training could be very attractive, as the cost of the training courses is relatively low, yet the day rates you can command are high.

Pay and prospects? Salaries are up 10% on last year, averaging around £46,000 per annum (though do note that salaries are sensitive to location). Application support staff can expect £30,000 with up to two years' experience, rising potentially to £50,000 for those with five years. Database developers would look at similar levels, with potentially a few thousand extra for the more experienced. Business Analysts take a bit more home: £34,000 at the start of their career, rising to £58,000 with five years' experience. Development Managers could potentially earn around £83,000 with five years of experience. On the contract side, typical daily rates for Salesforce developers (technical resources) with two to three years' experience would be upwards of £330 a day. Salesforce BAs/Architects can earn upwards of £410 a day with two to three years' experience and £450 or more with five-plus years.
What's next/where does it lead? With so many acquisitions and a constantly changing IT industry, people with Salesforce skills will have to evolve with the company and platform to stay at the forefront of the market.
[See also Key Skills - Working with Workday]
John Kittle is Contracts Director at Eurostaff Technology, a specialist recruiter to the technology sector.

Selenium (software)
From Wikipedia, the free encyclopedia

Selenium

Stable release 3.0 / October 13, 2016; 7 months ago

Repository github.com/SeleniumHQ/selenium

Development status Active

Written in Java

Operating system Cross-platform

Type Software testing framework for web applications

License Apache License 2.0

Website www.seleniumhq.org

Selenium is a portable software-testing framework for web applications. Selenium provides a record/playback tool for authoring tests without the need to learn a test scripting language (Selenium
IDE). It also provides a test domain-specific language (Selenese) to write tests in a number of
popular programming languages, including C#, Groovy, Java, Perl, PHP, Python, Ruby and Scala.
The tests can then run against most modern web browsers. Selenium deploys on Windows, Linux,
and OS X platforms. It is open-source software, released under the Apache 2.0 license: web
developers can download and use it without charge.

Contents
1History
2Components
o 2.1Selenium IDE
o 2.2Selenium client API
o 2.3Selenium Remote Control
o 2.4Selenium WebDriver
o 2.5Selenium Grid
3See also
4References
5External links

History[edit]
Selenium was originally developed by Jason Huggins in 2004 as an internal tool at ThoughtWorks.
Huggins was later joined by other programmers and testers at ThoughtWorks, before Paul Hammant
joined the team and steered the development of the second mode of operation that would later
become "Selenium Remote Control" (RC). The tool was open sourced that year.
In 2005 Dan Fabulich and Nelson Sproul (with help from Pat Lightbody) made an offer to accept a
series of patches that would transform Selenium-RC into what it became best known for. In the
same meeting, the steering of Selenium as a project would continue as a committee, with Huggins
and Hammant being the ThoughtWorks representatives.
In 2007, Huggins joined Google. Together with others like Jennifer Bevan, he continued with the
development and stabilization of Selenium RC. At the same time, Simon Stewart at ThoughtWorks
developed a superior browser automation tool called WebDriver. In 2009, after a meeting between
the developers at the Google Test Automation Conference, it was decided to merge the two projects,
and call the new project Selenium WebDriver, or Selenium 2.0.[1]
In 2008, Philippe Hanrigou (then at ThoughtWorks) made "Selenium Grid", which provides a hub
allowing the running of multiple Selenium tests concurrently on any number of local or remote
systems, thus minimizing test execution time. Grid offered, as open source, a similar capability to the
internal/private Google cloud for Selenium RC. Pat Lightbody had already made a private cloud for
"HostedQA" which he went on to sell to Gomez, Inc.
The name Selenium comes from a joke made by Huggins in an email, mocking a competitor
named Mercury, saying that you can cure mercury poisoning by taking selenium supplements. The
others that received the email took the name and ran with it.[2]

Components[edit]
Selenium is composed of several components with each taking on a specific role in aiding the
development of web application test automation.
Selenium IDE[edit]
Selenium IDE is a complete integrated development environment (IDE) for Selenium tests. It is
implemented as a Firefox Add-On, and allows recording, editing, and debugging tests. It was
previously known as Selenium Recorder. Selenium-IDE was originally created by Shinya Kasatani
and donated to the Selenium project in 2006. It is little-maintained and is compatible with Selenium
RC, which was deprecated.[3]
Scripts may be automatically recorded and edited manually providing autocompletion support and
the ability to move commands around quickly. Scripts are recorded in Selenese, a special test
scripting language for Selenium. Selenese provides commands for performing actions in a browser
(click a link, select an option), and for retrieving data from the resulting pages.
Selenium client API[edit]
As an alternative to writing tests in Selenese, tests can also be written in various programming
languages. These tests then communicate with Selenium by calling methods in the Selenium Client
API. Selenium currently provides client APIs for Java, C#, Ruby, JavaScript and Python.
With Selenium 2, a new Client API was introduced (with WebDriver as its central component).
However, the old API (using class Selenium) is still supported.
Selenium Remote Control[edit]
Selenium Remote Control (RC) is a server, written in Java, that accepts commands for the browser
via HTTP. RC makes it possible to write automated tests for a web application in any programming
language, which allows for better integration of Selenium in existing unit test frameworks. To make
writing tests easier, Selenium project currently provides client drivers
for PHP, Python, Ruby, .NET, Perl and Java. The Java driver can also be used with JavaScript (via
the Rhino engine). An instance of the Selenium RC server is needed to launch HTML test cases, which means that the port should be different for each parallel run.[citation needed] However, for Java/PHP test cases only one Selenium RC instance needs to be running continuously.[citation needed]
Selenium Remote Control was a refactoring of Driven Selenium or Selenium B designed by Paul
Hammant, credited with Jason as co-creator of Selenium. The original version directly launched a
process for the browser in question, from the test language of Java, .Net, Python or Ruby. The wire
protocol (called 'Selenese' in its day) was reimplemented in each language port. After the refactor by
Dan Fabulich, and Nelson Sproul (with help from Pat Lightbody) there was an intermediate daemon
process between the driving test script, and the browser. The benefits included the ability to drive
remote browsers, and the reduced need to port every line of code to an increasingly growing set of
languages. Selenium Remote Control completely took over from the Driven Selenium code-line in
2006. The browser pattern for 'Driven'/'B' and 'RC' was response/request, which subsequently
became known as Comet.
With the release of Selenium 2, Selenium RC has been officially deprecated in favor of Selenium
WebDriver.
Selenium WebDriver[edit]
Selenium WebDriver is the successor to Selenium RC. Selenium WebDriver accepts commands
(sent in Selenese, or via a Client API) and sends them to a browser. This is implemented through a
browser-specific browser driver, which sends commands to a browser, and retrieves results. Most
browser drivers actually launch and access a browser application (such
as Firefox, Chrome or Internet Explorer); there is also an HtmlUnit browser driver, which simulates a
browser using HtmlUnit.
Unlike in Selenium 1, where the Selenium server was necessary to run tests, Selenium WebDriver
does not need a special server to execute tests. Instead, the WebDriver directly starts a browser
instance and controls it. However, Selenium Grid can be used with WebDriver to execute tests on
remote systems (see below). Where possible, WebDriver uses native operating system level
functionality rather than browser-based JavaScript commands to drive the browser. This bypasses
problems with subtle differences between native and JavaScript commands, including security
restrictions.[4]
In practice, this means that the Selenium 2.0 API has significantly fewer calls than does the
Selenium 1.0 API. Where Selenium 1.0 attempted to provide a rich interface for many different
browser operations, Selenium 2.0 aims to provide a basic set of building blocks from which
developers can create their own Domain Specific Language. One such DSL already exists:
the Watir project in the Ruby language has a rich history of good design. Watir-webdriver
implements the Watir API as a wrapper for Selenium-Webdriver in Ruby. Watir-webdriver is created
entirely automatically, based on the WebDriver specification and the HTML specification.
As of early 2012, Simon Stewart (inventor of WebDriver), who was then with Google and now with
Facebook, and David Burns of Mozilla were negotiating with the W3C to make WebDriver an internet
standard. In July 2012, the working draft was released. Selenium-Webdriver (Selenium 2.0) is fully
implemented and supported in Python, Ruby, Java, and C#.
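A minimal WebDriver sketch, written in Scala against the Selenium Java bindings, follows; it assumes a local Firefox installation with a matching geckodriver on the PATH, and the URL is only illustrative.

import org.openqa.selenium.firefox.FirefoxDriver

object WebDriverSketch {
  def main(args: Array[String]): Unit = {
    val driver = new FirefoxDriver()           // starts a real browser instance via the Firefox driver
    try {
      driver.get("https://www.example.org/")   // send a command: navigate to an illustrative page
      println(driver.getTitle)                 // retrieve a result back from the browser
    } finally {
      driver.quit()                            // always shut the browser down
    }
  }
}

The same test can target other browsers by swapping in their drivers (for example ChromeDriver), which is the point of the browser-specific driver design described above.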
Selenium Grid[edit]
Selenium Grid is a server that allows tests to use web browser instances running on remote
machines. With Selenium Grid, one server acts as the hub. Tests contact the hub to obtain access to
browser instances. The hub has a list of servers that provide access to browser instances
(WebDriver nodes), and lets tests use these instances. Selenium Grid allows running tests in parallel
on multiple machines, and to manage different browser versions and browser configurations
centrally (instead of in each individual test).
The ability to run tests on remote browser instances is useful to spread the load of testing across
several machines, and to run tests in browsers running on different platforms or operating systems.
The latter is particularly useful in cases where not all browsers to be used for testing can run on the
same platform.
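A minimal sketch of running the same kind of test through a Grid hub, again in Scala via the Java bindings: the test talks to a RemoteWebDriver pointed at the hub URL (the conventional default address shown here is only illustrative), and the hub forwards the session to a node with a matching browser. The DesiredCapabilities call reflects the Selenium 2/3-era API and may differ in later versions.

import java.net.URL
import org.openqa.selenium.remote.{DesiredCapabilities, RemoteWebDriver}

object GridSketch {
  def main(args: Array[String]): Unit = {
    // Conventional default hub address; replace with your own Grid hub.
    val hub = new URL("http://localhost:4444/wd/hub")
    val driver = new RemoteWebDriver(hub, DesiredCapabilities.firefox()) // hub selects a Firefox node
    try {
      driver.get("https://www.example.org/")
      println(driver.getTitle)
    } finally {
      driver.quit()
    }
  }
}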

See also[edit]

Software Testing portal

Free software portal

Acceptance testing
Capybara (software)
Given-When-Then
List of web testing tools
MediaWiki Selenium extension
MediaWiki Selenium Framework extension
Regression testing
Robot Framework
Sikuli

Apache Spark
From Wikipedia, the free encyclopedia

Apache Spark
Original author(s) Matei Zaharia

Developer(s) Apache Software Foundation, UC

Berkeley AMPLab, Databricks

Initial release May 30, 2014; 3 years ago

Stable release v2.1.1 / May 2, 2017; 30 days ago

Repository github.com/apache/spark

Development status Active

Written in Scala, Java, Python, R[1]

Operating system Microsoft Windows, OS X, Linux

Type Data analytics, machine

learning algorithms

License Apache License 2.0

Website spark.apache.org

Apache Spark is an open-source cluster-computing framework. Originally developed at the University of California, Berkeley's AMPLab, the Spark codebase was later donated to
the Apache Software Foundation, which has maintained it since. Spark provides an interface for
programming entire clusters with implicit data parallelism and fault-tolerance.

Contents

1Overview
o 1.1Spark Core
o 1.2Spark SQL
o 1.3Spark Streaming
o 1.4MLlib Machine Learning Library
o 1.5GraphX
2History
3Notes
4See also
5References
6External links

Overview[edit]
Apache Spark provides programmers with an application programming interface centered on a data
structure called the resilient distributed dataset (RDD), a read-only multiset of data items distributed
over a cluster of machines, that is maintained in a fault-tolerant way.[2] It was developed in response
to limitations in the MapReduce cluster computing paradigm, which forces a particular
linear dataflow structure on distributed programs: MapReduce programs read input data from
disk, map a function across the data, reduce the results of the map, and store reduction results on
disk. Spark's RDDs function as a working set for distributed programs that offers a (deliberately)
restricted form of distributed shared memory.[3]
The availability of RDDs facilitates the implementation of both iterative algorithms, that visit their
dataset multiple times in a loop, and interactive/exploratory data analysis, i.e., the
repeated database-style querying of data. The latency of such applications (compared to a
MapReduce implementation, as was common in Apache Hadoop stacks) may be reduced by several
orders of magnitude.[2][4] Among the class of iterative algorithms are the training algorithms
for machine learning systems, which formed the initial impetus for developing Apache Spark.[5]
Apache Spark requires a cluster manager and a distributed storage system. For cluster
management, Spark supports standalone (native Spark cluster), Hadoop YARN, or Apache
Mesos.[6] For distributed storage, Spark can interface with a wide variety, including Hadoop
Distributed File System (HDFS),[7] MapR File System (MapR-FS),[8] Cassandra,[9] OpenStack
Swift, Amazon S3, Kudu, or a custom solution can be implemented. Spark also supports a pseudo-
distributed local mode, usually used only for development or testing purposes, where distributed
storage is not required and the local file system can be used instead; in such a scenario, Spark is
run on a single machine with one executor per CPU core.
Spark Core[edit]
Spark Core is the foundation of the overall project. It provides distributed task dispatching,
scheduling, and basic I/O functionalities, exposed through an application programming interface
(for Java, Python, Scala, and R) centered on the RDD abstraction (the Java API is available for other
JVM languages, but is also usable for some other non-JVM languages, such as Julia,[10] that can
connect to the JVM). This interface mirrors a functional/higher-order model of programming: a
"driver" program invokes parallel operations such as map, filter or reduce on an RDD by passing a
function to Spark, which then schedules the function's execution in parallel on the cluster.[2] These
operations, and additional ones such as joins, take RDDs as input and produce new RDDs. RDDs
are immutable and their operations are lazy; fault-tolerance is achieved by keeping track of the
"lineage" of each RDD (the sequence of operations that produced it) so that it can be reconstructed
in the case of data loss. RDDs can contain any type of Python, Java, or Scala objects.
Aside from the RDD-oriented functional style of programming, Spark provides two restricted forms of
shared variables: broadcast variables reference read-only data that needs to be available on all
nodes, while accumulators can be used to program reductions in an imperative style.[2]
A typical example of RDD-centric functional programming is the following Scala program that
computes the frequencies of all words occurring in a set of text files and prints the most common
ones. Each map, flatMap (a variant of map) and reduceByKey takes an anonymous function that
performs a simple operation on a single data item (or a pair of items), and applies its argument to
transform an RDD into a new RDD.
val conf = new SparkConf().setAppName("wiki_test") // Create a Spark configuration object.
val sc = new SparkContext(conf) // Create a Spark context.
val data = sc.textFile("/path/to/somedir") // Read the files in "somedir" into an RDD of lines.
val tokens = data.flatMap(_.split(" ")) // Split each line into a list of tokens (words).
val wordFreq = tokens.map((_, 1)).reduceByKey(_ + _) // Add a count of one to each token, then sum the counts per word type.
wordFreq.sortBy(s => -s._2).map(x => (x._2, x._1)).top(10) // Get the top 10 words. Swap word and count to sort by count.

Spark SQL[edit]
Spark SQL is a component on top of Spark Core that introduced a data abstraction called
DataFrames,[a] which provides support for structured and semi-structured data. Spark SQL provides
a domain-specific language (DSL) to manipulate DataFrames in Scala, Java, or Python. It also
provides SQL language support, with command-line interfaces and ODBC/JDBC server. Although
DataFrames lack the compile-time type-checking afforded by RDDs, as of Spark 2.0, the strongly
typed DataSet is fully supported by Spark SQL as well.

import org.apache.spark.sql.SQLContext

val url = "jdbc:mysql://yourIP:yourPort/test?user=yourUsername;password=yourPassword" // URL for your database server.
val sqlContext = new org.apache.spark.sql.SQLContext(sc) // Create a SQL context object.

val df = sqlContext
  .read
  .format("jdbc")
  .option("url", url)
  .option("dbtable", "people")
  .load()

df.printSchema() // Looks at the schema of this DataFrame.

val countsByAge = df.groupBy("age").count() // Counts people by age.

Spark Streaming[edit]
Spark Streaming leverages Spark Core's fast scheduling capability to perform streaming analytics. It
ingests data in mini-batches and performs RDD transformations on those mini-batches of data. This
design enables the same set of application code written for batch analytics to be used in streaming
analytics, thus facilitating easy implementation of lambda architecture.[11][12] However, this
convenience comes with the penalty of latency equal to the mini-batch duration. Other streaming
data engines that process event by event rather than in mini-batches include Storm and the
streaming component of Flink.[13] Spark Streaming has support built-in to consume
from Kafka, Flume, Twitter, ZeroMQ, Kinesis, and TCP/IP sockets.[14]
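A minimal Spark Streaming sketch in the same Scala style as the RDD example above: it counts words arriving on a TCP socket in one-second mini-batches. The host and port are illustrative (a tool such as netcat can feed the socket), and the configuration is deliberately bare.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("streaming_word_count")
    val ssc = new StreamingContext(conf, Seconds(1))      // one-second mini-batches

    val lines = ssc.socketTextStream("localhost", 9999)   // illustrative TCP source
    val counts = lines
      .flatMap(_.split(" "))                              // the same RDD-style transformations,
      .map(word => (word, 1))                             // applied to each mini-batch
      .reduceByKey(_ + _)
    counts.print()                                        // print a few counts per batch

    ssc.start()                                           // start the streaming computation
    ssc.awaitTermination()
  }
}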
MLlib Machine Learning Library[edit]
Spark MLlib is a distributed machine learning framework on top of Spark Core that, due in large part
to the distributed memory-based Spark architecture, is as much as nine times as fast as the disk-
based implementation used by Apache Mahout (according to benchmarks done by the MLlib
developers against the Alternating Least Squares (ALS) implementations, and before Mahout itself
gained a Spark interface), and scales better than Vowpal Wabbit.[15] Many common machine learning and statistical algorithms have been implemented and are shipped with MLlib, simplifying large-scale machine learning pipelines; they include the following (a brief usage sketch follows the list):

summary statistics, correlations, stratified sampling, hypothesis testing, random data generation[16]
classification and regression: support vector machines, logistic regression, linear regression,
decision trees, naive Bayes classification
collaborative filtering techniques including alternating least squares (ALS)
cluster analysis methods including k-means, and Latent Dirichlet Allocation (LDA)
dimensionality reduction techniques such as singular value decomposition (SVD), and principal
component analysis (PCA)
feature extraction and transformation functions
optimization algorithms such as stochastic gradient descent, limited-memory BFGS (L-BFGS)
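As the usage sketch referred to above, the following Scala snippet trains one of the listed algorithms (logistic regression) through the DataFrame-based spark.ml API; the tiny in-memory dataset and the parameter values are purely illustrative.

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object LogisticRegressionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("mllib_sketch").getOrCreate()
    import spark.implicits._

    // Tiny illustrative training set of (label, features) rows.
    val training = Seq(
      (1.0, Vectors.dense(0.0, 1.1, 0.1)),
      (0.0, Vectors.dense(2.0, 1.0, -1.0)),
      (0.0, Vectors.dense(2.0, 1.3, 1.0)),
      (1.0, Vectors.dense(0.0, 1.2, -0.5))
    ).toDF("label", "features")

    val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.01)
    val model = lr.fit(training)          // train the classifier

    model.transform(training)             // predict on the same data,
      .select("label", "prediction")      // purely to show the API shape
      .show()

    spark.stop()
  }
}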
GraphX[edit]
GraphX is a distributed graph processing framework on top of Apache Spark. Because it is based on
RDDs, which are immutable, graphs are immutable and thus GraphX is unsuitable for graphs that
need to be updated, let alone in a transactional manner like a graph database.[17] GraphX provides
two separate APIs for implementation of massively parallel algorithms (such as PageRank):
a Pregel abstraction, and a more general MapReduce style API.[18] Unlike its predecessor Bagel,
which was formally deprecated in Spark 1.6, GraphX has full support for property graphs (graphs
where properties can be attached to edges and vertices).[19]
GraphX can be viewed as the Spark in-memory equivalent of Apache Giraph, which used Hadoop's disk-based MapReduce.[20]
Like Apache Spark, GraphX initially started as a research project at UC Berkeley's AMPLab and
Databricks, and was later donated to the Apache Software Foundation and the Spark project.[21]
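
A minimal sketch of running PageRank through GraphX's built-in implementation follows; the edge-list file path is an assumption for the example:

import org.apache.spark.graphx.GraphLoader

val graph = GraphLoader.edgeListFile(sc, "/path/to/edges.txt") // Assumed file: one "srcId dstId" pair per line
val ranks = graph.pageRank(0.0001).vertices                    // Iterate until rank changes fall below the tolerance
ranks.sortBy(_._2, ascending = false).take(5).foreach(println) // Show the five highest-ranked vertices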

History[edit]
Spark was initially started by Matei Zaharia at UC Berkeley's AMPLab in 2009, and open sourced in
2010 under a BSD license.
In 2013, the project was donated to the Apache Software Foundation and switched its license
to Apache 2.0. In February 2014, Spark became a Top-Level Apache Project.[22]
In November 2014, Spark founder M. Zaharia's company Databricks set a new world record in large
scale sorting using Spark.[23][third-party source needed]
Spark had in excess of 1000 contributors in 2015,[24] making it one of the most active projects in the
Apache Software Foundation[25] and one of the most active open source big data projects.[26]
Given the platform's popularity by 2014, paid programs like General Assembly and free fellowships like The Data Incubator began offering customized training courses.[27]

Version  Original release date  Latest version  Release date
0.5      2012-06-12             0.5.1           2012-10-07
0.6      2012-10-14             0.6.2           2013-02-07[28]
0.7      2013-02-27             0.7.3           2013-07-16
0.8      2013-09-25             0.8.1           2013-12-19
0.9      2014-02-02             0.9.2           2014-07-23
1.0      2014-05-30             1.0.2           2014-08-05
1.1      2014-09-11             1.1.1           2014-11-26
1.2      2014-12-18             1.2.2           2015-04-17
1.3      2015-03-13             1.3.1           2015-04-17
1.4      2015-06-11             1.4.1           2015-07-15
1.5      2015-09-09             1.5.2           2015-11-09
1.6      2016-01-04             1.6.3           2016-11-07
2.0      2016-07-26             2.0.2           2016-11-14
2.1      2016-12-28             2.1.1           2017-05-02


Tableau Software

Type: Public
Traded as: NASDAQ: DATA
Industry: Software
Founded: Seattle, Washington (2003)
Founders: Christian Chabot, Chris Stolte, Pat Hanrahan
Headquarters: Seattle, Washington, United States
Key people: Adam Selipsky (CEO); Christian Chabot, Co-founder and Chairman of the Board; Chris Stolte, Co-founder and Technical Advisor; Andrew Beers, CDO; Pat Hanrahan, Chief Scientist and Co-founder
Revenue: US$412,616,000 (2014); US$232,440,000 (2013)[1]
Net income: US$5,873,000 (2014); US$7,076,000 (2013)[1]
Number of employees: 3,445 (April 2017)
Website: tableau.com

Tableau Software (/tæˈbloʊ/ tab-LOH) is a software company[2] headquartered in Seattle, Washington, United States, which produces interactive data visualization products[3] focused on business intelligence.[4] The company began as a way to commercialize research conducted at Stanford University's Department of Computer Science between 1999 and 2002.[5] It was founded in Mountain View, California, in January 2003 by Chris Stolte, who specialized in visualization techniques for exploring and analyzing relational databases and data cubes.[6] The product queries relational databases, OLAP cubes, cloud databases, and spreadsheets and then generates a number of graph types.
Tableau has mapping functionality[7][8] and is able to plot latitude and longitude coordinates,[9] although it has been criticized for being overly US-centric.[10] The company also offers custom geocoding,[11] as well as five ways to access its products: Desktop (in professional and personal editions), Server, Online (which scales to support thousands of users), Reader, and Public, with the last two free to use.[12] Vizable, a consumer data visualization mobile app, was released in 2015.[13]

Contents
[hide]

1Public company
2Wikileaks and policy changes
3Awards
4References
5External links

Public company[edit]
On May 17, 2013, Tableau launched an initial public offering (IPO) on the New York Stock
Exchange,[14] raising more than $250 million USD.[15] Prior to its IPO, Tableau raised over $45 million
in venture capital investment from investors such as NEA and Meritech.[15]
The company's 2013 revenue reached $232.44 million, an 82% increase over 2012's $128 million.[16] In 2010, Tableau reported revenue of $34.2 million; that figure grew to $62.4 million in 2011 and $127.7 million in 2012. Profit during the same periods came to $2.7 million, $3.4 million, and $1.6 million, respectively.[17] The founders had moved the company to Seattle, Washington, in October 2003, where it remains headquartered today.[18] In August 2016, Tableau announced the appointment of Adam Selipsky as president and CEO, effective September 16, 2016, replacing co-founder Christian Chabot as CEO.[19]

Wikileaks and policy changes[edit]

Pie chart created by Tableau Software

Population pyramid created by Tableau Software

On December 2, 2010, Tableau withdrew its visualizations from the contents of the United States
diplomatic cables leak by WikiLeaks, with Tableau stating that it was directly due to political pressure
from US Senator Joe Lieberman.[20][21]
On February 21, 2011, Tableau posted an updated data policy.[22] The accompanying blog post cited
the two main changes as (1) creating a formal complaint process and (2) using freedom of speech
as a guiding principle.[23] In addition, the post announced the creation of an advisory board to help the
company navigate future situations that "push the boundaries of the policy."[23] Tableau likened the
new policy to the model set forth in the Digital Millennium Copyright Act, and opined that under the
new policy, the Wikileaks cables would not have been removed.[24]
Awards[edit]
Tableau Software has won awards including "Best Overall in Data Visualization" by DM Review,
"Best of 2005 for Data Analysis" by PC Magazine,[25] and "2008 Best Business Intelligence Solution
(CODiE award)" by the Software & Information Industry Association.[26]

References[edit]