
FACILITATING THE SPREAD OF KNOWLEDGE AND INNOVATION IN PROFESSIONAL SOFTWARE DEVELOPMENT

Patterns of DevOps Culture
eMag Issue 36 - November 2015

ARTICLE: Practical Postmortems at Etsy
ARTICLE: DevOps Team Topologies
INTERVIEW & BOOK REVIEW: Build Quality In


Patterns of DevOps Culture // eMag Issue 36 - Nov 2015 1
Practical Postmortems at Etsy
We take a look at Etsy’s blameless postmortems, both in terms of philosophy, process, and practical measures/guidance to avoid blame and better prepare for the next outage. Because failures are inevitable in complex socio-technical systems, it’s the failure handling and resolution that can be improved by learning from postmortems.

How Different Team Topologies Influence DevOps Culture
There are many different team topologies that can be effective for DevOps. Each topology comes with a slightly different culture, and a team topology suitable for one organisation may not be suited to another organisation, even in a similar sector. This article explores the cultural differences between team topologies for DevOps, to help you choose a suitable DevOps topology for your organisation.

Leadership, Mentoring, and Team Chemistry
How does fire fighting compare to DevOps? Michael Biven,
team lead at Ticketmaster, shares important lessons on lead-
ership, mentoring and team chemistry from his experience
as a fire fighter.

Infrastructure Code Reviews


As infrastructure becomes code, reviewing (and testing)
provides the confidence necessary for refactoring and fixing
systems. Reviews also help spread consistent best practices
throughout an organization and are applicable where testing
might require too much scaffolding.

Build Quality In: Book Review and Interview


Book review and interview with Steve Smith and Matthew Skelton, authors of “Build Quality In”, a
collection of experience reports (including their own) on Continuous Delivery and DevOps initia-
tives, by authors ranging from in-house technical and process leaders to external consultants imple-
menting technical and organizational improvements.

FOLLOW US: facebook.com/InfoQ | @InfoQ | google.com/+InfoQ | linkedin.com/company/infoq

CONTACT US
GENERAL FEEDBACK feedback@infoq.com
ADVERTISING sales@infoq.com
EDITORIAL editors@infoq.com
MANUEL PAIS is InfoQ’s DevOps Lead Editor and an enthusiast of Continuous Delivery and Agile practices. Manuel Pais tweets @manupaisable

A LETTER FROM
THE EDITOR

DevOps is a movement. DevOps is a mindset. DevOps is devs and ops working together. DevOps is a way of organizing. DevOps is continuous learning.

The intangibility of DevOps makes it hard for leaders to come up with a clear-cut roadmap for adopting DevOps in their organizations. It requires grass-roots adoption and top-down support. The meaning of DevOps is highly contextual as there are no popular methodologies with prescribed practices to abide by.

However, healthy organizations exhibit similar patterns of behavior, structure, and improvement effort. In this e-mag, we explore some of those patterns through testimonies from their practitioners and through analysis by consultants in the field who have been exposed to multiple DevOps adoption initiatives.

First, Daniel Schauenberg takes a look at Etsy’s blameless postmortems in terms of philosophy, process, and practical measures and guidance to avoid blame and better prepare for the next outage. Because failures are inevitable in complex socio-technical systems, it’s the failure handling and resolution that can improve by learning from postmortems.

Matthew Skelton explains how there are many different team topologies that can work for DevOps. Each topology comes with a slightly different culture, and a team topology suitable for one organization may not be suited to another, even in a similar sector. Skelton explores the cultural differences between team topologies for DevOps to help you choose a suitable DevOps topology for your organization.

Michael Biven, team lead at Ticketmaster, shares lessons he learned as a firefighter on leadership, mentoring, and team chemistry. Biven successfully applied these lessons to the IT world. Leaders need to nurture their teams by complaining up and praising down. They need to let team members take the lead when appropriate. Biven mentions how team chemistry amplifies individuals’ talents as well as the team’s talent. This question sums up the mindset everyone should strive for: “What did you do today that you would be proud of?”

As infrastructure becomes code, review and testing provide the confidence necessary for refactoring and fixing systems. Code reviews are not just for software, Chris Burroughs explains. In fact, they are well suited for infrastructure automation given that some scenarios might be difficult to test — they can be too expensive or take too much time — but changes can be immediately reviewed. Artifacts that should be reviewed include configuration management, deployment scripts, provisioning manifests, packages, and runbooks. Reviews also help spread consistent best practices throughout an organization, Burroughs adds.

Finally, we finish with an insightful interview with Steve Smith and Matthew Skelton, authors of Build Quality In, a collection of experience reports (including their own) on Continuous Delivery (CD) and DevOps initiatives. Report authors range from in-house technical and process leaders to external consultants. Many stories talk about organizational improvements as a pre-condition for success. A key takeaway is how contextual CD and DevOps are, which means that there are no silver bullets but also that different organizations and people can make it work if they keep a focus on people as well as on process and tools.
Read online on InfoQ

Practical Postmortems at Etsy

Daniel Schauenberg is a staff software engineer on Etsy’s infrastructure and development-tools team. He loves automation, monitoring, documentation, and simplicity. In previous lives, he has worked in systems and network administration, on connecting chemical plants to IT systems, and as an embedded-systems networking engineer. Things he thoroughly enjoys when not writing code include coffee, breakfast, TV shows, and basketball.

The days of running a website on a single computer are basically over. The basic architecture of any modern site consists of myriad systems that interact with each other.

In the best case, you start out with a Web front end and a database representing your initial feature. And then you add things: features, billing, a background queue, another handful of servers, another database, image uploads, etc. And of course you also hire more people to work on all of this.

At this point, you realize you work in a complex socio-technical system. You did from the beginning, but it gets a little more obvious here. As interactions between components become more complex, understanding of how the whole system works shrinks, and you start to witness emergent behavior that you weren’t aware of before — behavior that can’t be explained by single components but that is a result of the interplay among them. Things start to break a lot more often.

Traditionally, it has never been great to have things break. You are suddenly confronted with a situation you thought wasn’t possible — or you would have put guardrails and protection in place to prevent it. Everybody is caught by surprise. You want the business to run, you want the site to be up. You are working hard to fix the problem while your amygdala is hijacking your brain. Maybe people are running around the office or are asking questions in your chat system. How could this have happened? Why didn’t we have tests for the code path that broke? Why didn’t you know about that failure case? When



is the site going to be back up? Eventually, everything recovers and your Web application nicely hums along again. It’s time to talk about what happened.

In what has been called the “old view” — the traditional aftermath of such an outage — we would come together and yell at the person who was maintaining the system that broke. Or the one who wrote the code. Or the one fixing it. Surely, if they had done a better job, we wouldn’t have had that outage. This usually ends in an unproductive meeting that makes everybody feel worse. On top of that, you don’t even find out what really happened.

This is why in the new approach to systems safety — the foundation of what is commonly called “blameless postmortems” — we take a different route. The fundamental difference is that we don’t stop at human error for why something broke. Humans don’t generally come to work to do a bad job. Nobody sets out to bring the site down when they come into the office in the morning. The fundamental assumption we must make when we go into a blameless postmortem is that whatever decisions people made or whatever actions people took made sense at the time. They believed they were improving something, fixing something, deploying a change that was flagged off in production, deleting a file that wasn’t referenced anymore. If we could go back in time to ask that person if their change would break the site, they would tell us, “No way.” Being able to point to that change in the debriefing as the root cause of the outage is a function of hindsight bias.

The Austrian physicist and philosopher Ernst Mach said in 1905 that “Knowledge and error flow from the same mental sources; only success can tell one from the other.” Any action can be judged a success or a failure, depending on the outcome. Focusing on the action itself and the person as the perpetrator doesn’t give us any advantage in learning how the incident came to be. Even more, a person who feels judged will not readily talk about all the influences that went into the decisions they made. They will try to get out of that meeting as fast as possible.

This is why we focus not on the action itself — most often the most prominent thing people point to as the cause — but on exploring the conditions and context that influenced decisions and actions. After all, there is no root cause. We are trying to reconstruct what happened as close as possible, a challenge that is only made harder by the brain’s tendency to misremember. If one person can do what seems in hindsight to be a mistake then anyone could have done it.

We can punish the person who pushed the button, and the next person who does it, and the next person, and the next person. Or we can try to find out why the action made sense at the time, how the surrounding system encouraged or at least didn’t warn about impending problems, and what we can do to better support the next person who might push the button. We can have an open and welcoming exchange in which we treat the person who supposedly broke the site as the actor closest to the action and thus most knowledgeable about the surprise that just affected us. This is one of the biggest opportunities we have to learn more about how our socio-technical system behaves in reality and not just in theory.

Blameless postmortems at Etsy
At Etsy, we strive to make postmortems as open and welcoming as possible. This means there are no restrictions on who can attend (save for how many people can fit into the conference room, and it’s not rare that we have standing-room-only postmortems). There are only rules about the minimum number of people who need to be there. As this is a lesson, it doesn’t make sense to talk about what happened if the people most knowledgeable about the past aren’t in the room, so everyone who worked on fixing the outage, helped communicate, got paged, or otherwise contributed to the situation needs to be there. This is to ensure that we learn the most we can out of what happened, and that we get the most accurate timeline possible of events.

Usually, after an outage, the people closest to the action start an entry in our Morgue postmortem tracker. It’s a simple PHP application we open-sourced that keeps track of all our postmortems and their associated information. It stores the times the outage started and ended, and has fields for a timeline, IRC logs, graphs, images, remediation items (in the form of Jira tickets for us), and much more. While everything is still somewhat fresh in people’s minds, the timeline is prepared with all the IRC communication (Morgue automatically pulls all the logs from start to end times for configured channels) and the graphs people were looking at to make sense of what was going on.

At this stage, it’s important to add as much information as possible and not cut things prematurely. The timeline will already be biased, influenced by how the people preparing the postmortem experienced the event. We want to keep this bias as small as possible by including as much information as possible. We also set up the event (we have a specific calendar just for postmortems), book a conference room, invite all participating parties, and send an announcement to our postmortems mailing list inviting anyone to come. We also request a volunteer facilitator (more on that later) via our facilitators mailing list. Handily, Morgue can automatically schedule a meeting and request a facilitator. We’ve tried to make it as easy as possible to set up a postmortem and we’re still working on making it easier.

The meeting
Once everyone in the meeting room is ready to discuss what happened, the facilitator establishes parameters and guides the discussion. We begin by making sure everyone understands the gist of this meeting. This is important for people who have come to a postmortem for the first time and a good opportunity to reiterate it for everyone. We state very clearly that this is about learning from surprises. It’s important to establish that most of our time (30-40 minutes in a 60-minute meeting) will focus on reconstructing the timeline, which is critical to get right as we will base any possible remediation on this timeline. If we don’t get the timeline right, or at least as close as possible to what really happened, our remediation items will be less effective and the overall success of the postmortem will decline.

It is also important to mention that no matter how hard we try, this incident will happen again — we cannot prevent the future from happening. What we can do is prepare: make sure we have better tools, more (helpful) information, and a better understanding of our systems next time this happens. Emphasizing this often helps people keep the right priorities in mind during the meeting rather than rushing to remediation items and looking for that one fix that will prevent this from happening next time. It also puts the focus on thinking about what tools and information would be helpful to have next time, and leads to a more flourishing discussion instead of the usual feeling of “well, we got our fix, so we’re done now.”

With those clarifications stated, we start to talk about the timeline. Here, we are mostly following the IRC transcripts. As we use IRC heavily at Etsy, it contains the most complete recorded collection of conversations. As we go through the logs, the facilitator looks for so-called second stories: things that aren’t obvious from the log context, things people have thought about that prompted them to say what they did, and even things they didn’t say. The facilitator looks for anything that could help us better understand what people were doing at the time — what they tried and what worked. The idea here again is to build a complete picture of the past; focusing only on what we see when we read the logs gives us an impression of a linear causal chain of events that does not reflect the reality.

Facilitator’s role
The facilitator’s role is to guide the discussion and make sure we don’t fall back into “old view” patterns of thinking, something that happens often as it’s the approach to debriefings that most of us have learned. The facilitator should look out for two of the most important and fortunately also most practical things: hidden biases and counterfactuals. The human mind is full of cognitive biases, and there are enough to fill a whole list on Wikipedia. While it’d be beyond our scope here to go into detail about all of them, we will discuss some common examples that are comparatively easy to look out for, and look at the best ways to work around them.

Beware biases
One of the most frequent biases we encounter is hindsight bias, sometimes called the “knew-it-all-along effect”. Hindsight bias is the tendency to declare an event predictable when it is discussed after the fact. This often occurs because the debriefing turns up new information that wasn’t available to the people involved when the incident was going on. Although it might seem obvious to a participant in the debriefing why a deployment might break the site, this wasn’t obvious to all at the time. In this moment, the facilitator’s role is to deflect the bias and point out that it’s based on information that was unknown at the time. This doesn’t mean this is useless information; we will and should incorporate new information in the discussion and in remediation items at the right time. But during the timeline discussion, this habit fuels hindsight bias, and it’s not useful for learning what really happened.

Another common bias is confirmation bias. It describes the tendency to look for and favor information that supports one’s own hypothesis. It’s not surprising that this is so common — it feels much better to be right than wrong. Confirmation bias can skew recollection and produce a one-sided version of the past that’s biased by the memory and hypothesis of a single person or a small group. It doesn’t allow for the full story and will lead to a misunderstanding of what really happened; thus lessons and remediation items will not effectively improve any actual systemic shortcomings. Lessons tainted by confirmation bias most often lead to a feel-good fix that will be ineffective or even adversarial the next time a similar situation
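The Morgue record described earlier (start and end times, IRC channels, a timeline, and remediation tickets) is essentially a small data structure. Morgue itself is a PHP application and its real schema is not given in the article, so the following Python sketch is illustrative only; all field and method names are assumptions based on the description above.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class TimelineEntry:
    """A single event in the reconstructed timeline."""
    at: datetime
    source: str  # e.g. "irc", "graph", "deploy log"
    text: str


@dataclass
class Postmortem:
    """Sketch of a Morgue-style postmortem record (field names are illustrative)."""
    title: str
    started: datetime
    ended: datetime
    irc_channels: list = field(default_factory=list)
    timeline: list = field(default_factory=list)
    remediation_tickets: list = field(default_factory=list)  # e.g. Jira keys

    def duration_minutes(self) -> int:
        return int((self.ended - self.started).total_seconds() // 60)

    def add_event(self, at: datetime, source: str, text: str) -> None:
        """Add an event and keep the timeline in chronological order,
        regardless of the order people remember things in."""
        self.timeline.append(TimelineEntry(at, source, text))
        self.timeline.sort(key=lambda e: e.at)


# Example: an outage record with two IRC events added out of order.
pm = Postmortem(
    title="Image upload outage",
    started=datetime(2015, 11, 3, 14, 0),
    ended=datetime(2015, 11, 3, 14, 45),
    irc_channels=["#ops", "#deploys"],
)
pm.add_event(datetime(2015, 11, 3, 14, 20), "irc", "rolled back deploy")
pm.add_event(datetime(2015, 11, 3, 14, 5), "irc", "alerts firing for uploads")
print(pm.duration_minutes())          # 45
print([e.text for e in pm.timeline])  # events in chronological order
```

Keeping the timeline sorted on insertion mirrors the point made above: people add what they remember in any order, while the record keeps a single chronological view for the meeting.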



arises. An example is the addition of an alert that doesn’t actually check the health of a relevant component — and the next time someone is in this situation, everything may appear to be working as the alert is not firing even though things are breaking out of sight.

A facilitator has to look out for confirmation bias, which often appears as following one line of thought in the timeline for which there is little evidence or information, thus confirming a hunch or opinion instead of basing the timeline on facts. It is important to keep the timeline discussion on track and rooted in what we can see in logs, graphs, and chat transcripts. This way, we can try to make sure everybody’s contribution to the timeline is discussed evenly and we are not trying to follow only one person’s perspective of the event.

A third common bias to look out for is outcome bias, the tendency to evaluate the quality of a decision based on its outcome instead of the information the person had at the time. An example of outcome bias is differently judging two people who have done the exact same thing (e.g. pushing the button to deploy the site) based on whether or not their change broke something. This is especially harmful because it doesn’t focus on what a person knew at the time and thus on the whole decision-making context at the time. It’s akin to that often sought but rarely available skill of predicting the future. If we don’t try to make sense of the past based on actually available information, there is hardly any way through which we can make improvements for the actors at the sharp end.

A good way for the facilitator to look out for outcome bias is to perceive whether or not people are asking about the thought process leading to decisions and what sources of confidence they thought out for their actions — focusing on the action and how it made sense rather than the outcome of any decisions. Outcome bias is extremely common in the actors closest to the action. Confronted with poor outcomes, people are quick to judge their past actions as bad choices. Here the role of the facilitator is especially important, as they will remind actors how actions made sense at the time and would have been executed by anyone else in the same manner.

It’s important to keep the debriefing a safe space in which to share information so we can all learn. We have probably all witnessed the old view of approaching outages and how it leads people to feel unsafe in a debriefing. We are often enough victims of our own biases when we look back on our actions or fear judgment by our peers. Deconstructing biases as soon as they arise is paramount in this endeavor to make a debriefing a place of trust and learning.

Counterfactuals
In addition to looking out for specific biases, another practical way to keep the discussion efficient and targeted is to watch out for counterfactuals, or statements that are literally counter to the facts of what happened in the past. Counterfactuals are as easy to spot as they are hard to avoid. Common phrases that indicate counterfactuals are “they should have”, “she failed to”, “he could have”, and other phrases that describe a reality that didn’t happen.

Remember that in a debriefing we want to learn what happened and how we can supply more guardrails, tools, and resources for the next person in this situation. If we discuss things that didn’t happen, we are basing our discussion on a reality that doesn’t exist and are trying to fix things that aren’t a problem. We all are continuously drawn to that one single explanation that perfectly lays out how everything works in our complex systems. We want to believe that if someone did that one thing differently, everything would have been fine. It’s so tempting. But it’s not the reality. The past is not a linear sequence of events; it’s not a chain of falling dominos that you can stop by taking one away. We are trying to make sense of the past and reconstruct as much as possible from memory and evidence. And if we want to get it right, we have to focus on what really happened, and that includes watching out for counterfactuals that describe an alternate reality.

Guidance
Throughout discussion of the timeline, the facilitator acts as a guide. The facilitator wants to ask clarifying questions about why people took different actions, what they assumed, the intentions of actions, what sources of confidence they thought out, what tools they had to improvise, which graphs they looked at for confirmation, why using one tool over the other made more sense at the time, etc. Especially when multiple people were debugging the same thing, it is extremely useful to ask those questions, as everyone has a different view of the world. While one engineer will debug a given problem with strace and tcpdump, another will write a small Perl script to do almost the same thing. This is a great opportunity to detect common things that were missing and subsequently improvised by multiple engineers, which can surface in the discussion to come. A designated note taker is a good idea here: all of these things should be noted without commentary or bias. A facilitator will be completely occupied with following the timeline and the thought processes of people involved and may not be able to record detailed notes. Having those notes, however, is extremely important for the discussion following the timeline as they provide another set of facts on which to base improvements and remediations.

“At Etsy we strive to make postmortems as open and welcoming as possible. We have to make sure we learn as much as possible from outages.”

Once we have arrived at the end of the timeline, most often indicated by the end of the chat transcript, everyone has to agree on it before the meeting can continue. Accuracy and agreement are extremely important as we will base the upcoming discussion and any remediations on that timeline.

The discussion
Following the timeline review, we hold a discussion to go deeper into the details and information we uncovered during the timeline reconstruction. This is the time to refer back to observations and notes and to explore any lack of tooling or resources and room for improvement. While backed by the timeline, the discussion is not restricted to only that content; there is helpful context we can pull in too.

This discussion doesn’t follow a strict format but is guided by helpful questions like “Did we detect something was wrong properly/fast enough?”, “Did we notify our customers, support people, users appropriately?”, “Was there any cleanup to do?”, “Did we have all the tools available or did we have to improvise?”, and “Did we have enough visibility?”. If the outage continued for a longer period of time, we can add “Was there troubleshooting fatigue?” and “Did we do a good handoff?” Some of those questions will almost always yield the answer “No, and we should do something about it.” Alerting rules, for example, always have room for improvement — they were created under a set of assumptions that also influenced the creation of the very system being monitored, so it is almost certain that they do not account for all emergent behavior. Ideally, specialists for the system that was broken as well as specialists for your alerting system will both be present. This is a great opportunity to quickly discuss the feasibility of adding an alert to aid detection in the future.

There is no need (and almost certainly no time) to go into specifics here. But it should be clear to the people in the room what is worthy of a remediation item and noted as such. Another area that can almost always use some improvement is metrics reporting and documentation. During an outage, almost certainly someone who was digging through a log file or introspecting a process on a server found some helpful information. Logically, we should make this information as visible and accessible as possible. So it’s not rare that we end up with a new graph or a new saved search in our log-aggregation tool that makes it easier to find that information next time. Once easily accessible, it becomes a resource for anyone trying either to find out how to fix the same situation or eliminate it as a contributing factor to the current outage. At the same time, discussing a certain system will surface the experts in using and troubleshooting it, and this is likely an opportunity for those experts to document their knowledge better in a runbook, alert, or dashboard. Here is an important distinction: this is not about an actor who needs better training; it’s about establishing guardrails through critical knowledge sharing. If we advocate that
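A facilitator catches counterfactuals by ear, but the telltale phrases listed in the Counterfactuals section are regular enough that a note taker could also flag them mechanically in written notes. A minimal sketch follows; the phrase list comes from the article, while the function, the extra "if only" marker, and the sample notes are assumptions for illustration.

```python
import re

# Markers that typically signal a counterfactual: a statement
# about a reality that didn't happen. The first three come from
# the article; "if only" is an extra, assumed marker.
COUNTERFACTUAL_PATTERNS = [
    r"\bshould have\b",
    r"\bfailed to\b",
    r"\bcould have\b",
    r"\bif only\b",
]

def flag_counterfactuals(notes):
    """Return the note lines that contain a counterfactual marker."""
    combined = re.compile("|".join(COUNTERFACTUAL_PATTERNS), re.IGNORECASE)
    return [line for line in notes if combined.search(line)]

# Hypothetical debriefing notes mixing facts with counterfactuals.
notes = [
    "14:05 alerts fired for image uploads",
    "She should have checked staging first",
    "14:20 deploy rolled back",
    "He failed to see the config change",
]
print(flag_counterfactuals(notes))
# → the "should have" and "failed to" lines
```

Flagged lines are not deleted; they are prompts for the facilitator to redirect the discussion back to what actually happened.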



people only need better training, we are again putting the onus on the person to have to know better next time instead of providing helpful tooling to give better information about the situation. With access to information, a person can make informed decisions about what actions to take.

Remediation items
After the discussion is done and everybody has shared ideas about improvements to the current system, it’s time to talk about remediation items. Remediation items are tasks (and most likely tickets in your ticketing system) that are specifically about remedying the item; each has to have an owner. While writing those tickets, it is often useful to adhere to something like the SMART criteria for specification and achievability. Especially after having uncovered a potentially scary part of our system, we want to refrain from complete rewrites or wholly replacing a specific technology.

Remediation items should be time-constrained, measurable, relevant to the actual problems at hand, and fix a very specific thing. It’s not helpful if an engineer ends up with a high-level ticket like “make the webserver faster,” or something that can’t reasonably be achieved in a constrained amount of time like “rewrite the data layer of our application.” A good rule of thumb is that we should be able to complete one remediation item within 30 days. We may finish many sooner or have some still open after six months. This is okay and simply means that we have to figure out if our time constraints make sense and whether or not a remediation item that has been in the queue for six months is a good and sufficiently specific task.

Remediation items are a way to document surprises we have uncovered during the postmortem and to fix shortcomings in the system to support the operator next time something similar happens. They are not able to prevent the future from happening and they are not a guarantee that a particular outage will never happen again, but they help equip actors and enable appropriate responses.

Summary
This is a quick rundown of how we implement blameless postmortems at Etsy. Over the last five years, we have iterated on this process quite a lot but it has been working great for us. It’s important to keep in mind that this is not a process written in stone. A lot of inspiration for our process has come from academic work as well as books like Sidney Dekker’s Field Guide to Understanding “Human Error”, which is a great start if you are looking to dive into the topic. But we are also constantly thinking about how to make it better and where it needs improvement. Though we establish a process, no two postmortems are the same. We generally have a one-hour meeting for our debriefings and split it up as 30 to 40 minutes of timeline discussion, 10 to 15 minutes of discussion, and five to 10 minutes of writing down remediation items.

While this often works well, it sometimes doesn’t work at all. Debriefings can illuminate an incident more complex than originally thought. Remediation items can end up fixing the wrong thing or be deprecated by a change in the system before they’re even implemented. We’ve run out of time before getting to remediation items and had to schedule a follow-up, we have (rarely) been done after 45 minutes, and we’ve had postmortems scheduled for two hours that still ran long. We have a general rule to implement remediation items within 30 days but have had unhelpful tickets closed after a year of inactivity.

Altogether, it’s clear: it would be ludicrous to think we can estimate and fit problems of arbitrary complexity into a fixed frame. The key to success is establishing a process that is adaptable, and continuously revisiting it to ensure its helpfulness in its current form. We often have other facilitators sit in on postmortems and then give the facilitator feedback after the meeting. We write tooling around how we want the process to look. And we try hard to be as prepared as possible for the next outage, as the next outage will come and there’s nothing we can do to prevent it. And that’s fine. We just have to make sure we learn as much as possible from them.
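The 30-day rule of thumb for remediation items described above is easy to check mechanically against a ticket queue. A sketch under stated assumptions: tickets are plain (key, opened-date) pairs standing in for a real ticket-system query, which the article does not specify.

```python
from datetime import date, timedelta

def stale_remediation_items(tickets, today, limit_days=30):
    """Return keys of tickets open longer than the limit
    (default: the 30-day rule of thumb).

    `tickets` is a list of (key, opened) pairs; a stand-in for
    querying a real ticketing system such as Jira.
    """
    limit = timedelta(days=limit_days)
    return [key for key, opened in tickets if today - opened > limit]

# Hypothetical queue state on 2015-11-03.
tickets = [
    ("OPS-101", date(2015, 9, 1)),    # open ~2 months: stale
    ("OPS-102", date(2015, 10, 25)),  # open ~1 week: fine
    ("OPS-103", date(2015, 10, 1)),   # open ~1 month: stale
]
print(stale_remediation_items(tickets, today=date(2015, 11, 3)))
# → ['OPS-101', 'OPS-103']
```

As the article notes, an item past the limit is not automatically a failure; the output is a prompt to decide whether the constraint, or the ticket, needs revisiting.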



Read online on InfoQ

How Different Team Topologies Influence DevOps Culture

Matthew Skelton has been building, deploying, and operating commercial software systems
since 1998. Co-founder and principal consultant at Skelton Thatcher Consulting, he specialises in
helping organisations to adopt and sustain good practices for building and operating software
systems: continuous delivery, DevOps, aspects of ITIL, and software operability. He is co-editor
of Build Quality In, a book of Continuous Delivery and DevOps experience reports.

The relationship between teams, individual motivation, and effectiveness of software delivery is something I have been interested in for several years, going back to when I was team lead on a large, multi-supplier programme of work to rebuild a key software system for a major financial institution in London.

Since then, I have worked on several software systems for organisations with differing team configurations and team cultures, and the relationship between team configuration — "topology" as I call it — and organisational capability has become something of a fascination for me.

In my talk at QCon London in March 2015 on continuous delivery, I described how we've found through working with our clients that the choice of tooling for DevOps should really be informed by Conway's law to produce the best outcomes for organisations. Conway's law — brilliantly described by Rachel Laycock in the book Build Quality In — is the weirdly baffling-but-believable observation by Mel Conway in 1968 that "organisations which design systems… are constrained to produce designs which are copies of the communication structures of these organisations". Given that an IT organisation itself is a kind of system, it follows from Conway's law that the topology of that system will be shaped by the kinds of communication that we allow or encourage to take place. There is increasing evidence that Conway's law is hard to bypass.

Since 2013, a few others and I have been collecting and documenting different team topologies used by organisations to make DevOps work for them — see http://devopstopologies.com/ for the current catalogue of team topologies. We have found these patterns to be useful for clients grappling with the changes to teams needed to make DevOps work, because the topologies help us to reason about the kinds of responsibilities and cultures that organisations need to have to make the different team structures work.
No single DevOps culture

It has become increasingly clear to me over the past few years of work with many different organisations that the idea of a single, identifiable DevOps culture is misplaced. What we've seen in organisations with effective or emerging DevOps practices is a variety of cultures to support DevOps. Of course, some aspects of these different cultures are essential: blameless incident post-mortems, team autonomy, respect for other people, and the desire and opportunity to improve continuously are all key components of a healthy DevOps culture.

However, in some organisations, certain teams collaborate much more than other teams, and the type and purpose of communication can differ from that in other organisations. In fact, we've sometimes seen the need for some organisations to reduce the amount of collaboration between certain teams in order to maintain a separation between logically distinct parts of the whole computer-human system. This is driven by the need to anticipate Conway's law (the so-called "inverse Conway manoeuvre").

In the 20 or so organisations we've worked with over the past few years to develop DevOps practices, three team patterns or topologies stand out as being the most commonly used:
• infrastructure as a service ("Type 3" on devopstopologies.com),
• a fully-shared responsibility ("Type 2"), and
• a site-reliability-engineering (SRE) team ("Type 7").
Other team topologies also work well for some organisations, but these three are the patterns we have seen most frequently. Let's explore the differences in culture between these topologies.

Fig. 1: Organizations with several different products and services, with a traditional ops department, or whose applications run entirely in the public cloud are suitable for Type 3 topology.

Fig. 2: Type 2 topology suits organizations with a single main Web-based product or service.

Infrastructure as a service

Say we know that the virtual-environment provisioning capability needs to be consumed "as a service" by application-development teams; to prevent Conway's law from driving too much coupling between dev teams and the infrastructure team, we have recommended limiting communication between the dev teams and the infrastructure team.

In this case, we anticipate that the infrastructure team members are somewhat deliberately isolated from the application-dev teams and would share less with them than internally amongst themselves, at least on the level of code and tooling. (We'd still want shared lunchtime pizza sessions across the teams to advocate for role rotation and learn about new approaches.)

The culture here would see sharing of some infrastructure-level metrics between the infrastructure team and the dev teams but, as with public cloud, only limited interaction between the infrastructure providers and the devs.

We build it, we run it

Conversely, in organisations with end-to-end product-aligned or KPI-aligned teams that own the whole stack (including infrastructure), infrastructure people should work in close collaboration with application developers and testers, and rightly so: anyone on the team might get woken at 2 a.m. to attend to a live service incident.

Team members with different skills thus work closely together, drawing on each other's strengths for different aspects of delivery and operation. Such a "we build it, we run it" team would need to be independent of a neighbouring team working on an unrelated subsystem or service, probably to the extent of using different tools for application code, testing, build, or provisioning (if needed). Thinking again of Conway's law, we set up the work environment for minimal cross-team collaboration and communication.

Fig. 3: Type 7 suits organizations with a high degree of maturity or rigorousness with respect to operational excellence.

Site reliability engineering

The culture is again different for organisations that use an SRE team in the Google model (sometimes called WebOps). The SRE team is willing to take on all production responsibility (on call, incident response, etc.) — as long as the software produced by the dev team meets stringent operational criteria.

Fig. 4: Type 5 can function as a precursor to Type 2 or 3 topologies, but beware the danger of a permanent DevOps team preventing communication between dev and ops.

The dev team's collaboration with the product development team is chiefly limited to helping meet the operational criteria and providing feedback on run-time behaviour via metrics, incident reports, etc. The dev team is deliberately isolated


from some aspects of how the software runs in production to allow it to focus on new features, although this needs a high degree of maturity from product owners to ensure that operational criteria are properly prioritised.

The effect of team topology on DevOps culture

To the extent that in a successful DevOps-enabled organisation all teams will pull together towards a common goal, clearly there is a kind of overarching DevOps culture at work (the respect, autonomy, lack of blame, etc. mentioned earlier). However, in these three team models in which DevOps flourishes, the nature of each DevOps culture on the ground is somewhat different, and we are wise to understand and anticipate the different feel of each of these models and cultures. What works for one e-commerce company may not work at all for a second e-commerce company, due to differences in skills, motivation, technology, or even personalities!

Choose a team structure to fit a culture?

Some organisations are unfortunately crippled with inter-team rivalries, conflicting goals, poor leadership, and politics (most of us have worked at one or more of these places, I guess). The more perceptive of these organisations are looking to adopt practices like continuous delivery and DevOps in order to address their poor performance, but they typically have major difficulties in making team changes due to the underlying conflicts. In one organisation, we found that the project managers had a financial incentive to deliver 20 story points a month while the IT operations team had a financial incentive to close live incident tickets within a fixed time: unsurprisingly, this led to all kinds of weird behaviours and conflicts.

For these kinds of organisations, unless there is bold leadership from the top, it makes sense to adopt a team topology for DevOps that fits most easily with the current team incentives and drivers. In effect, we get IT leaders to ask themselves which team topology is best for their organisation given the current inter-team culture.

For instance, if the dev team is driven strongly by capital-expenditure (CapEx) product spending and the ops team by an operational-expenditure (OpEx) "business as usual" budget, perhaps the SRE (Type 7) model might work: the contract of operability the SRE team requires before taking on a piece of software acts as a good division between the CapEx and OpEx worlds.

On the other hand, if a newly recruited head of technology wants a fully embedded DevOps model but realises that the dev and ops teams are too far apart in skills and focus to immediately begin collaboration, perhaps enabling a temporary DevOps team for a short period of time (12 months? 18 months?) can foster a culture of collaboration.

We've seen organisations where the sysadmins (ops) have been reluctant to adopt essential practices such as version control and test-first development of infrastructure code, and other companies whose developers believed that monitoring, metrics, or deployment was beneath them and so had little interest in operational concerns. In such cases, some sort of catalyst is needed before a DevOps culture can take hold: an internal or external team can help here.

This approach of choosing a team topology to fit the current culture needs to balance a desire to achieve a new, more effective DevOps culture with realism about where the teams currently are. Crucially, the team topology — and, therefore, the flavour of DevOps culture — should be seen as something that evolves over time as skills, technologies, capabilities, and business needs change.



Leadership, Mentoring, and Team Chemistry

Michael Biven is a lead systems engineer at Ticketmaster and a former firefighter who has taken
his experience in leadership and emergency management from the fire service and put it to use
in technology since 2000. Before Ticketmaster, he helped build amazing experiences for anyone
and everyone at Ticketfly, Change.org, Beats Music, and Apple.

Before servers and services became my job I worked in the fire service.
During that time, I received one of the best pieces of advice ever given
to me even though it wasn’t offered as such. It was a challenge from
one of the captains in the firehouse: “What did you do today that you
would be proud of?”
The fire service succeeds because it has a culture that's balanced between pulling your own weight and being team oriented. There is a natural pressure to not let the rest of your people down, and you do that by holding yourself to the highest standard possible.

Compare that with our profession, where we place the focus on technology and process. You may think that attention makes sense as it's focused on the tools we use to do our job, but there will always be some new technology to learn around the corner.

DevOps focuses our attention on the differences between the teams that make up an organization. The name itself highlights the tension that has existed between development and operations. It encourages empathy by forcing us to address communication and understanding the needs between the various teams that at times appear to be at odds with each other.

Military strategist John Boyd held that you consider "people, ideas, technology — in that order." Every few decades or so, some new technology or process is brought along as a way to reduce the number of firefighters needed to protect a community. But in the end, it always comes down to the same fact: to do the job, you need the people. And when we need people, we have


to have the soft skills to motivate and lead them. To have a successful team, we need those skills in place even before we think about unleashing them on the work ahead.

Leadership, mentoring, and team chemistry are the points for creating successful teams. As a firefighter, I was lucky enough to learn these by example from some amazing people. These ideas were not just part of our culture, they were expected — and they're what I want to share.

Leadership
Any team requires both members and leaders. Nothing earth-shattering there. But what makes a great leader? Most of us can probably think of someone like that, but we probably don't think too much about what made the person a great leader.

In my view, a great leader is someone who is consistent, fair, honest, direct, and aware of what is going on. The consistency provides a dependability that will help anchor a team. The fairness, honesty, and directness respect the team by treating the members as professionals and by clear communication. Awareness provides feedback on both the impact the team is making and on the influence of outside sources back onto the team.

What we cannot have is the rapid change and chaos that is common in the work we do reflected in the teams themselves. High turnover, micromanaging, and a high tempo of change in daily or weekly goals will kill a team.

To counter this, the leaders in each team who step up to take on different roles need to provide stability. This includes official (tech leads, managers, and directors) and unofficial (experienced engineers who've earned the respect of their peers) leaders. Teams only succeed amid chaos when they're founded on stability. A successful unstable team has either been lucky or is heading towards burnout.

Chet Richards in Certain to Win writes that Boyd described leadership as "the art of inspiring people to enthusiastically take action towards uncommon goals", the idea being that they'll of course accomplish these common goals on their own. As a leader, you become responsible […] remember that their successes are their own. It means that you will need to be a good listener while also providing honest and direct feedback. You need a calm and positive outlook on the work at hand no matter what difficult challenges you are facing. Your job is not to keep the system, site, or service up — it becomes keeping your team up.

In the firehouse, I saw two easy ways to present that calm and positive outlook. And like most things from civil service, they can be summed up as short phrases.

The first is "complain up and praise down". You should never complain about anything in front of or to your team. You'll only end up looking like a complainer and foster a more negative opinion of you. If you need to raise an issue, bring it up with whomever you report to. This also leads to taking care of your own problems. Don't let your actions or inactions bring anyone else the need to "complain up".

The second phrase is "praise in public and reprimand in private". Anytime you thank someone, do it publicly. This can be in an e-mail to the team or an "at name" message in a chat room. But if you need to bring up a negative issue, do so in private. At times, you may need help from the team to deal with an issue — anything from providing additional guidance on a technical task to helping someone adjust to working on the team. This is just an extension of taking care of your own people and of the team handling its own problems.

The other important role a leader performs is to clearly communicate to everyone the current situation, the challenges ahead, and the goals for not only the team but for the entire organization. This provides the common orientation you need if you want to take advantage of the initiative of individuals to reach your goals. If you have a team of capable people and have this type of clear and transparent communication going in both directions, they should be able to prevail against almost any problem they face.

Team chemistry
One of the best descriptions I've seen of what this looks like is from Gene Kranz, Apollo 13 flight director, speaking at the National Air and Space Museum on April 8, 2005:

There, we learn the difference between the "I" and the "we" component of the team, because when the time comes, we need our people to step forward, take the lead, make their contribution, and then when they're through, return to the role as a member of the team…. [The chemistry of a team] amplifies the individual's talents as well as the team's talent. Chemistry leads to communication that is virtually intuitive, because we must know when the person next to us needs help, a few more seconds to come up with an answer.

The fact that this type of chemistry has to be cultivated over time is a separate subject. But the short of it is that you can't have a team perform at that level without having the members do the work, respond to outages, and train together. That last bit is something that's sorely missing. An annual retreat or a hack day alone is not sufficient to address these points. You have to have them do the work together, encourage team lunches, and provide opportunities for them to share their knowledge among themselves.

This also means letting the team do the work when there is work to be done. Good leaders are willing to let other people's ideas take the lead. Let that mix of team skills and backgrounds take form.

I have a specific memory of attending a motor-vehicle accident where a college-aged drunk driver hit a couple in a pickup truck and sent the truck off an overpass to land upside down on the railroad tracks below. It was a horrific accident. I was on the first of two fire engines that arrived along with a chief and the other emergency services (EMS and police). The couple didn't survive the crash but we still needed to remove them from the vehicle. In most fire departments, the color of your helmet indicates your role. You're on an engine or a truck. You're an officer or you're a chief. What I remember is a bunch of white (chiefs) and red (officer) helmets trying to extricate the couple from the pickup while about half a dozen firefighters stood back watching. Let your people do the work.

Mentoring
The other way to help build that chemistry and to keep the institutional memory of an organization going is to make mentoring a key part of your teams.

This includes mentoring both new members and existing people who have stepped up into more senior or merely different roles. Even though you're no longer the new person, you can still be mentored.

In the fire service, this is a major part of bringing in new people. It provides support and introduces the new firefighters to their team. At the same time, this allows more experienced members to pass to the next generation the things you can only learn by doing the job.

There is a bit of inertia that discourages all of this, and we have to fight it. We tend to stay in our perspectives and technology silos without stepping outside to see how other people or tools have approached similar problems. We end up disregarding the results of smart people who have already addressed the issues we're working on. Then we have ageism, which makes us bleed knowledge as we lose peers who just happen to have years of experience.

Communication
As in any successful relationship, you'll need to have honest and direct communication. You will otherwise create a false working environment. If there is an issue, bring it up. The worst thing you can do is to tell someone nothing is wrong when in fact there is an issue. In the firehouse, we were more than just a team. We were in effect a family, and we wouldn't let any issue, hurt feelings, or complaint go unaddressed for long. We would either deal with an issue out in the open among everyone or address it in private. If you don't deal with the small stuff while it's small, you risk allowing it to fester into something much bigger — something that could require a more drastic resolution.

Each of these ideas is applicable not only to our individual organizations but to our field as a whole. Events like DevOps Days are amazing opportunities to share our experiences and to mentor someone for the little bit of time we have with them. Just



remember to not pass up the chance to get some mentoring of your own while you're there.

Recap
If you find yourself wanting to improve how your development team works, you will need people to lead and champion those changes, and mentors to share these ideas with your people. If you don't, then it will become just another attempt to use a different approach (think agile or lean) to improve the output of the team. And I think that many people reading this have had unsuccessful experiences when those types of changes come down from management as a "We're doing this now. Here is a class or book. Now make it work." approach.

The point is, none of this is magic or new. It's about doing the work needed and treating each other with respect. Our problem is that we make things way more complicated than they need be and we routinely fail to see the obvious in front of us.

If you like this type of approach, I'd suggest reading almost anything written by Leo D. Stapleton (retired Fire Commissioner and Chief of Department in Boston) or on Col. John Boyd. I also recommend the recent Team of Teams: New Rules of Engagement for a Complex World by Gen. Stanley McChrystal. All of these provide examples of teams succeeding in some of the most chaotic and demanding situations you can imagine. All of them provide examples of teams taking a more holistic attitude where more top-down approaches have failed.

I want to point out that these ideas don't exclude self-managed organizations. The fire service is very structured, practically paramilitary, but made up of several self-managed teams that each have a fair amount of flexibility. You could say they are the pinnacle of the results-only work environment and self-managed teams. As a firefighter, I never needed to look at a kanban or have a task assigned to me to know what needed to be done. The fire service, along with the other emergency services, succeeds by taking care of their people with these ideas — after that, everything else usually falls into place. It lets them answer the question, "What did you do today that you would be proud of?"




Infrastructure Code Reviews

Chris Burroughs works at AddThis where he focuses on the intersection of infrastructure, systems performance, and developer productivity. He writes or reviews code every day.

We were facing an issue with an open-source distributed database. Like many such systems, it uses a gossip protocol to coordinate which nodes are responsible for which data (and makes sure all nodes agree).
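For readers unfamiliar with the mechanism, the heartbeat-style gossip described above can be sketched in a few lines. This is a hypothetical illustration (the article never names the database, and real implementations add failure detection, versioning, and anti-entropy repair):

```python
# Toy sketch of gossip-style heartbeat dissemination (illustrative only).
# Each node tracks the highest heartbeat counter it has seen for every
# node; a counter that stops rising everywhere marks a suspected failure.

class GossipNode:
    def __init__(self, name):
        self.name = name
        self.heartbeats = {name: 0}  # node name -> highest counter seen

    def tick(self):
        # A live node periodically bumps its own counter.
        self.heartbeats[self.name] += 1

    def gossip_to(self, other):
        # The receiver merges by keeping the maximum counter per node, so
        # fresher information always wins and stale entries never regress.
        for node, beat in self.heartbeats.items():
            if beat > other.heartbeats.get(node, -1):
                other.heartbeats[node] = beat
```

After a few pairwise exchanges, every node converges on the same view of the cluster, which is what lets each node decide locally which peers are up.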

It also used gossip to detect failed nodes and to ensure that client requests went to nodes that were up. When no nodes responsible for a particular range of data were available, the database could not satisfy the request and instead returned an error to the client.

This system was working well for us until nodes started going haywire, returning a stream of availability errors on restart (the cause was eventually tracked down to a side effect of a new feature). Getting spooky errors on restart is particularly nerve-wracking because routine operations like upgrades or changing settings to help debug the problem become high-stress affairs.

Fortunately, we found a report of a similar problem with this open-source product and a solution that someone had sketched out. I cleaned up and rebased the patch and, after some internal review and testing, pushed it to our production clusters. The patch worked (!), fixing a vexing production problem before a long holiday. However, after posting the patch with the works-in-production seal of approval, someone pointed out that it would prevent the cluster from even starting in certain configurations. Simply reading the code and reasoning about the system had uncovered a fundamental flaw we had not discovered despite running it on dozens of production nodes.

Why review code?
Code review is usually pitched as a software "development" practice. However, as infrastructure — including configuration management, deployment scripts, provisioning manifests, packages, and runbooks — becomes code, those artifacts become testable as code. Testability is an obvious virtue whose benefits are now well accepted by practitioners. Just as infrastructure code serves as a common language for both development-focused and operations-focused teams, review is the common forum for code discussions and brings with it another set of virtues.

There are several ways that engineering organizations review the source code that they write. Historically, all of the source code might be printed out and then reviewed by experts before the code is run. Engineers might sporadically meet as a group to review proposed changes at a high level (perhaps doubling as an architectural, product, or deployment-procedure summary). Engineers might also audit an entire codebase for security or compliance reasons. These are all fine practices, but this article is about the type of code review where each change is continuously reviewed as it is developed, with results such as sending a patch to a mailing list or a GitHub pull request.

There are several straightforward reasons for reviewing code. Most directly, the process can discover bugs, which are dramatically cheaper to correct before the software enters production. Code review also improves the "bus factor" for components by ensuring that at least two people understand how each change works and, more importantly, why it was made. Code review helps break down small enclaves of a particular style ("this is obviously team X's code") and helps spread consistent best practices throughout an organization. Also, as a matter of professional development, it is a great way to learn from your peers.

Infrastructure code can be difficult to test as it is rarely dominated by pure functions and it almost by definition interacts extensively with the local OS and remote systems. While testing and code review never need be exclusive practices, review of infrastructure code has the significant advantage of always being applicable. Code may be difficult to test or require extensive scaffolding before the first unit test can even be written, but all changes can immediately be reviewed. Code review might even be necessary to get that test scaffolding to work for infrastructure code. At AddThis, we use a combination of ChefSpec (quickly running verification like "Was the correct instruction passed to the configuration-management system?") and Serverspec (create a VM and check "Was the right directory tree really created?") for testing our configuration-management recipes. Figuring out which one to use while satisfying other desirable properties like minimizing duplication takes much more nuance than a simple unit test. Debate over the proper test cases is often responsible for more commentary than the actual code.

Reviewing in practice
When first trying to introduce systematic code review, organizations often trip on when to insert the review. Should it be pre-push review (before the change has landed in the authoritative repository) or post-push (some time after the change has landed)? Since pre-push review happens before the change is deployed, authors have an incentive to craft small changes that can be readily understood (since the change will not be deployed until someone else understands it) and reviewers have a chance to make meaningful suggestions before the code runs in production (this is far more rewarding for the reviewer). However, adding a new and blocking step to the development process is a risky, potentially disruptive change. Post-push review still realizes many of the benefits of review in general and requires no initial changes to anyone's process.

The most important part is to review all changes, and adjust the ratio of pre-push to post-push review over time. An organization can start with mostly post-review (to minimize disruption) and gradually switch to mostly pre-review as attitudes towards review shift and tooling improves. Infrastructure repositories may also tend towards smaller, time-based state changes such as the membership of a cluster or the desired version of a package. These changes may have a different trade-off for the cost of blocking versus value of review than large Web services.

It may be tempting initially to formalize code review as a one-way street, with senior engineers pontificating to a captive audience of more junior engineers. This is a mistake. All engineers have some expertise in particular non-overlapping areas, and the empathy required to review code well is exactly the sort of non-technical skill that you want to encourage in your engineers. Matt Welsh, a well-known systems researcher and computer-science professor whose seminal SEDA paper even has its own Wikipedia page, described his first code reviews (by people young enough to be his students) as a life-altering experience that dramatically improved the quality of his work.

Emphasizing that code review is an exercise in empathy puts people in a positive mindset. It is perilously easy to fall into the trap of giving negative feedback along the lines of "This isn't how I would do it." That type of feedback isn't fun to receive and if acted upon might only make the code resemble the reviewer's own code style rather than make it any better. Instead, judge a proposed change on its own terms: Does it do what it says it does? Is the problem adequately diagnosed or is this a shot in the dark? Does the change actually solve the problem? Does the commit message have enough context for why this change was made so that it will make sense later?

That doesn't mean that there is no room for idiosyncrasies or styles. Code review can be a vehicle to promote whatever values your engineering team wants to emphasize. Perhaps your team really likes (well-commented!) tricks in the tradition of Hacker's Delight or, after repeatedly being burned, values production robustness. Whatever the values, code review is a way to reinforce them day to day.

Since code review is, by necessity, written down, it can engage the storytelling necessary to ingrain your culture. It is one thing to point out that maybe a function ought to be a bit more efficient, and it's quite another to tell a story about how a few lines of optimized bit twiddling resulted in huge real-world performance gains. Relating how a package-management conflict caused a painful weekend outage that was fiendishly difficult to debug is not only a good way […] these stories are relevant to the code at hand and not simply the most fun stories to tell.) Finding that your reviewers inadvertently write long tales is a great problem to have, but you should probably find a better home for them, such as an internal wiki or blog, than to keep them buried inside review comments.

Linting: More than just curly-brace wars
Most of the engineers I work with primarily write in Java or other JVM languages. Java does not have a reputation as a particularly terse language and it is not uncommon for its lines to extend far beyond the traditional 80-character limit. Java developers also tend to prefer IDEs that aggressively take care of formatting. For a little while, it was a running joke that, despite all of this, every IDE placed a bright-red vertical line at 80 columns by default and no engineer would ever disable the red barrier.

It has in the past also been common to configure a style checker as part of the build, but to only occasionally check its output. This was the case when I was preparing a patch for a project that had integrated linting into its review system. It felt a bit awkward to break up one line with a series of string concatenations, so I left it longer than the lint tool wanted. But the tool still stuck a bright-yellow "Lint Warnings TXT3 Line Too Long" on the diff. The feedback was quick and simple: "Please fix this lint warning." Initially, I didn't feel great about feedback over whitespace (even when self-aware of our follies, it's hard not to think code you just wrote is pretty slick) and I had, after all, considered the warning
to explain why a test case would and thought my taste and style
be an addition but also reinforc- judgment were decent. But as I
es the importance of testing in became more familiar with this
general. (Provided of course that particular project, I came to ap-

20 Patterns of DevOps Culture // eMag Issue 36 - Nov 2015


preciate the strict linting. There and pushing the checks into  the
were over 300 thousand lines of compiler. Review with tests pro-
code and they all looked the same. vides the confidence necessary for
More importantly, patch review systematic refactoring and fixes
never bogged down over style nits while each linting review in turn
(except when the first-time submit- improves the quality of reviewed
ter deliberately ignored the rules!). code.
As engineers start critiquing
each other’s patches (perhaps for Testing towards
the first time), someone inevita- continuous delivery
bly will disagree over whitespace, The final piece of infrastructure
curly braces, or some other pe- after linting is integrating testing
dantic style issue. If your company into the review flow. This carries
has not already agreed on a style a lot of the same basic benefits of
guide, this can trigger some lively linting (reviewers can see the re-
discussions. sults and feel certain everything
This discussion is a good ran correctly). But by inserting test-
thing for all of the expected rea- ing into an already asynchronous
sons — for example, that consis- process, you gain the ability to
tency makes code easier to read. do  even more  automated testing.
It also lets you capture all of those Integration tests, Serverspec tests
formatting rules (or at least all of (including spinning up multiple
them that are free of false posi- VMs), and data-based regression
tives) as code so that computers tests can all take far longer than is
can catch them quickly. This makes reasonable for a single developer
code review more efficient as re- to run on the workstation for ev-
viewers can focus on more inter- ery commit. Fortunately, running
esting things because linting has multiple expensive tests in parallel
already checked the formatting. and reporting the results is exactly
It’s also just plain healthy for an or- what continuous-integration sys-
ganization to know how to agree tems are good at.
about prickly cross-team issues Review, linting, and testing
that lack glamour. At AddThis, our combined will result in a self-rein-
Java style for opening-brace place- forcing cycle where trunk is always
ment was settled by a Mortal Kom- expected to be stable. Review (like
bat tournament (full disclosure: I linting and testing) is a check in the
lost). release pipeline, and when a diff
Moving towards automat- fails, the follow-up fix (bug fix in
ic mechanical checks also gets code, linting rules with fewer false
your team in the habit of system- positives, or more stable tests) can
atically making changes across also be reviewed. And if trunk is
your entire codebase. This might always stable, why  not  release af-
start with trailing whitespace, but ter every commit? The practice of
snowballs in value as it progress- small, frequent deploys has been
es to fixing entire classes of bugs well known since 10 deploys a day,
with agreed-upon static analysis as have the virtues: accelerated
settings. That project to which I time to market, easier analysis of
submitted a patch with consistent problems, and quicker feedback.
column length also checks the per- By emphasizing the importance of
missions of every file, undefined an always-stable trunk, continuous
properties in dynamic languages, delivery also reinforces the impor-
and arguments to printf-style func- tance of thoughtful code review
tions. Google is known for not only and adds another software quality
persistent refactoring of this sort flywheel.
but for taking it one step further

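As a rough sketch of that pipeline gate, the shell function below rejects a diff unless both the lint step and the test suite pass. The lint and test command strings are illustrative placeholders, not tools named in this article:

```shell
#!/bin/sh
# Sketch of the self-reinforcing gate described above: a diff only
# lands on trunk when linting and the test suite both pass.
# The command strings passed in are hypothetical placeholders.

run_gate() {
    lint_cmd="$1"
    test_cmd="$2"

    # Lint first: it is cheap and keeps style nits out of human review.
    if ! sh -c "$lint_cmd" >/dev/null 2>&1; then
        echo "REJECTED: fix the lint warning (or the rule's false positive) and re-review"
        return 1
    fi

    # Then the expensive tests that a CI system would run in parallel.
    if ! sh -c "$test_cmd" >/dev/null 2>&1; then
        echo "REJECTED: fix the code (or stabilize the flaky test) and re-review"
        return 1
    fi

    # With an always-stable trunk, every passing commit is releasable.
    echo "APPROVED: trunk stays stable"
    return 0
}
```

In a real setup the two arguments would be your project's own linter and test runner (for the Java projects described here, perhaps a Checkstyle invocation and `mvn test`; both are assumptions, not commands from the article). The point of the sketch is the ordering: cheap mechanical checks run before expensive tests, and a failure in either sends the diff back through review rather than onto trunk.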



Build Quality In: Book Review and Interview

 by Manuel Pais

THE BOOK AUTHORS


Matthew Skelton has been building, deploying, and operating commercial software systems
since 1998. Co-founder and principal consultant at Skelton Thatcher Consulting, he specializes in
helping organizations to adopt and sustain good practices for building and operating software
systems: continuous delivery, DevOps, aspects of ITIL, and software operability. He is co-editor
of Build Quality In, a book of continuous delivery and DevOps experience reports.

Steve Smith is an agile consultant and Continuous Delivery specialist at Always Agile
Consulting Ltd. Steve is a co-author of the continuous delivery and DevOps book Build Quality
In, a co-organizer of the monthly London Continuous Delivery Meetup group, a co-organizer
of the annual PIPELINE conference, and a regular conference speaker. Steve blogs at www.alwaysagileconsulting.com/blog.
 

Build Quality In is Steve Smith and Matthew Skelton's collection of experience reports (including their own) on Continuous Delivery (CD) and DevOps initiatives. The book features contributions from authors ranging from in-house technical and process leaders to external consultants who implement technical and organizational improvements.



Since the book focuses on getting experiences into the light (and into words), readers shouldn't expect a thoroughly structured or consistently edited piece of work. Instead, there's a welcoming mix of styles and prose akin to the diversity of organizations and working environments represented.

Reports range from pure CD to pure DevOps:
• Chris O'Dell (7digital) writes on end-to-end testing slowing down API delivery, and how rollback, monitoring, and blameless postmortems were keys to speed and moving from a monolith to a service-oriented system.
• Niek Bartholomeus (a large investment bank) explores tackling gigantic releases that have to negotiate complicated interdependence between applications in an extremely centralized organization.
• Rob Lambert and Lyndsay Prewer (NewVoiceMedia) look at how to reduce cycle time and act faster on customer feedback. Testing as an activity (instead of a time-consuming phase) and monitoring throughout the entire lifecycle helped achieve smaller releases.
• Phil Wills and Simon Hildrew (The Guardian) write about quick feedback loops from customer input via multiple deploys per day and how changing traditional roles (dev, QA, ops, and release managers) can bring about faster product-focused software delivery.
• Marc Cluet (consultant) notes how company size, from tech-savvy startups to risk- and pain-averse corporations, influences DevOps adoption. He also covers the causality between steps in DevOps adoption.
• Alex Wilson and Benji Weber (Unruly) discuss applying good software engineering and Continuous Delivery practices in a full-stack team. An incremental approach to monitoring, redesigning a monolith app, and canary releasing resulted in business growth.
• Anna Shipman (UK's Government Digital Service) tells about building a DevOps culture in a government environment by fostering trust among government agencies, showing work done, and understanding risks. A security perspective can become an ally in promoting CD practices.
• Amy Phillips (Songkick) looks at slimming the release process and moving forward as bottlenecks move along the delivery process.
• James Betteley (Fourth) describes moving an ops team from a firefighting modus operandi to a "scrumban" approach with high visibility and prioritization. Tracking daily interruptions along with backlog stories provided a clear picture of work in progress and increased predictability.
• John Clapham (MixRadio) writes on pragmatically embracing risk to improve cycle time and on the need to change culture and working habits to gain real benefits from DevOps and CD initiatives.
• Jennifer Smith (ThoughtWorks consultant) explores shortening the feedback loop from running systems by making delivery teams responsible for production support.
• Sriram Narayanan (ThoughtWorks consultant at a large ticket-booking website) holds forth on build and release automation, monitoring, and predictability and explains how effective changes in behavior require leading by example.
• Jan-Joost Bouwman (ING) looks at moving from a process-driven approach based on ITIL and CMMI frameworks to a content-driven approach that's kick-started by lean and supplemented by DevOps. He explains how internal DevOpsDays and other activities can promote cross-team understanding.
• Martin Jackson (UK Government's Digital Transformation) reviews the technical path to establishing trust in government software systems, namely via stable and predictable development and production environments.
• Rachel Laycock (ThoughtWorks consultant) says Conway's law is an obstacle to CD from an architecture point of view. Organizations swing from the extreme of enterprise architecture, where developers don't understand why they are doing things, to another extreme where fully autonomous teams care only about their own layers in the stack, thus causing integration nightmares.
• Matthew Skelton (a large ticket-booking website) discusses collaboration, improving the flow and visibility of work, and how technology may trigger cultural change.
• Steve Smith (Sky Network Services) examines growing a CD pipeline across an organization and how organizational structures can prevent cycle-time reduction.

InfoQ asked the authors about the process of producing the book and the state of CD and DevOps.



How did Build Quality In come to life? Why did you focus on practical examples instead of a more prescriptive book?

Steve Smith: Matthew and I had discussed writing a book together for some time, and just after the first PIPELINE conference in April 2014, we committed to doing it together. I had the idea of co-editing an anthology of articles, and Matthew had the idea of a CD/DevOps division. On the train home, I asked the awesome Chris O'Dell to be our first contributor, and Matthew came up with the Build Quality In title.
Matthew Skelton: Steve and I realized that there were many people we knew doing really great work in organizations that are more typical of where most people work than the excellent but unusual examples of Netflix, Facebook, Google, Etsy, and so on. We wanted to demonstrate that CD and DevOps are accessible to all kinds of different organizations and all kinds of different people. We went with a set of experience reports because "warts and all" is often more convincing than a theory-led approach, and also because there is no one-size-fits-all recipe for CD and DevOps — there are too many variables.

Agile practitioners often talk about building quality into software but you are talking about building quality into an organization. What's the difference, if any?

Steve: The idea behind the lean and CD principle of Build Quality In is to eliminate rework, which is an enormous drain upon lead times and productivity. Organizational problems such as poor inter-team communication or overly centralized authority often lead to rework that can be far more expensive than reworking software, and it's interesting how many Build Quality In stories talk about organizational change as a prerequisite for success.
Matthew: Organizations building differentiating software systems that want those systems to be effective over any reasonable timeframe need to address organizational agility if the quality of their software is to be sustained. It's simply not possible to build and evolve high-quality software in the longer term if the organization fails to address things like team motivational factors, software operability, on-call responsibility, etc.

What approach did you use to select the stories that made it into the book?

Steve: We had a wish list on a Trello board of CD practitioners we knew from London Continuous Delivery, PIPELINE, and the broader CD community. We explained to potential contributors our desire to highlight both the importance and inherent challenges of CD/DevOps and our promise to donate 70% of our royalties to Code Club, and asked for their involvement. The vast majority of people said yes, and we were really fortunate that both Dave Farley and Patrick Debois agreed to write forewords for us. We will always be enormously grateful for the time people gave up to participate.
Matthew: We wanted a broad cross-section of angles, organizations, cultural challenges, and people represented in the book. By and large, I think we achieved that, although most of the examples are Web-based, so desktop, on-premise, and embedded systems are not really represented — but I'm working to address that gap in other ways. :)

What were your favorite stories? Which ones helped you "see the light" on a particular problem, topic, or context?

Steve: I genuinely loved all of the contributions. Chris O'Dell talked about an API re-engineering that shrunk cycle times, Rob Lambert and Lyndsay Prewer talked about a company-wide vision that inspired a move from annual to weekly releases, Alex Wilson and Benji Weber talked about how they have enhanced extreme-programming practices, Anna Shipman talked about DevOps and the Badger of Deploy at the heart of UK Government... and I found out after 6,000 words that I had been trying to solve the wrong problem for three years.
Matthew: It's really difficult to pick favorite stories, as they are all so inspiring! I think my top three are the story from Chris O'Dell on 7digital's metrics-driven approach to CD, Anna Shipman's honest appraisal of DevOps within the UK Government Digital Service team, and John Clapham's experience of introducing DevOps at Nokia Entertainment. All three really show how the transformation they helped make was a real mix of technology and cultural or organizational change, not one or the other.



What are the major obstacles to successful DevOps and CD adoption?

Steve: People. Organizational change is the single biggest barrier to CD adoption.
Matthew: I've spoken to people from dozens of organizations over the past five years who've been trying to improve their delivery and operations capabilities. The biggest blocker to DevOps and CD I've seen is the lack of clarity on who should be doing what within an organization and how those responsibilities might need to change over the coming two or three years; I recently wrote about this, looking at how DevOps team topologies affect culture. There are many ways to make DevOps and CD effective, but closing our eyes and ears to the need to change team skills or responsibilities is not one of them!

Would you say that CD is easier than DevOps, since Continuous Delivery (the book by Jez Humble and Dave Farley) is rather prescriptive on which practices to adopt while DevOps is more of a work in progress for each organization, without a definitive definition?

Steve: I don't believe CD is easier than DevOps. Our main motivation behind Build Quality In was to show people how difficult CD and DevOps can be as well as the successes they can produce. I would say that CD and DevOps are both enormously contextual, and as a well-defined collection of practices and principles, CD is harder to misinterpret than DevOps.
Matthew: I would say that CD is easier to define than DevOps due to the Humble and Farley book, but not that CD is easier. The context — including teams, skills, technologies, end-customer relationships, regulatory demands, and market demands — really dictates how effective a team or organization can be with either DevOps or CD or both.

InfoQ: Despite the intangibility of DevOps, do you find common practices or patterns that those organizations with a healthy DevOps culture exhibit?

Steve: There are certainly common DevOps practices and patterns, and there is overlap with CD. I simply look for an environment in which people are free to talk to one another and collaborate on problems, regardless of role or team.
Matthew: Yes, organizations with a healthy DevOps culture place a strong emphasis on metrics-driven decisions, have a clear understanding of who does what and why, and avoid a blame culture, thereby encouraging people to be specific almost to the point of pride in how they screwed up. The five pillars of DevOps — CALMS — are equally emphasized, too, rather than a focus on only automation or metrics, for example.

If "culture eats strategy for breakfast", how can a DevOps culture ever take over a siloed culture?

Steve: CD and/or DevOps need to be powered by a business vision. There needs to be a clearly defined business problem that people can relate to, and one that people can see is exacerbated by existing silos within the company. From that, a CD and/or DevOps culture can spring, but it needs a lot of work to get it right.
Matthew: A DevOps culture can best emerge by simply demonstrating that it is more effective than a siloed culture. Use metrics to track deployments, uptime, performance, etc. and challenge the siloed culture to be more effective. However, some silos are actually useful: Google's SRE model for operating software is arguably an effective silo; it's just that the organizational conditions have been set up in order for that silo to work well.

Steve, along with Dave Farley, you were part of one of the first projects where CD practices and, in particular, the deployment pipeline took shape. Did you realize back then that you were doing things radically differently from most organizations?

Steve: The LMAX deployment pipeline I worked on with Dave between 2008 and 2010 was certainly a mature, high-quality pipeline, but CD practices have been around for some time. Dave and his co-author Jez Humble introduced CD practices to various ThoughtWorks clients in the mid-2000s, and in 2007, I worked on a team at Reed Elsevier that pushed new features to a website every few days. I would say CD is like extreme programming: it feels like things are moving slowly at times and does not seem that radical, but actually the lack of feature branching, manual regression testing, rework etc.



means you are delivering new functionality very quickly. When you are suddenly not doing CD, it becomes quite obvious how opportunity costs pile up around you.

Was it difficult or demotivating for you to work on projects with little or no concept of CD after that?

Steve: I guess I've avoided that problem. Between 2007 and 2014 I worked for companies that adopted CD practices to varying extents, and since 2014, I've run my own consultancy to help companies understand how they can adopt CD for themselves.

What has changed in the CD world in the past five years? How far are we from CD becoming a commodity like continuous integration?

Steve: The biggest changes in CD in the past five years have been: PaaS, e.g. Netflix on AWS; containerization, e.g. Docker; and build feature branching, e.g. GitHub Flow. All of those have led to the evolution of CD practices, and I think that's perfectly natural. Neither continuous integration nor CD can become commodities, as they are heterogeneous practices rather than homogeneous tools.

Your contribution to the book really brings home the interdependence between CD and DevOps (and other DevX relationships). Perfecting your pipeline did not significantly reduce cycle time because of organizational barriers. What would you do differently in a similar situation?

Steve: If I were in the same situation again, I would start with a shared consensus on a business problem and a CD-focused solution, rather than have one guy running around nagging people to adopt CD... and I would measure the whole value stream from the outset to make sure I didn't again miss the real constraint for three years.

You've written a lot about CD patterns and anti-patterns. Have you collected any DevOps patterns and anti-patterns as well?

Steve: I've not collected any DevOps patterns or principles, as my writing is based on experience and (shock, horror) I've not worked in a "DevOps organization". I've worked in plenty of companies in which developers, testers, and operations staff collaborated closely but I don't see anything reusable there other than reminding people to talk to each other. I have to admit I'm quite down on DevOps these days, and I think it jumped the shark somewhere in the past year or two. I think Patrick Debois et al. built an amazing global community over a five-year period and a lot of hard problems were shared, but everywhere I look I see DevOps tools, certification programs, etc. I'm somewhere between Matthew's famous "it's raining DevOps" comment and Dave Farley's gentle debunking of cargo-cult DevOps.

Matthew, collaboration, workflow, and visibility are at the core of the story you contributed, despite all the technical improvements you implemented and promoted. When can technology be the messenger for cultural changes and when not?

Matthew: Technologies that we've seen work well as messengers for cultural change are things that help to shine a light on how well (or badly) the production system works: metrics, monitoring, log aggregation, dashboards, and so on. These kinds of tools generally bring a greater awareness of problems with software systems, highlighting what needs improving and resulting in more useful conversations with less guesswork.

You mention printing the table of contents from Continuous Delivery and sticking it on a wall, which triggered many conversations. It sounds like this doubled as a cheap visual roadmap for continuous improvement. In comparison, do you reckon that large organizations often get little value from formal process-improvement projects that inherit the silo problems of their regular projects?

Matthew: The headings in the Humble and Farley CD book are great, and yes, we continue to use them as a simple CD roadmap for several organizations. The problem with the formality of many large-scale change programs is that they do not address some of the key contributors to success with CD and DevOps, including team empowerment and engagement, true end-to-end responsibility, and a rejection of deadline-driven project-based funding.

What's the impact on IT teams and individuals' motivation when organizations treat IT as a provider rather than a partner?

Matthew: The question of whether "partner" or "provider" is a better model for DevOps misses the point, I think. The crucial thing is: do we have trust in the organizations with which we work? As clients/customers or as suppliers, do we meet the needs of the other organization? Do we listen well enough to what they suggest? A little less JFDI and a bit more humility works well for both clients and suppliers in a DevOps context.

Can you expand on why you recommend program-based funding rather than project-driven budgets in the book?

Matthew: Project-based funding tends to compromise the quality of software, prioritizing instead an often-arbitrary deadline or face saving and bonus collecting for project managers and others involved in "delivery". The integrity of the product or service suffers as a result. The wider horizon offered by program-based or product-based funding means that the technical teams are given a view of the bigger roadmap in order to make more-informed implementation decisions. This often means that the quality of the software remains higher than with project-based funding, because the team has more context for short-term versus longer-term fixes or workarounds, and is empowered to clean up after an important delivery date.

Matthew, you wrote "NoOps != no operations and DevOps != developers running production". It sounds like in the IT world, we're still too often looking for shortcuts to problems and skipping the need for genuine collaboration. Do you agree?

Matthew: Too many businesses that rely on IT solutions still have no idea about what software really is and where the value is added; they think that once the commercial teams have specified a new feature or product, all the hard work is done and IT just needs to implement it. To be fair, we technologists have a poor track record at explaining why and how the building of software systems needs a much more collaborative approach, and Build Quality In is a small attempt to help rectify that problem. We need to remember that collaboration literally means "working together", and that working together does not simply mean a division of tasks from on high.

You have generously donated 70% of the book proceeds to Code Club, a not-for-profit organization that runs a UK-wide network of free volunteer-led after-school coding clubs for children aged 9-11. Why is this cause so close to your hearts?

Steve: I think it's widely acknowledged now that the IT industry is in the midst of a diversity crisis. We both wanted to contribute to Code Club because we feel it's important that children learn to program regardless of their gender or ethnicity.
Matthew: Code Club does amazing work in the UK to help get kids interested in and aware of software systems as things that they can create and run. I think it's really important that technology is something accessible to future generations, in every sense of the word. It's particularly good that over 40% of Code Club members are girls, because we desperately need more diversity in IT in the UK, and gender diversity is a good place to start.

Will there be a sequel to Build Quality In?

Steve: Possibly. I'm always happy to work with Matthew and I love engaging with the CD community to find out what people are up to. But not in 2015, and probably not in 2016.
Matthew: We're thinking about a possible follow-up book, but probably not until late 2016 or 2017. In the interim, we might expand on the experience of the contributors in the current book with additional blog posts or articles. However, if people are really keen, maybe we can be persuaded to do something sooner. :)


