
Security for Modern Engineering

Information Security & Risk Management
Microsoft IT
Published: 2016
Copyright Information
The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication.
Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft
cannot guarantee the accuracy of any information presented after the date of publication.
This white paper is for informational purposes only. Microsoft makes no warranties, express or implied, in this document.
© 2016 Microsoft Corporation. All rights reserved.

Contents

1 Acknowledgements
2 Foreword
2.1 Bret Arsenault
2.2 Sue Barsamian
3 Introduction
3.1 Setting the scope
3.2 The SDL is our foundation
3.3 The challenge of modern engineering
3.3.1 The modern engineer
3.3.2 The Microsoft IT model
3.4 Our journey
4 A closer look at the challenges
4.1 DevOps culture
4.2 DevOps and security
4.3 Additional requirements
4.3.1 Continuous assurance
4.3.2 Intelligent automation
5 Our approach
5.1 Knowledge management
5.1.1 CALM board
5.1.2 Technical Control Procedures
5.1.3 Guidance factory
5.2 Automation
5.2.1 Static security analysis
5.2.2 Dynamic security analysis
5.2.3 Runtime detection and prevention
5.3 Implementation
5.3.1 Static analysis
5.3.2 Fortify SCA and intelligent automation
5.3.3 Fortify SCA implementation process
5.3.4 Fortify SCA deployment architecture
5.3.5 Shortfalls and opportunities
5.3.6 VSTS integration
5.3.7 Dynamic analysis
5.3.8 WebInspect deployment architecture
5.3.9 Runtime detection and protection
5.3.10 Automation factory
5.4 Metrics focused on driving the right behavior
5.5 User experience
5.5.2 Taking security to engineers
6 Future of application security
7 Lessons learned
7.1 Partner with engineers
7.2 Focus on the willing
7.3 Be thoughtful about selecting technology
7.4 Build your process first, then focus on tools
7.5 Integrate your tools into the engineers' world
7.6 Build a relationship with your vendor
7.7 Be mindful of business impact
7.8 Keep up with changing technology
8 Conclusion
9 Appendix A: Resources
9.1.1 SDL
9.1.2 Modern engineering and DevOps
1 Acknowledgements
Authors

Anmol Malhotra
Talhah Mir

Contributors

Aaron Clark
Glenn Leifheit
Jonathan Griggs
Manish Prabhu
Shoham Dasgupta

Reviewers

Andrew Marshall
Brijesh Desai
Bruce Jenkins
Dave Christiansen
Karen Luecking
Michael Howard
Ralph Hood
Rob Polly

2 Foreword
2.1 Bret Arsenault
Corporate Vice President and Chief Information Security Officer, Microsoft

The pace at which business is moving today requires that technology be more agile, to keep up
with the rapidly evolving needs of companies and organizations around the world. Technology
companies need to ensure that security is keeping pace with the speed of software, and address
the security gaps created by moving to agile workflows. While security has always been a
primary focus for us at Microsoft, today's threat landscape demands that we adapt the way we
address security as a business. We work constantly to ensure that security is top-of-mind for
everyone at the company. It's clear that to build a strong security posture, we must engage
everyone from our engineering teams all the way through to our senior leadership.

Facing new pressures, modern engineering teams are leading the transformation to agile
development and are delivering what customers need, as they need it. With an agile
methodology, Microsoft IT provides the flexibility and speed with which solutions are released in
as short a time as operationally feasible. To properly land the value of these accelerated
development cycles, companies need to ensure that they have the right security processes and
automated tools in place to address the new risk exposure that is created by a high-speed
development environment. Importantly, leadership must also make sure we are creating a
security culture and driving the right behavior with engineers - enabling them to succeed, while
delivering the best possible products to our customers.

Microsoft's Information Security and Risk Management team (ISRM) has been fortunate to
partner closely with Hewlett Packard Enterprise (HPE) to accelerate some of our emerging
modern engineering security plans. Using HPE Fortify SCA to conduct static security analysis of
our applications, and HPE WebInspect for dynamic web application security testing, we are
taking the right steps to protect our development environment effectively and efficiently, as we
stay agile for business success.

2.2 Sue Barsamian
Senior Vice President and General Manager, Security Products, Hewlett Packard Enterprise

The rapid growth of the app economy and the increasing pressure to innovate has put the
software developer in the driver's seat in modern IT. Developers are now deeply involved in
every part of the software development lifecycle as the boundaries between software and
hardware continue to blur and infrastructure moves to the cloud. Developers are now
responsible for driving innovation and keeping up with the increasing need for a faster
time-to-market. This notion has challenged the traditional development lifecycle, pushing for
more agile processes and greater collaboration across development, QA, security, and
operations. Securing the software development process has never been easy, but in the midst of
such seismic shifts in software development, application security is more challenging than ever.

In this faster-paced new development lifecycle, security organizations must adapt to becoming a
natural part of the development process or they risk getting in the way. Even worse, they could
be left behind as applications become more complex and more vulnerable than ever. The
Microsoft ISRM team has taken a unique and aggressive approach to this challenge by
partnering with development organizations to build security into the process while staying true
to the discipline of the acclaimed Microsoft Security Development Lifecycle (SDL). By teaming
with us in HPE Security Fortify, Microsoft has enabled effective, unobtrusive application security
automation at scale that provably secures their applications and saves time and money during
development. We are excited to share the experience and lessons learned by bringing together
the world's largest software company and the leading application security solution. Together we
have built a world class Application Security program that could provide a model for helping
you secure the applications that run your business.
3 Introduction
3.1 Setting the scope
Microsoft’s ISRM organization, which is part of Microsoft IT, has a mission to ensure that all of
the company's information and services are protected, secured, and available for appropriate
use through innovation and a robust risk management framework. Microsoft is committed to
building and implementing best-in-class security programs and processes and is constantly
working to reduce exposure to cybersecurity risks.

ISRM supports Microsoft’s overall security mission by providing key security services that help to
protect Microsoft’s corporate systems, services, data, and users. The service lines through which
we deliver these services include risk management, threat and vulnerability management,
identity and access management, security and incident management, and security monitoring.

Across Microsoft IT and throughout the company, the ISRM team is continuously evolving the
security strategy and taking actions to protect key assets and the data for our organization. One
primary focus for the team is to protect line-of-business (LOB) applications for Microsoft IT.
ISRM drives the SDL for IT applications.

3.2 The SDL is our foundation


The SDL is a foundational framework for Microsoft, and it defines the basis for how we drive
security in our software engineering processes. This whitepaper will not delve into the details of
a software security assurance process such as the SDL, but instead, this paper will showcase how
we approach enhancing the SDL process in response to the rapidly shifting challenges that
security organizations face in today’s modern engineering landscape.

For more detailed resources related to the SDL model, including books and websites, see
Appendix A.

The SDL defines the standards and best practices for providing security and privacy for new and
existing LOB applications currently under development or being planned for development. IT
LOB applications are a set of applications that are vital to running an enterprise organization
including accounting, legal, finance, human resources, payroll, supply chain management, and
resource planning applications, among others.

3.3 The challenge of modern engineering


Software engineering teams in the modern world are under tremendous pressure. Continuous
customer demand for new capabilities and competitive pressures for differentiation necessitate
significantly shorter time-to-market schedules while maintaining the highest quality in software
applications. To address this demand, modern engineering teams often adopt agile
development methodologies, embrace DevOps (a merging of development and operations), and
maintain development infrastructure that supports continuous integration/continuous delivery
(CI/CD).

3.3.1 The modern engineer


Engineers in the modern engineering world must play multiple roles. Everything from gathering
customer feedback and requirements to design, coding, testing, deploying to production, and
even support falls under the purview of a modern engineer.

Just as the SDL is agnostic to any specific development methodology, practice, or tool, the
concepts in this showcase whitepaper apply to this modern engineering world, broadly
speaking. Our goal is to equip modern engineers with a set of tools, guidance, and
processes that empower them to write, deliver, and maintain more secure applications and
services.

3.3.2 The Microsoft IT model


Microsoft IT has been on a journey to adopt a modern engineering model. Business
customers are demanding faster and faster turnaround on solutions and feature requests; gone
are the days when a business waited a quarter or longer for new features, solutions, or bug
fixes in their applications. To respond to this growing need for efficiency and quicker delivery,
Microsoft IT has been transitioning to a modern engineering model. This transition includes
merging development and operations roles (DevOps) and using agile development principles,
practices, and tools to shorten release cycles.

With an agile methodology, Microsoft IT provides the flexibility and speed with which solutions
are released in as short a time as operationally feasible. Agile teams are receiving faster
customer feedback through an iterative design and feature approach, and mature agile teams
often release every day or even multiple times a day. While this is great for business
enablement, this poses a huge challenge for security in terms of how to effectively and
efficiently drive security and privacy in these CI/CD scenarios. For example, consider a security
process that takes two weeks to complete sign off on a release. This model plainly fails when
applied to an agile application which may take, for example, a single week to ideate, create, and
be ready for release. Additionally, the more traditional security approach – to review every
application release – worked well when release cycles spanned months, but this approach is
highly inefficient against modern engineering practices where schedules are much more
condensed.

Given the ubiquity of customer data and critical data, security and privacy are of utmost
importance to consumers. For example, would you feel comfortable using a banking application
on your mobile phone if security and privacy aspects were overlooked by the engineers?
Security can be a source of friction, but it can't be completely ignored either.

So, this is our challenge:

“How can we make security low friction (efficient) while maintaining its effectiveness
in this new world of modern engineering?”

This challenge demands that the security culture and approach are modernized and adapted for
shorter release cycles and sprints. Security teams must support decentralized security processes,
but they must also drive greater automation and move beyond point-in-time assessment
practices. Under these modern engineering challenges, they need to adopt a solution that can
scale and that can provide continuous assurance.

3.4 Our journey


The ISRM team has been on a journey to evolve and enhance our approach to the SDL so that
we are more aligned to DevOps and to modern engineering practices. The intent of this
whitepaper is to share some of our lessons learned to date and to hopefully spark a dialogue in
the security community. We recognize that there is no perfect solution – every business has its
own unique circumstances and factors that impact its security requirements. The journey to align
to this changing engineering culture keeps us motivated to address the challenges it throws our
way.

We will also discuss some of the key trends we see in application security that have started to
redefine not just how we look at application security but how application security processes
such as the SDL are completely re-scoped. For example, with development roles merging with
operations, application security processes also need to evolve to effectively secure the Ops in
DevOps. Finally, we’ll close by sharing some of the lessons we learned in this journey of driving
security into modern engineering practices.

4 A closer look at the challenges


4.1 DevOps culture
An emerging aspect of IT culture, DevOps is defined in several ways across the industry. The
following is one example:

Gartner: “DevOps represents a change in IT culture, focusing on rapid IT service delivery
through the adoption of agile, lean practices in the context of a system-oriented approach.
DevOps emphasizes people (and culture), and seeks to improve collaboration between
operations and development teams. DevOps implementations utilize technology —
especially automation tools that can leverage an increasingly programmable and dynamic
infrastructure from a life cycle perspective.”

We don't want to single out DevOps, or say that the concepts discussed in this whitepaper work
only with DevOps, because it can be difficult to identify a common definition of DevOps. For the
purpose of this discussion, the important aspect of the DevOps movement is to recognize
certain fundamental principles that define for us what we consider “Modern Engineering.” For
more information, see Appendix A.

The principal focus is around people and culture. Development roles and operational roles have
merged, and the expectation from a service or software engineer is not just to develop and test
code, but also to be able to deploy and operate the code effectively. Engineers have full control
over the runtime environment so they can build with predictability. This avoids the "throw it over
the wall" mentality that can occur when development hands off to operations.

4.2 DevOps and security


One of the single biggest reasons teams adopt a DevOps model is to enable CI/CD to address
business demand. The key principles of focus are:

 Speeding up the pace of innovation by shortening the release cycles using agile
methodology, and maintaining control over the entire technology stack (from code
through to the infrastructure and operational practices).
 Enabling faster feedback loops from customers, which can be turned into application features
and bug fixes.

FIGURE 1 COMPONENTS OF DEVOPS

Considering these principles as key components of modern engineering, anything that gets in
the way of this process is friction. More traditional approaches to software security assurance
that rely on gates are seen as friction in modern engineering practices. Here are a few examples:

4.2.1.1 Manual security assessments
Typical white box code review or black box security assessments last anywhere from a few days
to a few weeks, depending on the size and complexity of the application. With application development
sprint cycles shrinking to days, just engaging security teams and scheduling code reviews is a
challenge, let alone reviewing anything on time. This time-consuming process is
counterproductive for the needs of business and engineering teams because it is clearly a speed
bump for fast-paced release cycles.

4.2.1.2 Security compliance/Attestation processes


Any security attestation processes with lengthy questionnaires may seem complete, but they can
also be seen as merely a compliance checkbox exercise with little or no impact on the security of
the system being engineered. This not only adds friction but adds limited value to the
engineering teams, especially if the process mandates attestation for every release. At the very
least, self-attestations help convey a set of baseline expectations and drive awareness. However,
self-attestation for every release is not an effective security control within the fast pace of
modern engineering release cycles.

To better understand this challenge within Microsoft, we conducted multiple Voice of Customer
(VoC) feedback sessions with our engineering teams to understand the pain points and
challenges they experienced.

4.3 Additional requirements


When you look a little deeper at these more traditional software security practices that require
manual review and at the demands of modern engineering, two fundamental requirements
emerge: continuous assurance and intelligent automation. Continuous assurance is needed to
maintain a secure posture and intelligent automation can help scale and keep pace with faster
release cycles.

4.3.1 Continuous assurance


With older, protracted development lifecycles, doing point-in-time assessments to help define a
security assurance level of software may have been sufficient, especially when a software
application went through infrequent changes in production. But with continuous releases, point-
in-time assessments don't work as effectively. We have to provide continuous assurance that is
embedded in the process. Software can be viewed as a living entity that requires continuous
assurance after release, particularly in light of CI/CD. DevOps and security cannot be disparate
silos any longer (a fact now understood in the industry and coined as DevSecOps). We need to
start thinking about DevOps as a culture that is not only about merging development and
operations, but also about how security responsibilities are merged and shared over time.
Security teams will become the providers of these continuous assurance services – spanning
network, host, and application security – that engineering teams will consume to maintain secure
applications.

4.3.2 Intelligent automation
It's natural for any security team to look at some of the challenges articulated thus far and
assume that automation is the solution. It’s correct that automation is a big part of any solution.
However, automation is very easy to get wrong and very hard to get right. A common response
is for security teams to find a wide range of automation tools and “throw them over the wall” at
engineers. In our experience, this doesn't always work for two primary reasons:

 Running tools isn’t enough


If tools are pushed as a quick fix, using them can turn into a compliance activity rather
than a way to reduce risk. Teams may run the tools, but perform little action based on
the output of the tools. Thus, we’ve lost the end goal of automation. To do it right, it’s
important to carefully drive selected metrics to help engineers take action effectively.
The tools alone aren’t a quick fix, but should instead be a part of a well-thought-through
solution to advance security goals.
 Tool fatigue is common
The feedback we received from VoC sessions revealed that engineers can start to
experience tool fatigue if too many tools are thrown at them for compliance. Careful
planning and deployment of tools is key to ensure this does not become a pain point.

It’s important to understand that the solution to the challenge of maintaining CI/CD and security
compliance is not just automation. We need intelligent automation that can yield the kind of
information that engineers can act on to drive positive impact.

5 Our approach
Our approach to the challenges is itself an agile one in which we are evolving the solution
through incremental feedback and updates. We are on this journey to evolve and to enhance
how we integrate security with modern engineering and how we enable our engineering teams
to succeed. The following are the four core tenets around which we are building our solution:

FIGURE 2 FOUR CORE TENETS

5.1 Knowledge management
Having a sound knowledge management process is one of the key aspects of any effective
security solution. Underneath any SDL process are the requirements, which we call Technical
Control Procedures (TCPs). Defining clear technical requirements for engineering teams is the
foundation of an SDL program, and over the years our Control Assessment Library and
Methodology (CALM) board has continuously refined our requirements.

5.1.1 CALM board


A successful knowledge management solution ensures that there is a governance process to
constantly refine and refresh the requirement set. We do this by keeping our TCPs up-to-date
through our CALM board. Subject matter experts (SMEs) from across the security teams sit on
this board. The CALM board meets monthly and makes updates to specific TCPs so that
engineers have the most up-to-date controls. They continuously evolve the library, and they
make sure controls remain relevant as the security landscape changes.

5.1.2 Technical Control Procedures


TCPs are the actionable security controls that engineering teams must implement during the
development of applications. TCPs are the minimum bar for security standards that all LOB
applications must meet. TCPs are technical or process-oriented and are defined to be agnostic
of technology. The following are a few examples of areas these requirements cover:

FIGURE 3 AREAS OF COVERAGE FOR TCPS

TCPs are built around positive rather than negative attributes and are focused towards
engineering teams. We identify the known positive controls that a system should
implement to be secure. In our taxonomy of a TCP, we define (among other attributes):

 How to implement: Implement the control or safeguard in an application.

 How to verify: Verify the technical control procedure to measure whether it is correctly
implemented or needs improvement.

 Why to implement: Align to company security standard(s) and policy for compliance
measurements.

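To make this taxonomy concrete, the following is a minimal sketch (in Python, purely for illustration) of how a single TCP entry could be represented. The field names and example values are hypothetical and are not our actual schema; the point is simply that each control carries its implementation guidance, its verification method, its policy rationale, and its mappings in one place.

    # Hypothetical sketch of a TCP record; field names and values are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TechnicalControlProcedure:
        control_id: str                # stable identifier used in assessments and tool mappings
        title: str
        how_to_implement: str          # guidance for implementing the control in an application
        how_to_verify: str             # how an assessor (or a tool) measures correct implementation
        why_to_implement: str          # alignment to company security standards and policy
        mapped_standards: List[str] = field(default_factory=list)  # standards collapsed into this control
        automated_by: List[str] = field(default_factory=list)      # tools that can detect or correct it

    example_tcp = TechnicalControlProcedure(
        control_id="TCP-CRYPTO-001",
        title="Protect sensitive data in transit",
        how_to_implement="Require TLS for all client and service connections.",
        how_to_verify="Confirm during assessment that endpoints reject plaintext connections.",
        why_to_implement="Satisfies the corporate encryption-in-transit requirements.",
        mapped_standards=["Corporate security policy: encryption", "Data handling standard"],
        automated_by=["Dynamic analysis (transport configuration checks)"],
    )

Representing controls this way is also what makes the tool evaluations and policy alignment described below straightforward: a tool's rule set and multiple overlapping standards can both be expressed as mappings onto the same control.
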
The following are a few example categories from the TCP library and their underlying procedures:

TABLE 1 TCP CATEGORIES

TCPs are the backbone of everything we drive with the engineering teams. The following are
areas of the SDL to which TCPs are integral:

 Core guidance to our development engineers


TCPs are the avenue by which we deliver core guidance to our engineers about how
controls must be implemented and about their applicability. TCPs are the superset of
security controls, and we hold our engineering teams accountable to them through the
SDL process.
 Security control assessment criteria
When an application is selected for a security assessment, all the applicable TCPs based
on the context of the application are tested and verified for correct implementation. Any
identified failures result in risk findings for the engineering teams to fix.
 Tool evaluation
Security tools evaluations are fundamentally focused around which TCPs the tools will
help automate (either for detection, correction, or both). This mapping to TCPs not only
allows us to optimize our manual process, but also helps reveal where we have adequate
tool coverage.
 Compliance alignment to company security policy
TCPs must be aligned to the company’s standards and security policy. This provides the
rationale behind “Why.” Alignment helps business and risk stakeholders determine the
risk in omitting the implementation of TCPs. The alignment function’s biggest payoff is
simplifying the engineering team’s interaction with standards. For example, multiple
standards may have a requirement for encryption, and the TCPs allow us to combine
each of those requirements into a single actionable control for the engineering team.

FIGURE 4 ALIGNMENT TO COMPANY POLICY

5.1.2.1 TCP maintenance


As described earlier, TCPs are maintained by our CALM board and are authored in a technology
agnostic fashion. We review our full set at least quarterly – and also the methodology by which
the assessment occurs – and if we have carefully defined them, there should be few changes. For
example, TCPs that define requirements for input validation should rarely change. However,
when it comes to implementation details that are technology-specific, such as implementing
security controls in Azure, we update the implementation guidance that supports the given TCP
more often. Because this implementation guidance can change more often, we needed to find a
method to address areas in which engineers have frequent questions. The need is also amplified
by the fact that development platforms such as .NET and Azure are updating more frequently.
To keep up with such a pace, we are experimenting with the concept of a “guidance factory.”

5.1.3 Guidance factory


To keep pace with modern engineering, the old form of "guidance documents" just isn't as
efficient. We don't have the luxury of spending several months (or even weeks) to publish an essay
about "Writing secure code for [fill in your platform here]." TCPs are the next step and provide
an excellent minimum bar, but sometimes, engineers need bite-sized snacks of guidance. We
created a guidance factory where we could take requests on demand and turn them around into
bite-sized chunks in a timely fashion.

We started piloting a guidance factory approach in 2015. In this pilot, an analyst posts quick
guidance in response to questions from the engineering team to a SharePoint site. We
leveraged that site as a key input to our TCP maintenance process and continue to evolve that guidance
over time. With this agile approach, engineers receive up-to-date guidance that delves deeper
than our currently published TCPs in a timely fashion, and in the long term we leverage the
guidance produced into the TCPs.

5.2 Automation
To enable teams for CI/CD and continuous assurance for security, automation must be a focal
point in the solution. In support of this, our focus is to develop low-friction security services. We
have identified the following three classes of services as key building blocks for our solution to
deploy and maintain:

FIGURE 5 THREE CLASSES OF SECURITY SERVICES

5.2.1 Static security analysis


Static security analysis discovers security issues by scanning the code against a set of rules. It is
one of the most valued activities in any mature SDL process. When static analysis is run in an
Integrated Development Environment (IDE), engineers are exposed to potential security issues
while they are coding the application. This instant feedback to engineers makes static analysis
highly effective. The following foundational security principles (that static analysis promotes) are
why we have focused so much on this security service capability:

5.2.1.1 Code hygiene


Known security issues should not be present in the code we write and deploy.
Granted, this is easier said than done, but this is our aspiration. Engineers are accountable to
ensure code hygiene is addressed, and by leveraging static analysis they can do this effectively
and efficiently. This is not to say that static analysis tools are perfect or that they are guaranteed
to catch all issues. However, having a clean scan that reports zero issues – from a static analysis
tool that addresses all known true positives – is what we consider basic hygiene.

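As a purely illustrative example (written in Python for brevity, and not output from Fortify SCA or any specific tool), the following shows the kind of defect a static analyzer flags while an engineer is coding, along with the remediation it typically points toward:

    import sqlite3

    def find_user_unsafe(connection: sqlite3.Connection, user_name: str):
        # Typically flagged: untrusted input is concatenated directly into a SQL statement.
        query = "SELECT id, name FROM users WHERE name = '" + user_name + "'"
        return connection.execute(query).fetchall()

    def find_user_safe(connection: sqlite3.Connection, user_name: str):
        # Typical remediation: a parameterized query, so input is never treated as SQL.
        query = "SELECT id, name FROM users WHERE name = ?"
        return connection.execute(query, (user_name,)).fetchall()

Catching and fixing this class of issue at coding time, rather than during a later manual review, is exactly the hygiene and upstream shift described in this section.
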
5.2.1.2 Code coverage


Manual line-by-line security assessment is a tiring activity (the human eye must examine lines of
code for days), and it is also time-consuming and expensive. Additionally, manual inspection can
vary greatly based on the experience and knowledge of the security analyst who looks at the
code, which affects the consistency of the code reviews. Static analysis tooling, however, is
consistent in terms of its rules and can cover millions of lines of code in a short period of time.
This type of security service can provide the scale and code coverage that we needed. In fact,
combining static analysis with human code review is one of the best ways to ensure all aspects
of the application have been vetted. By leaving the common and tactical issues to the
automated tool, we found that our most experienced security SMEs can focus their time on
other important aspects such as design issues and business logic issues in higher-risk
applications.

5.2.1.3 Transparent security


We have found that static analysis is most effective when it is integrated into the build process
for the application. This makes the security control a part of the engineering process and makes
it transparent to the end user.

5.2.1.4 Moving security upstream


The traditional security model verified code security after a piece of software was code
complete, when issues are costly to fix. Static analysis allows us to move the code verification into the
engineers’ IDEs so that security issues can be caught closer to the time code is written instead of
waiting for the security team to catch all issues after the application is code complete.

5.2.1.5 Security education


As engineers get used to running static analysis for their day-to-day code development, they are
exposed to security concepts and rules as they triage and learn from regular scan results. This
helps to raise the security awareness within the engineering teams.

5.2.2 Dynamic security analysis


Dynamic security analysis examines how code runs during the execution of a program. By
reviewing how the code responds to and interacts with other system components, a dynamic
security analysis tool can uncover security issues within the runtime. Dynamic security analysis
discovers security issues by mimicking an attacker’s behavior on the actual application. We
believe this service complements static analysis, and a combination of the two yields the best
results in terms of completeness.

The following are security principles that dynamic analysis addresses and reasons why we think
it is an essential part of the overall security automation strategy:

5.2.2.1 Hygiene
Dynamic security analysis helps engineers to test an application during runtime. As they test
their application workflow, dynamic security analysis is another tool or service in their arsenal
that they can leverage to verify that security defects that only surface when the application runs
have been fixed.

5.2.2.2 Coverage
Dynamic security analysis can uncover types of issues that may not be discoverable by
static analysis, or it can help validate issues discovered during static analysis. For example, the
impact of environment and server-level configuration issues is easier to detect by running
dynamic security scans. Potential information leaks that can occur as a result of interaction with
the application can also be easier to discover through dynamic analysis versus static analysis.
These issues can have severe impact and may be overlooked without effective dynamic
scanning.

5.2.2.3 Security education


Similar to static analysis, dynamic security analysis also helps to raise security awareness within
the engineering teams.

5.2.3 Runtime detection and prevention


A significant focus of the SDL is on proactive security controls and processes during
development to ensure that the LOB applications we develop are secure from the ground up.
There is a growing need to protect applications in a production environment as the threat
landscape continues to evolve. Relying on static and dynamic analysis alone may not be
sufficient. Code deployed to production may have gone through the SDL and static and dynamic
analysis. However, due to “operational drifts” (where, for example, configurations of a runtime
environment are altered), bug fixes, an evolving threat landscape, and the fact that agile teams are
always evolving code, there must be another layer of protection in a production environment.

Runtime detection and protection for applications – a technology that we think is still maturing in
the industry – provides continuous assurance during application runtime. This is a nascent space,
but it is a must-have service capability for the future and will become a crucial part of the overall
application security strategy.

We have identified the following risk areas that will benefit from a runtime detection service:

5.2.3.1 DevOps and cloud


Engineering teams continue to move toward a DevOps model, and also towards the cloud. This
further shortens the end-to-end time to market (TTM) for a solution. It’s important that
engineers use a detection service that hooks into the runtime and provides continuous security
assurance and telemetry. This monitoring data enables the engineering teams and security
teams to identify potential bad actors on the application layer. We believe this is a crucial
component of securing the Ops in DevOps for modern engineers.

5.2.3.2 Plugging the gap for SOC


Currently, the Security Operations Center (SOC) for most organizations has visibility into hosts,
networks, and endpoints from a monitoring standpoint. If an attack occurs at the application
layer, very little is detected. A view from just the network is not enough anymore.

5.2.3.3 Legacy applications
Runtime detection can be an incredibly powerful tool to protect legacy applications which may
no longer have any engineering support and are "sunset" applications. Such applications tend to
use older technology stacks, which are less secure and easy targets for attack. There could be
cases where some applications are so old that they might not have gone through any software
assurance processes whatsoever.

5.2.3.4 Compensatory control


Due to time and resource constraints that agile teams face, even when application vulnerabilities
are known, changing code to address new and existing threats could take time – often weeks or
months – especially under non-agile methodologies. In such cases, teams with applications that have known
security issues may choose to implement runtime detection as a compensatory control for the
time they need to fix the issues, and beyond.

5.3 Implementation
5.3.1 Static analysis
There is no such thing as perfect security, but the pursuit of thoroughness is important because
one vulnerability is all it takes to compromise an application. In ISRM, we use HPE Fortify SCA to
conduct automated static security analysis of our applications. After evaluating many static
analysis tools, we chose Fortify SCA because of its thoroughness, its ability to facilitate
collaboration, and most importantly, its coverage against our TCPs.

Fortify SCA is a static application security solution. It traverses your code, identifies potential
security issues, and compiles a list for you to review. A human determines which issues are
serious flaws that must be fixed and which issues are false positives or low priority and can be
deprioritized or ignored.

Fortify offers two desktop IDEs to view and aggregate issues. Our engineers use the Visual
Studio plugin to view Fortify SCA results inside their development environment, while most of
our analysts use the Fortify proprietary IDE, Audit Workbench. Everyone accesses scan results
from the same Fortify Software Security Center (SSC) server, which ensures that all comments
and reviews are preserved from one scan to the next. Build engineers configure regular
scans that automatically upload results to the Fortify SSC server, ensuring everyone has
recent scans to work from.

We started the journey with Fortify in 2014, and we use the following three capabilities with
Fortify for our needs in ISRM:

 Build integration
This is our preferred method. We've integrated Fortify SCA static analysis with the
application build processes. Based on the schedule and release cadence, static analysis
runs as well, and all the results from the scan are then automatically uploaded to the
Software Security Center (SSC) server. (A sketch of what such a build step can look like
appears after this list.)
 Self-hosted scans
Self-hosted scans are one-time scans. This provides a scanning method for those
applications that either do not have a build environment for integration or that are one-
time applications. Examples are marketing campaign applications that focus on a specific
event or applications with a shelf life of less than six months.
applications leverage self-hosted scanning to perform Fortify SCA static security
scanning. Results are then uploaded to the SSC server. Engineers are encouraged to
utilize this feature so they can see scan results within their IDE before their code is
uploaded to build servers.
 Static security analysis on-demand
On-demand static security analysis provides engineering teams with white glove
treatment. A central security team scans the code on behalf of the engineering teams
and helps the team triage the issues. This is similar to HPE’s Fortify on Demand service,
but it is run within the organization by our central security team. We developed this
capability primarily to handle venture integration scenarios. For example, this capability
comes in handy when your company inherits LOB applications as a result of an
acquisition and the engineering teams managing those applications have little or no
exposure to static analysis. You will need to hand-hold them to get them started, then
slowly move to self-hosted scans and finally to build integration.

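The following is a simplified sketch of what a scheduled build-integration step can look like, written here as a Python wrapper around the Fortify command-line tools. The build ID, solution name, server URL, token, and application name are placeholders, and the exact sourceanalyzer and fortifyclient arguments vary by build system and SSC version; the sketch only illustrates the translate, scan, and upload flow, not our actual pipeline configuration.

    # Illustrative build step: translate, scan, and upload Fortify SCA results.
    # All names, paths, and the server URL are placeholders.
    import subprocess

    BUILD_ID = "MyLobApp"                    # Fortify build ID that groups translated files
    SSC_URL = "https://ssc.example.com/ssc"  # placeholder SSC server URL
    SSC_TOKEN = "<upload-token>"             # upload token generated in SSC

    def run(command):
        """Run a command and fail the build step if it returns a non-zero exit code."""
        print("Running:", " ".join(command))
        subprocess.run(command, check=True)

    # 1. Clean any previous translation for this build ID.
    run(["sourceanalyzer", "-b", BUILD_ID, "-clean"])

    # 2. Translate the sources by wrapping the normal build command
    #    (the wrapped command differs by build system; see the Fortify SCA documentation).
    run(["sourceanalyzer", "-b", BUILD_ID, "msbuild", "MyLobApp.sln", "/t:Rebuild"])

    # 3. Analyze the translated code and write the results to an FPR file.
    run(["sourceanalyzer", "-b", BUILD_ID, "-scan", "-f", "MyLobApp.fpr"])

    # 4. Upload the results to the SSC server so analysts and engineers work from the same scan
    #    (flag names such as -application/-applicationVersion differ between SSC versions).
    run(["fortifyclient", "uploadFPR", "-file", "MyLobApp.fpr",
         "-url", SSC_URL, "-authtoken", SSC_TOKEN,
         "-application", "MyLobApp", "-applicationVersion", "1.0"])
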
5.3.2 Fortify SCA and intelligent automation


Vetting Fortify SCA and enabling our development teams to use it is our best example of
intelligent automation. Introducing automation into a development environment reduces the
burden on engineers of using security tools and processes. Throwing a tool over the wall and
expecting that a team will not only use the tool, but benefit from the data, may not set a
development team up for success. Time is a precious commodity to engineers and onboarding a
new tool can be far outside their scope. In this sense, a security team needs to become a “team
enabler,” and must work with development teams to find the best methods to fit the tool into
their workflow. It must become a part of their environment and must work within their
established processes. To that end, and focusing on team enablement, we partnered with our
engineering teams to help integrate the tool into their build processes.

5.3.3 Fortify SCA implementation process


We enabled our engineering teams to succeed with Fortify SCA during implementation by
designing two process tracks: one for the build teams and one for development teams. The
following sections describe the steps we've followed for each track, including the
phases that are used for training.

5.3.3.1 Build track
The build track is where the heavy lifting occurs. We work with the build owner to ensure that
Fortify SCA is installed on their build environment and is configured and scheduled to run based
on their release cadence. We then run the first scan and hand off to the development track.

5.3.3.2 Development track


Once the build automation is complete, and we receive the first baseline results, we triage the
results together with the engineering teams. We meet with all stakeholders and help them
understand the review process so that they can begin to own the process in their environment.

5.3.4 Fortify SCA deployment architecture


Our Fortify SCA deployment currently has nine Fortify SSC servers, each running with its own
database on a shared database server. Each Fortify SSC server is reachable at a distinct URL
and is independent of the others. One of the Fortify SSC servers is dedicated to our one-time
scan activity to ensure that we have good separation before teams fully onboard the tool. We
use views in each database to collect data for reporting, which is then migrated at regular
intervals to the reporting warehouse. Based on our experience, we chose to use three separate
User Acceptance Test (UAT) environments:

 Environment we use for testing the current version of Fortify SSC.


 Environment we use for all the patches Fortify provides us to test that our issues are
fixed before they are rolled into a major release.
 Environment we use for testing underlying operational changes or updates.

FIGURE 6 UAT ENVIRONMENTS

5.3.5 Shortfalls and opportunities
We faced several challenges when we began to deploy Fortify SCA in our large enterprise space.
The following are some of the shortfalls, including ways that Fortify SCA is helping us address
them, or will address them, in the future.

5.3.5.1 SSC hierarchy


Fortify SSC does not currently allow for load balancing and horizontal scaling. Because of the
size of our organization, and because we were unable to balance the load of our Fortify SSC
servers, we knew we would have to manually scale the solution. We currently manage nine
Fortify SSCs in our environment for static analysis and two for dynamic analysis. Each group in
Microsoft IT is so large (from an application portfolio perspective) that initially we chose to
separate by group, which was an easy distinction to keep the load manageable. We further
addressed load issues by identifying which group uses which Fortify SSC server in reports. Even
through reorganizations and group name changes, we have been able to manage the load
consistently by using this approach.

Achieving better horizontal scaling is still an opportunity. However, the performance of the
Fortify SSC servers has improved greatly over the past several releases and made this scaling
less necessary.

5.3.5.2 Differential scanning


Currently Fortify SCA scans the entire code base with each scan to determine code changes and
any security issues in the code. With agile and DevOps picking up the pace of development, security
teams are looking for ways to deliver results in a shortened timeframe. The ability to perform
differential scanning on only the code that has changed since the last scan is key to making
scans run faster and more efficiently.

While using Fortify SCA as our analysis tool, we found an opportunity to openly communicate to
the Fortify team about our needs as an enterprise organization along with our feedback. In
doing so, we’ve found tremendous value in moving beyond a customer/vendor relationship. The
partnership we’ve built with the Fortify team in our journey has yielded important advances in
our ability to secure our code. We believe our relationship with the Fortify team, and
continuously sharing information, has uncovered valuable new insights that not only helped us,
but helped Fortify as well.

5.3.6 VSTS integration


Much like the rest of the industry, Microsoft is moving many of our key resources into the cloud,
including moving software engineering to Visual Studio Team Services (VSTS). This platform has
become a focal point for our engineering teams. As a security organization, ISRM knew that it
was necessary to find ways to integrate our programs and the scan results into this platform to
help enable the engineering teams. With the support of the Microsoft Engineering group behind
VSTS, we worked together with the Fortify team to design and build a VSTS extension that

allows our engineering teams to remain in compliance with our internal security policies without
slowing them down. This integration of Fortify within Microsoft Visual Studio Team Services
highlights one of the most beneficial achievements to come out of our relationship with the
Fortify team.

The goal with VSTS integration was not merely the basic use of Fortify in VSTS like you would
use any source repository system. This was about building a solution that allows minimal input
from the user and that would enable the teams to use Fortify with less friction in a deployment
phase. We wanted to make the Fortify scan feel more like doing a hosted build in VSTS.

Another significant aspect is the ability to enable Fortify scanning through a virtual machine,
which is preconfigured and ready for teams to use directly in Microsoft Azure. Teams can pick
the Fortify build machine out of the Azure store and it will have Fortify ready to go. Scanning
machines will spin up on demand instead of requiring dedicated machines that are only used
periodically for scans. This on demand service improves the efficiency of both the engineering
and security teams.

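As a simplified illustration of the on-demand model (and not the actual extension code), the following sketch queues a build definition that includes the Fortify scan task by calling the VSTS build REST API. The account, project, definition ID, personal access token, and API version are placeholders or assumptions.

    # Hypothetical sketch: queue a VSTS build that runs a Fortify scan on demand.
    import base64
    import json
    import urllib.request

    ACCOUNT = "contoso"        # placeholder VSTS account name
    PROJECT = "LobApps"        # placeholder team project
    DEFINITION_ID = 42         # placeholder build definition that includes the Fortify task
    PAT = "<personal-access-token>"

    url = ("https://{0}.visualstudio.com/DefaultCollection/{1}"
           "/_apis/build/builds?api-version=2.0").format(ACCOUNT, PROJECT)

    body = json.dumps({"definition": {"id": DEFINITION_ID}}).encode("utf-8")
    auth = base64.b64encode((":" + PAT).encode("utf-8")).decode("ascii")

    request = urllib.request.Request(url, data=body, method="POST")
    request.add_header("Content-Type", "application/json")
    request.add_header("Authorization", "Basic " + auth)

    # Queue the build; the scanning virtual machine is provisioned on demand,
    # runs the Fortify tasks, and publishes results back for the team to triage.
    with urllib.request.urlopen(request) as response:
        queued = json.loads(response.read())
        print("Queued build", queued.get("id"), "status:", queued.get("status"))
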
FIGURE 7 FORTIFY SCA ON VSTS

5.3.7 Dynamic analysis
In ISRM we use HPE Fortify WebInspect to conduct dynamic web application security testing. It
complements static code analysis performed through Fortify SCA, and both use a common
Fortify SSC server for management and reporting.

Our approach for deploying dynamic security analysis was pragmatic. We rolled out WebInspect
initially as an SDL requirement to help teams discover issues during application testing. We were
very aware of the tool fatigue issue our engineers raised, so we focused on static security
analysis first and then moved to dynamic analysis once we achieved sufficient
adoption of static analysis.

FIGURE 8 WEBINSPECT

In the first year, we encouraged all applications (with web interfaces) to run WebInspect by self-
scanning and uploading the results for compliance. Multiple application teams used this model,
and some lessons we learned included the following:

 Low-quality scans: The quality of scans submitted to Fortify SSC was low; they were
either incomplete or misconfigured. We realized that engineers must be
trained in the tool to run effective scans. For example, running a basic crawl without
configuring macros can yield incomplete or duplicative scans of the same resource.
 Resources for scans: Application teams complained that there was a lack of dedicated
servers from which to conduct self-scans, particularly for larger applications and services.

These lessons paved the way to make changes in our dynamic security scanning strategy. We
decided to onboard higher risk applications to a dynamic security scanning service. This service
uses a security team to engage with the engineering team once and provide that team with
automated, scheduled macro-based scans. Because the scans are led and configured by a
security team, we are better assured of the quality of the scans and their results. We are also
evaluating the use of APIs to further streamline the scan scheduling and kickoff process.

5.3.8 WebInspect deployment architecture


The diagram below describes the two main implementation methodologies for WebInspect at
ISRM.

 The right side of the diagram illustrates continuous dynamic scanning for high-risk
applications. These scans are run as periodic scheduled scans on the WebInspect
Enterprise sensors controlled from the WebInspect Enterprise server.
 The left side illustrates dynamic scanning for all other applications. These scans are run
by engineering teams on their own infrastructure and are uploaded to the WebInspect
Enterprise server.

FIGURE 9 WEBINSPECT DEPLOYMENT ARCHITECTURE

5.3.9 Runtime detection and protection


Currently there are two technology solution types to tackle application runtime security
detection and protection challenges. These are Web Application Firewall (WAF) and Runtime
Application Self-Protection (RASP). ISRM looked into both these technology solution types to
address the runtime detection and prevention challenge. We believe RASP is the more
promising of the two, although RASP technology is evolving and more work is needed before it
becomes the mature solution we need.

5.3.9.1 WAF
A WAF works by sitting in front of an organization’s web application stack. It analyzes all
incoming traffic for attack patterns and blocks any suspected malicious requests from reaching
the web application itself.

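As a purely conceptual sketch (not any particular WAF product), signature-based filtering of the kind described above can be pictured as follows; the patterns are deliberately simplistic, which also illustrates why such filtering can be bypassed as attacks become more sophisticated.

    import re

    # Simplistic blacklist signatures, illustrative only.
    SIGNATURES = [
        re.compile(r"(?i)<script\b"),        # naive reflected XSS pattern
        re.compile(r"(?i)union\s+select"),   # naive SQL injection pattern
        re.compile(r"\.\./"),                # naive path traversal pattern
    ]

    def is_suspicious(request_body: str) -> bool:
        """Return True if the request matches any blacklist signature."""
        return any(pattern.search(request_body) for pattern in SIGNATURES)

    # A matching request would be blocked before reaching the application, but a
    # slightly altered payload can slip through - one of the limitations listed below.
    print(is_suspicious("id=1 UNION SELECT password FROM users"))    # True (blocked)
    print(is_suspicious("id=1 UNION/**/SELECT password FROM users")) # False (bypassed)
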
However, typical WAFs suffer from a number of limitations, including – but not limited to – the
following:

 Utilizes signature-based pattern matching and blacklist filtering, which can be bypassed
as attacks become more sophisticated.
 Lacks insight into application logic, data, and event flows because it does not have
application context.
 Has a high false positive rate in its detection of malicious activity.
 Has relatively limited vulnerability category coverage.
 Requires extensive firewall rule coverage and maintenance to stay in sync with
application changes.

Even with these limitations, WAFs can provide a level of protection that justifies their cost
(including operational maintenance) and can be an effective control in many scenarios.

5.3.9.2 RASP
RASP is a fairly new security technology. It is hooked into an application or application runtime
environment and is capable of controlling application execution to detect attacks and prevent
them in real time.

We have been evaluating a RASP technology from HPE Fortify called Application Defender to
provide runtime detection and protection. Our evaluation for RASP has been focused around
the following key features:

 Enterprise management: How does the technology scale for an enterprise deployment?
 Performance and scalability: How does the technology impact the application's
performance?
 Agent installation and maintenance: How easy is it to install and manage agent
updates as new rules for attacks roll out? Does it require a server restart?
 Platform support: Does the technology work seamlessly for on-premises applications
and cloud applications, particularly Platform as a Service (PaaS)?
 Ruleset quality: What is the current ruleset coverage when mapped to our TCPs, and
what is the update frequency of the ruleset?

It’s clear that any tool that sits in a production environment needs to be carefully vetted against
all the aspects listed above before deployment. We have been working closely with the Fortify
team to vet the solution details and to bridge the gaps so that this technology is ready for our
needs.

The following are two key areas we see that require enhancements:

•	Azure PaaS support. Applications running in Infrastructure as a Service (IaaS) and PaaS (Web Roles) environments can be supported, but the deployment automation is yet to be determined. App Defender also has challenges with uniquely identifying Azure PaaS hosts, which prevents effective monitoring.
•	Seamless agent upgrades. An update to the agent installed on the host application can require an IIS or application server restart, depending on the application type. This can be a significant operational burden for engineering teams.

We plan to test the on-premises solution for App Defender against different scenarios and
applications to identify what it would take to deploy this technology in the least disruptive way
for the engineering teams. We intend to deploy an end-to-end scenario in which App Defender
is configured in “Detection Mode Only” at first, and then integrate the feed into a security
information and event management (SIEM) technology such as HPE Security ArcSight. Our goal
is to directly enable our engineering teams with detection capabilities that can be incorporated
into their telemetry systems so they can gauge not only the operational state of their application
but also its security posture.
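To make the SIEM hand-off concrete, the following is a minimal sketch that forwards a runtime detection event to a syslog collector in Common Event Format (CEF), which ArcSight can ingest. The collector address and event fields are assumptions for illustration; this is not the App Defender integration itself.

import socket

# Placeholder collector address and port; replace with your SIEM's syslog endpoint.
SIEM_HOST = "siem.example.com"
SIEM_PORT = 514

def send_detection_event(app_name, rule, severity, details):
    # CEF header: CEF:Version|Vendor|Product|Version|SignatureID|Name|Severity|Extension
    cef = ("CEF:0|ContosoIT|RuntimeProtection|1.0|{0}|{0}|{1}|app={2} msg={3}"
           .format(rule, severity, app_name, details))
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(cef.encode("utf-8"), (SIEM_HOST, SIEM_PORT))

# Example: forward one detection event in "detection only" mode.
send_detection_event("payments-web", "SQL_INJECTION_ATTEMPT", 8,
                     "blocked query containing tautology pattern")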

A solution like App Defender can provide very advanced application monitoring capability. It is a
key component of enabling continuous assurance, and it also helps share the responsibility for
security with the engineering teams, enabling our modern engineers to gain the appropriate
security insight into their applications.

5.3.10 Automation factory


Having the ability to license third-party enterprise tools like Fortify SCA and WebInspect helps
security teams like ours focus on the operational aspects of the service rather than the
engineering and support costs of building tools in-house. But this isn’t to say that internally
developed, custom-built tools or scripts aren’t useful. Much like the rest of the industry,
Microsoft IT is moving its workloads and application portfolio to Azure. Azure has enabled
DevOps to really flourish by effectively democratizing the infrastructure. Engineers can now
control virtual networks and host configuration through code changes. This not only opens up
tremendous opportunities for engineering teams but, if leveraged correctly, also enables security
teams to drive more effective continuous assurance.

To address this shift, we’ve made it a priority to develop what we call the “Secure DevOps Kit for
Azure” which contains a set of tools, extensions, code snippets, and other automations.

More specifically, the Secure DevOps Kit for Azure contains the following components:

•	A package of scripts and programs that ensure more secure provisioning, configuration, and administration of an Azure subscription.
•	A set of extensions, plug-ins, templates, and PowerShell modules that empower a developer to create, check in, and deploy code with greater security from the beginning.
•	A tool that can capture a snapshot of the “secure” state of a subscription and its application resources, watch for drift from that state, and enable operational security compliance (a minimal sketch of this drift-checking idea follows this list).
•	A package that gives enterprise IT teams visibility into the shape and form of all of the above, along with extensions that can provide application-layer events and alerts for individual service line teams.
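As a minimal, tool-agnostic sketch of the drift-checking idea behind the third component, the snippet below captures a baseline of a "secure" configuration and reports any settings that have drifted from it. The configuration keys and baseline file are hypothetical placeholders, not the kit's actual implementation.

import json

def capture_baseline(config, path="secure_baseline.json"):
    """Snapshot the known-good configuration to a baseline file."""
    with open(path, "w") as f:
        json.dump(config, f, indent=2, sort_keys=True)

def detect_drift(current, path="secure_baseline.json"):
    """Return the settings whose current values differ from the baseline."""
    with open(path) as f:
        baseline = json.load(f)
    return {key: {"expected": value, "actual": current.get(key)}
            for key, value in baseline.items()
            if current.get(key) != value}

# Example: flag a TLS setting that drifted from the captured secure state.
secure_state = {"https_only": True, "min_tls_version": "1.2", "public_blob_access": False}
capture_baseline(secure_state)
observed = {"https_only": True, "min_tls_version": "1.0", "public_blob_access": False}
print(detect_drift(observed))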

FIGURE 10 SECURE DEVOPS KIT

In the spirit of agile development, this kit is a work in progress. We are hoping to refine many of
the ideas as we iterate and enhance it over sprints with regular feedback from our engineering
community.

5.4 Metrics focused on driving the right behavior


With a foundation of robust knowledge management in place, together with intelligent
automation using industry tools and internally developed custom tools and scripts, we need to
make sure we are driving the right behavior with the engineers and enabling them to succeed.
We do this by defining carefully selected metrics which are not only actionable but also drive
positive reinforcement.

It’s easy to come up with metrics such as the number of bugs or the number of fixes to measure
application quality (security or otherwise). However, that is a rather antiquated model that
worked better with traditional engineering practices. In the modern engineering model, where
the pace of innovation and continuous assurance are the core drivers, we focused instead on
mean-time-based metrics such as mean-time-to-triage (MTTT) and mean-time-to-fix (MTTF).

These metrics measure how quickly engineers respond to security issues, first by triaging them
and then by fixing them within their release cycles. Enabling our engineers with automation is
critical, but we also need to measure how quickly they are leveraging that automation to drive
continuous assurance.
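As a simple illustration, MTTT and MTTF can be computed from issue-tracking timestamps as shown in the sketch below; the record fields and dates are made-up examples, not our actual bug schema.

from datetime import datetime
from statistics import mean

# Illustrative issue records; field names do not reflect an actual bug schema.
issues = [
    {"created": "2016-05-01", "triaged": "2016-05-03", "fixed": "2016-05-20"},
    {"created": "2016-05-10", "triaged": "2016-05-11", "fixed": "2016-06-02"},
]

def days_between(start, end):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

# Mean-time-to-triage and mean-time-to-fix, measured from issue creation.
mttt = mean(days_between(i["created"], i["triaged"]) for i in issues)
mttf = mean(days_between(i["created"], i["fixed"]) for i in issues)
print("MTTT: {:.1f} days, MTTF: {:.1f} days".format(mttt, mttf))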

Automation metrics are one dimension we considered while developing an all-up Application
Security (AppSec) Health Index for an application or for a portfolio of applications managed by a
team. We’ll discuss the details of the AppSec Health Index later in the paper, but first we want to
discuss the concept of the “four stages of competence,” which forms the backdrop for why we
developed the AppSec Health Index in the form we did.

The four stages of competence model (developed by psychologist Noel Burch in the 1970s) is a
helpful illustration of the stages that an individual goes through to learn a new skill. We’ve
applied it to the process that an engineer progresses through as they learn how to achieve
security hygiene with the AppSec Health Index.

When learning a new skill, individuals or teams progress through each phase, from “unconscious
incompetence” (an unskilled state) to “unconscious competence” (a state of expertise).

We think this illustration of the progression of skills is a great concept and is a basis for what we
wanted to achieve when creating a health index for our AppSec program.

The AppSec Health Index measures the impact of our SDL in maintaining a secure posture of an
application portfolio. It is a qualitative metric that is used to indicate the application security
health of a team or organization. The index is derived from four underlying quantitative metrics:

TABLE 2 APPSEC HEALTH INDEX QUALITATIVE METRICS

It’s important to note that the targets for each metric should be based on an organization’s
maturity and needs. For example, in some organizations, having SDL compliance for as little as
50% of the portfolio may be a reasonable target; in others, aiming for 100% compliance may be
realistic. Determining the right target is not trivial and can often take many iterations to
converge. However, it’s important to stress that the numbers above are examples only and
should not be treated as prescriptive targets for a company seeking the ideal application security
posture.

If we apply the four stages of competence to our AppSec program, and factor in the AppSec
Health Index metrics, the progression of skill might look like the following:

FIGURE 11 STAGES PROGRESSION

Finally, to calculate the single qualitative AppSec Health Index, we analyze the four quantitative
metrics to determine the conditions that set its value. One approach is to define the overall
AppSec Health Index as three tiers of Green (on target), Yellow (close to target), and Red (off
target) and set the value accordingly:

FIGURE 12 APPSEC HEALTH INDEX
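A hypothetical encoding of such a tiering rule is sketched below; the metric names, targets, and thresholds are examples only and do not represent our actual criteria.

# Hypothetical roll-up of four quantitative metrics into a Green/Yellow/Red index.
def appsec_health_index(metrics, targets):
    # The fraction of metrics meeting their target determines the tier.
    met = sum(1 for name, target in targets.items() if metrics.get(name, 0) >= target)
    ratio = met / len(targets)
    if ratio == 1.0:
        return "Green"    # on target
    if ratio >= 0.5:
        return "Yellow"   # close to target
    return "Red"          # off target

# Example metric names and values; placeholders only.
example_targets = {"sdl_compliance": 0.8, "automation_adoption": 0.7,
                   "triage_sla_met": 0.9, "fix_sla_met": 0.9}
example_metrics = {"sdl_compliance": 0.85, "automation_adoption": 0.75,
                   "triage_sla_met": 0.95, "fix_sla_met": 0.6}
print(appsec_health_index(example_metrics, example_targets))  # prints "Yellow"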

5.4.1.1 Driving competition
The concept of using metrics to drive competition or gamification is not new in security.
Microsoft engineering teams love to compete to be at the top and to be the best among their
peers. We approached this by publishing the AppSec Health Index quarterly on a scorecard for
all engineering teams in Microsoft IT. This drove a sense of competition and encouraged teams
to mimic the best practices of the teams at the top of the scorecard.

5.4.1.2 Positive reinforcement


Security teams traditionally point out negative metrics and gaps in the scorecard for the
engineering teams. Although these are required at times to drive action and to reduce risk,
consistently calling out only negative metrics does not promote the positive behavior we
ultimately want to instill. We took the natural alternative approach, in which the metric also
reflects positive reinforcement for the teams that are doing well. We weren’t surprised to learn
that showcasing positive behavior goes even further when highlighted in the right way. Our
AppSec Health Index helps us surface and drive positive reinforcement. For example, we started
publishing the AppSec Health Index on a quarterly risk scorecard across all our business groups.
This initiative not only drove a sense of competition and positive behavior, but it also increased
health indexes across those groups.

5.4.1.3 Auxiliary metrics


Finally, we also developed an auxiliary metric specifically around trending information for
“Fortify Top 10.” As more and more applications began using automated static security
scanning, we wanted to identify what the top trending Fortify issues were across our
engineering community and within specific engineering teams. This metric helps uncover
potential pain points such as a high number of unfixed issues in a certain category or in a certain
pocket of the business, which may require special attention. We have driven multiple targeted
campaigns and programs to deal with the pain points this metric revealed.

FIGURE 13 SAMPLE - FORTIFY TOP 10 VULNERABILITIES

We also found bug density data valuable for determining an engineering team’s security
maturity and for enabling conversations with the teams to figure out corrective actions.

FIGURE 14 SAMPLE - BUG DENSITY (PER 10K LOC)
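For reference, bug density here simply normalizes the count of security bugs per 10,000 lines of code; the sketch below uses made-up numbers.

# Bug density normalized per 10,000 lines of code (10 KLOC); numbers are made up.
def bug_density_per_10kloc(security_bugs, lines_of_code):
    return security_bugs / (lines_of_code / 10_000)

print(bug_density_per_10kloc(12, 240_000))  # 0.5 bugs per 10 KLOC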

5.5 User experience


So far we have discussed building the right knowledge management, putting intelligent
automation in place, and capturing it all with effective metrics to drive the right behavior within
our engineering community. All of this could be in vain if we don’t take a very explicit look at the
overall user experience. Users in this context are our software engineers who interact with and
use this solution.

Oftentimes, security teams underestimate the importance of end-to-end user experience while
creating and deploying security solutions. Although we’ve made significant progress as an
industry, we think the security industry as a whole has been behind on this – just look at the
plethora of security solutions that treat user experience as an afterthought! Security is critical,
but that doesn’t give us an excuse to ignore user experience. We focused on simplicity and user
experience when thinking about our end-to-end solution. Some of the concepts that helped
orient our work include the following:

5.5.1.1 If it compiles, it complies


How can we integrate security seamlessly with the engineering experience so it does not feel
like a bolted-on solution but is instead part of the experience? For example, using security static
analysis in build environments, we can define rules that automate TCP verification as part of the
compile and build processes. If all controls have been implemented correctly, the code simply
complies cleanly. It’s important to note that this is an aspirational state. We don’t believe we are
yet at a point where all known security issues can be effectively detected during build in an
automated fashion.
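A minimal sketch of such a build-time gate is shown below. It assumes the static analysis step writes its findings to a simple JSON file; that file name and shape are assumptions for illustration and do not represent the actual Fortify output format.

import json
import sys

def gate(findings_path="scan_findings.json", blocking_severity="critical"):
    """Return a non-zero exit code if any blocking findings remain."""
    with open(findings_path) as f:
        findings = json.load(f)
    blockers = [item for item in findings if item.get("severity") == blocking_severity]
    for issue in blockers:
        print("BLOCKING: {} in {}".format(issue.get("category"), issue.get("file")))
    return 1 if blockers else 0   # a non-zero exit code fails the build step

if __name__ == "__main__":
    sys.exit(gate())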

5.5.1.2 The secure way should be an easy way


Always look for opportunities to define the secure way as an easy way. For example, using
automation is the easy way. Running Fortify in build environments instead of employing a
manual assessment saves weeks of time. And instead of hiring pen testers to test an app over
several weeks, periodic WebInspect scans can save time and resources.

User experience is a journey and we are still working toward achieving the optimal end-to-end
user experience for our process. If engineers are treated like customers and if they are engaged
during tool evaluations, process redesign, and overall user experience discussions, we believe
this can yield very powerful results.

5.5.2 Taking security to engineers


5.5.2.1 Fortify integration with IDE
One of the reasons engineers value Fortify is that it is available in their Integrated Development
Environment (IDE); namely, Visual Studio (VS). Visual Studio is where engineers spend most of
their time, and having security plugged right into the environment is very convenient for them
to use and interact with. They can view the results of the scan in their familiar interface and can
triage and collaborate by responding to the issues within the same UI. The following is a
screenshot taken from our training material which we use to articulate the value of Fortify to our
engineering teams.

FIGURE 15 FORTIFY PLUGIN FOR VISUAL STUDIO

5.5.2.2 Fortify build-integration


By integrating Fortify in the build process, we took another step to take security to engineers
directly. This integration promotes transparency and helps security become even more a part of
the overall engineering experience.

5.5.2.3 Reducing unnecessary security questions


We looked closely at the SDL process and the underlying security attestation questionnaire that
engineering teams complete for their application. We uncovered many duplicate and
unnecessary questions and saw an opportunity to reduce noise by keeping only the questions
we really need. For example, when we evaluated our initial risk assessment process, it had grown
to an alarming 93 questions just to determine the initial risk score. Those 93 questions spanning
two questionnaires were reduced to two questions in one questionnaire. The end result was a
quicker, more efficient security process that reduces friction even further.

We went a step further by also making the questionnaire intelligent. When an engineering team
opts to run the static and dynamic security tools, certain control questions (those that validate
the implementation of TCPs) are automatically hidden, which makes the questionnaire shorter
and smarter and therefore enhances the user experience.
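The filtering logic itself is straightforward, as the minimal sketch below shows; the question attributes and flags are hypothetical placeholders for whatever the real questionnaire tracks.

# Sketch of an "intelligent" questionnaire: hide control-validation questions when
# the team has opted into automated static/dynamic scanning. Attributes are placeholders.
questions = [
    {"id": "Q1", "text": "Describe how input validation is implemented.",
     "covered_by_tools": True},
    {"id": "Q2", "text": "Does the application store regulated data?",
     "covered_by_tools": False},
]

def visible_questions(all_questions, uses_automated_scans):
    if not uses_automated_scans:
        return all_questions
    return [q for q in all_questions if not q["covered_by_tools"]]

print([q["id"] for q in visible_questions(questions, uses_automated_scans=True)])  # ['Q2']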

5.5.2.4 Reducing unnecessary security and privacy requirements


Similar to reducing security friction from a process point of view, we continuously look for ways
to ensure that our TCP set (security and privacy requirements) is refreshed and streamlined on
an ongoing basis. During the last process enhancement we did in ISRM, we were able to reduce
106 varying technical requirements (TCPs) to 73 requirements.

5.5.2.5 Reducing time to engage security


How easy is it to engage your security team? What is the average time it takes to engage the
security team in your organization? These were the questions we asked when we looked at the
very first step of our process: engagement. We realized that our initial intake process, from
discovering a new application to giving its team SDL guidance, could take several days. It was
obvious that this had to change drastically. We discarded the interim processes and a tool set
that weren’t needed or didn’t add any value, and we reduced the overall time to initiate the SDL
process by 83%. We enabled our engineers to self-provision the SDL process without waiting for
the central team to provision one for them.

There is much more work we need to do as we integrate security more closely with engineering
systems and processes. One of the big initiatives we are driving is evaluating how we can use
Visual Studio Team Services (VSTS) work items to drive security activities more effectively and
efficiently.

6 Future of application security


The scope of application security has to be redefined. Traditionally, application security was seen
as the practice of protecting the application code. In the modern engineering era, this is an
extremely limited view. The same engineers who write the code also have to be concerned with
operational practices, host configuration, and network configuration in addition to the code
itself. In light of this, we are repositioning ourselves from virtual silos of “application security,”
“infrastructure security,” and “network security,” to a single unit focused on “Secure Engineering.”
The details of the strategy behind this shift are outside the scope of this paper, but it’s important
to emphasize that we believe modern engineering practices such as those described in this
paper require a shift toward holistic thinking about security engineering.

7 Lessons learned
We have learned so much from our engineering teams and from our partnerships – and we have
a lot more yet to learn. The following are some of the more important lessons we’ve learned so
far.

7.1 Partner with engineers


It’s critical to partner with your engineers. They need to be stakeholders in the process and not
just consumers of the process. When you evaluate your tools, don’t evaluate just within your
security team but extend it to the engineers. Pay special attention to what is working for them
and what isn’t. They can offer important feedback not only on your process and requirements
but can also identify gaps to share with your tool vendor to make the tool work even better for
your engineers. Engineers need to love the tools even more than security does.

7.2 Focus on the willing


Any security process relies on its participants to be successful. Adopting new methods and
processes is not an easy endeavor, and it’s important to focus on the willing. Listen to your
engineers who are positive and willing to adopt robust strategies for your security program and
focus on their needs. As more engineers engage with the process and become stakeholders,
others will follow.

7.3 Be thoughtful about selecting technology


Because automation is such a significant part of our strategy to secure modern engineering, we
learned key lessons along the way about evaluating and licensing third-party technology that
are worth sharing. First, technology is not a one-size-fits-all solution, nor is it a quick fix in terms
of creating a security process. Second, be mindful about customizing any tool to work for your
engineering experience and don’t merely throw it over the wall to your engineers. Involve them
in your vetting of any new third-party tool and listen to what works for them and what doesn’t
before you make any decisions about onboarding a tool.

7.4 Build your process first, then focus on tools


It’s a common mistake to jump right into technology. It’s tempting to think that a tool can solve
all your security problems. The reality is that you need to have your process built first and then
evaluate tools to support that process with the right metrics. The main reason is that it’s rare
that you’ll need just one piece of technology. If you are not conscientious about how you bring
in technology to support your process, every new technology you introduce could end up
corrupting your process and leave you with a Frankenstein-like end result.

Avoid an overly architected, rigid process. In fact, the nimbler the process the better. For
example, have at least your knowledge base well defined so you know exactly what you are
evaluating and how it will help you streamline your process in driving software security
assurance.

7.5 Integrate your tools into the engineers’ world


If it compiles, it complies. If you start with this as your North Star, you quickly discover that you
must bring security right into the engineers’ world – not the security world! This not only helps
to reduce friction for your engineers, it also enables them to move faster by integrating your
tools into the world in which they live and operate. If they don’t need to exit their IDE to use a
tool, they are more likely to use it, and tool fatigue is less of a roadblock.

This remains an aspiration because there are security issues, such as poor authorization controls
or design flaws, that tools have a very hard time detecting. Nevertheless, it’s an aspiration worth
driving toward – automating the detection of all those security issues that can be automated.

7.6 Build a relationship with your vendor


A tool that meets your needs today may not be able to keep pace with the needs of tomorrow.
We didn’t want to just license a tool for our security needs; we wanted to develop a relationship
with our tool vendor. To this day, we maintain a very open dialogue with HPE Fortify, not only
about how we use their security products, but also about what we would like to see to meet the
ever-changing needs of our engineers. These needs include obvious things such as support for
newer technologies and frameworks. Another advantage of this relationship has been the
opportunity to test features or new technology that HPE Fortify has developed in answer to
other customers’ requirements.

7.7 Be mindful of business impact


Keeping business impact in perspective may seem obvious, but it’s not always easy to practice.
Security solutions exist to enable the business to achieve its goals in a secure and trusted
manner. If security does not affect the business in a positive, meaningful way, then it becomes an
irritating speed bump that runs the risk of being treated as a secondary consideration.

The easiest approach we found is to invite your business stakeholders (the engineering teams)
into the discussion. This discussion should include not just the processes and technology, but
also the development of metrics such as mean-time-to-triage and mean-time-to-fix to measure
success. If the business can align to your metrics and agree that driving to defined targets will
help create an impact, you’ve got a natural win-win situation.

7.8 Keep up with changing technology


Technology is incredibly fickle. Not only is it important to maintain your vendor relationships to
continue to drive your requirements, but it’s important to constantly look at new players coming
into the field who may have more novel solutions. Additionally, once you’ve built your security
processes, take care not to become complacent with respect to changing technology. As
important as a robust security process is, it’s equally important to keep an eye on the horizon of
emerging technology, and to be aware that your security model may need to continually adapt
to new innovations in the industry.

8 Conclusion
The modern engineering world presents ongoing challenges for us as we adapt to a DevOps
model while also navigating the changing security landscape. Our approach has been to enable
our engineers with security solutions that reduce friction as much as possible. We want to
improve the engineering experience for our engineers with respect to software security
assurance. We feel comfortable with our iterative approach to our solution because
this is very much the agile way – to jump in and iterate on the solution over time. We’ve done
the heavy lifting already of creating the building blocks for our foundational components of
knowledge management, automation, metrics, and user experience. Making these strategic
shifts was the difficult part, and now we can build on that momentum.

In addition to the challenges we talked about, we also see big opportunities for advancing
application security in this age of modern engineering. There has never been a better time to
push security automation and develop integrated security services for engineering teams as they
think about operating in a DevOps environment and push for CI/CD scenarios. Similar to how
development, test, and operations roles have merged to shape today’s modern engineer, we feel
that a software security assurance program can yield much better results than before if its
processes are baked seamlessly into engineering workflows. Security teams should leverage this
momentum of automation to expand the scope and coverage of assurance in their organization
in an effective and efficient manner.

9 Appendix A: Resources
9.1.1 SDL
Security Development Lifecycle
https://www.microsoft.com/sdl

Application security in the changing risk landscape


https://f5.com/Portals/1/PDF/security/f5-ponemon-report-2016.pdf

The Security Development Lifecycle: SDL: A Process for Developing Demonstrably More Secure
Software (Developer Best Practices)

https://www.amazon.com/Security-Development-Lifecycle-Developing-Demonstrably/dp/0735622140/ref=sr_1_1?ie=UTF8&qid=1473087879&sr=8-1&keywords=security+development+lifecycle

9.1.2 Modern engineering and DevOps


Creating a culture of modern engineering within Microsoft IT
https://msdn.microsoft.com/en-us/library/mt709101.aspx

DevOps—Application Lifecycle Management – Microsoft


https://www.microsoft.com/en-us/cloud-platform/development-operations

The science of DevOps decoded


http://www.gartner.com/smarterwithgartner/the-science-of-devops-decoded/
