
Technical Due-Diligence Approach

Technical Due-Diligence Procedures for evaluating new systems and software


1.0 Introduction
This document is intended for technical reviewers and evaluators who are
participating in the evaluation of a potential acquisition or purchase of a system
or software product to become part of the INC Research Enterprise Architecture.

Purpose and Scope

This document is divided into two major areas: the product and its architectural
fit. We will primarily focus on the following areas:
1. The scalability, flexibility and maintainability of the released software
2. Metrics that will enable the evaluator to develop simple models to
estimate cost of purchase as well as operational cost for the users of
the software product(s).
3. The capability (or lack thereof) of the product(s) to be deployed in a
production environment.
4. The maturity and capabilities of the software or system.


2.0 The Product

What were the factors that drove the desire to purchase? Did we review the in-house capabilities and perform a tradeoff analysis?
What are the major business drivers for the system or software?
Does the system or software appear to be stable? If less than 6 months old, have we validated stability? What load testing and reliability testing was performed?
Does the complexity of the system make it a potential costly model to support?
(Open support costs spreadsheet)
Does the system or software fit within the standards of our desired architecture? If
not, are we prepared for the additional costs of supporting a new architecture?
Does the system have a single consistent, coherent architecture? Is it documented?
Are the number and types of components reasonable?
Is there a published API? How has the API been used or extended? We should
request a copy of the API and validate its complexity.
What is the minimum level of experience required to effectively leverage the
software or system?
Is there a software architecture document for the product? Is it current?
Interviewer should request a copy.
Are there design guidelines for the product? Is it current? Have the guidelines
been followed throughout the product development lifecycle? Interviewer should
request a copy.
Have technical architecture risks been either mitigated or addressed in
a contingency plan? Are new architecture risks managed, analyzed, and
documented once they are discovered?
Does the system or software appear to be "over or under designed"? If so, why?
E.g. inspect:
a. Interactions between objects
b. Interactions between tasks and processes


c. Interaction between physical nodes or networks

What is the profile for users of the product?

What is the profile for operators of the product?
What is the profile for implementers of the product?
Is the conceptual complexity of the system appropriate given the skill and
experience of its:
a. Users
b. Operators
c. Implementers

Does the system or software have a consistent system-wide security facility? Do
all of the security components work together to safeguard the system?
Is it possible for a malicious user to:
a. Enter the system
b. Destroy critical data
c. Consume all resources

How does the architecture ensure scalability? In what area does contention
generally occur and what is the usage profile under which contention is created?
How does the application perform in multi-CPU environments or other distributed
deployment scenarios? What were the deployment environment goals for the
product at inception? Have the goals been met?
What is the normal usage profile (number of users, class of machine, wait time, etc.)?
What is the resident memory footprint for a normal usage profile?
If the system utilizes redundant processors or nodes to provide fault tolerance or
high availability, is there a strategy for ensuring that no two processors or
nodes can 'think' they are primary, and that the cluster is never left without a primary?
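One common strategy for this guarantee is a majority quorum: two disjoint majorities of the same cluster cannot exist, so at most one node can ever act as primary. A minimal Python sketch of the rule (illustrative only, not the mechanism of any product under review):

```python
def is_primary(votes_received: int, cluster_size: int) -> bool:
    """A node may act as primary only with a strict majority of votes.

    Two candidates cannot both hold a strict majority of the same
    cluster, so split-brain (two primaries) is ruled out by arithmetic.
    """
    return votes_received > cluster_size // 2

# In a 5-node cluster, votes split at most 3/2: only one side can win.
```

An evaluator can ask whether the vendor's election mechanism offers an equivalent mathematical guarantee, or relies on timeouts alone.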


Can disk space be reorganized or recovered while the system is running?
Have the responsibilities and procedures for system configuration been
identified and documented?
Are processes sufficiently independent of one another that they can be
distributed across processors or nodes when required? Is this currently used in a
production environment? If so, what customers are using it?
Have processes that must remain co-located (because of performance and
throughput requirements, or the inter-process communication mechanism, e.g.
semaphores or shared memory) been identified, and has the impact of not being
able to distribute this workload been taken into consideration?
Do estimates of system performance exist (modeled as necessary using a
Workload Analysis Model), and do these indicate that the performance
requirements are not significant risks?
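A Workload Analysis Model can be as simple as a single-queue estimate. The sketch below uses the M/M/1 queueing formula W = 1 / (mu - lambda) as a first-order check on vendor performance claims; the rates in the comment are illustrative assumptions, not product figures:

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue: W = 1 / (mu - lambda).

    arrival_rate: requests per second offered to the system (lambda).
    service_rate: requests per second the system can complete (mu).
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# e.g. 50 req/s offered against 100 req/s capacity -> 0.02 s mean response
```

Comparing such an estimate against the vendor's measured numbers is a quick way to spot performance claims that are not internally consistent.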
How have system performance estimates been validated, especially for
performance-critical requirements?
Can diagnostic routines be run to debug the system? What diagnostic routines are available?
Does the system monitor operational performance itself (e.g. capacity threshold,
critical performance threshold, resource exhaustion)?
Are the actions taken when thresholds are reached defined and documented?
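For reference when reviewing the vendor's answer, threshold-to-action mappings are often expressed as a simple table. A hypothetical Python sketch (metric names, limits, and actions are invented for illustration, not taken from any product):

```python
# Hypothetical threshold table: each metric maps to (limit, defined action).
THRESHOLDS = {
    "disk_used_pct": (90.0, "alert operations and reclaim space"),
    "heap_used_pct": (85.0, "page the on-call engineer"),
    "open_sessions": (5000, "throttle new logins"),
}

def actions_due(metrics):
    """Return (metric, action) for every metric at or over its threshold."""
    due = []
    for name, value in metrics.items():
        limit, action = THRESHOLDS[name]
        if value >= limit:
            due.append((name, action))
    return due
```

If the product cannot produce an equivalent table on request, threshold handling is likely ad hoc.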

Are the policies and procedures for network (LAN, WAN) monitoring and
administration defined?
Is there an event-tracing facility that can be enabled to aid in troubleshooting? Is the
overhead of the facility understood?
Is the application designed to fail gracefully? What approaches have been
utilized to assure a graceful failure?
For each error or exception, does a policy define how the system is restored to a
"normal" state?
Is there a consistently applied policy for handling data or database unavailability,
including whether data can still be entered into the system and stored later?
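One policy a vendor may describe for this is store-and-forward: accept input while the database is down, buffer it locally, and replay it once connectivity returns. A minimal sketch of the idea (an assumption about how such a policy could look, not any product's implementation):

```python
from collections import deque

class StoreAndForward:
    """Accept writes while the database is unavailable; replay them later."""

    def __init__(self, write_fn):
        self.write_fn = write_fn        # persists one record to the database
        self.pending = deque()          # locally buffered records

    def save(self, record, db_up):
        if db_up:
            self.write_fn(record)       # normal path: write through
        else:
            self.pending.append(record) # data still accepted, stored later

    def flush(self):
        """Replay buffered records once the database is reachable again."""
        while self.pending:
            self.write_fn(self.pending.popleft())
```

Follow-up questions for the vendor: where is the buffer persisted, and how are conflicts resolved when replayed data collides with newer writes?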


User Interface and Views into the Product

What are the major components of the user interface layer?
How can the user interface be extended to include new pages and navigation?
What is the level of effort required for this level of change?
What is stored in session?
How are URLs constructed and managed?
How is control flow determined at run-time?
Can the major components of the user interface layer be easily ported to other
frameworks or technologies?
Is there a navigation map for the application? Interviewer should request a copy.

Middle Tier Components

How are business-rules stored and evaluated? How many business rules are
generally contained in a production instance of the application?
Is the subsystem and package partitioning and layering logically consistent?
Do the classes in a subsystem support the services identified for the subsystem?
Are the relationships between key entity classes well defined and easily understood?
Does the name and description of each class clearly reflect the role it plays?
Does the description of each class accurately capture the responsibilities of the class?
Are the key entity classes and their relationships consistent with the business
model (if it exists), domain model (if it exists), requirements, and glossary?

Persistence Management and Database Components

What are the major components of the data layer?


Can the application be easily ported to other database platforms? Why or why not?
Have all persistent classes that use the database for persistence been mapped
to database structures?
Do many-to-many relationships have an intersecting table?
Have primary keys been defined for each table, unless there is a
performance reason not to define one?
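For reviewers less familiar with the terminology: an intersecting (junction) table carries one row per link between the two related tables, and a composite primary key on it addresses both questions at once. A hypothetical schema in Python/SQLite (table names are invented for illustration):

```python
import sqlite3

# The junction table author_book resolves the many-to-many relationship;
# its composite primary key both identifies each row and blocks duplicates.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE author (author_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE book   (book_id   INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE author_book (
    author_id INTEGER NOT NULL REFERENCES author(author_id),
    book_id   INTEGER NOT NULL REFERENCES book(book_id),
    PRIMARY KEY (author_id, book_id)
);
""")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
```

Asking the vendor to show a few such tables in their schema is a fast check on both questions.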
Has the storage and retrieval of data been optimized? If so, what
optimization approaches were used?
What tables have been denormalized?
Where denormalization has been used, how are update, insert, and delete
scenarios managed to ensure the denormalization does not degrade
performance for those operations?
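A common answer to this question is that the redundant copy is maintained inside the same transaction, for example by triggers. A hypothetical SQLite sketch (schema invented for illustration) that keeps a denormalized order_count in step with inserts and deletes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, order_count INTEGER DEFAULT 0);
CREATE TABLE orders   (id INTEGER PRIMARY KEY, customer_id INTEGER);
-- Triggers update the denormalized count on every insert and delete,
-- so reads stay cheap without letting the copy drift out of date.
CREATE TRIGGER order_ins AFTER INSERT ON orders BEGIN
  UPDATE customer SET order_count = order_count + 1 WHERE id = NEW.customer_id;
END;
CREATE TRIGGER order_del AFTER DELETE ON orders BEGIN
  UPDATE customer SET order_count = order_count - 1 WHERE id = OLD.customer_id;
END;
""")
conn.execute("INSERT INTO customer (id) VALUES (1)")
conn.execute("INSERT INTO orders (customer_id) VALUES (1)")
conn.execute("INSERT INTO orders (customer_id) VALUES (1)")
count = conn.execute(
    "SELECT order_count FROM customer WHERE id = 1").fetchone()[0]
```

The trade-off the evaluator should probe: every write now pays the trigger cost, which is exactly the degradation the question asks about.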
Have indexes been defined to optimize access?
Has the impact of index updates been considered in the other table operations?
If so, how?
Does a plan exist for maintaining validation constraints when the data rules change?
Has the database been designed so that modifications to the data model can be
made with minimal impact?
What stored procedures and triggers have been defined?
Does the persistence mechanism use stored procedures and database triggers?

External Interfaces and Integration

What integrations are available out-of-the-box for communication or data sharing
with other enterprise systems?


What custom integrations with enterprise systems (ERPs, etc), if any, have been
performed for customers? What was the level-of-effort for the customizations?
Who were the customers?
Does the system support any level of integration via web services, SOAP,
CORBA or XML-RPC? What integrations have been performed in the past?
Who were the customers?
How does the application support integration with portals? What features or
capabilities are not available in a portal-based environment? What interface is
provided for portal integration?
What is the approach for linking (shallow/deep) into the product? How is security
and authentication defined?
Is there a capability to define transactions or units of work? How are data integrity
and error handling implemented for situations when transactions cross system boundaries?
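When transactions cross system boundaries, one widely used pattern a vendor may cite is compensating transactions (sagas): each step pairs an action with an undo, and a failure rolls back the completed steps in reverse order. A minimal sketch of the pattern (illustrative, not the product's mechanism):

```python
def run_saga(steps):
    """Execute (action, compensate) pairs in order; on any failure, run
    the compensations for completed steps in reverse order so that no
    partial cross-system update survives. Returns True on full success."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()
        return False
    return True
```

A useful follow-up question: does the product use two-phase commit, compensations, or nothing at all when a downstream system fails mid-flow?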
If data is exchanged between systems, is there a policy for how systems
synchronize their views of the data?
Has the product ever been included in a federated search system? If so, what
was the mechanism used to enable federated search? If federated search has
not been done, is it possible? How would it be achieved?

Installation and Upgrades

Has the process for upgrading an existing system without loss of data or
operational capability been defined? Is this tested?
Has the process for converting data used by previous releases been defined? Is this tested?
Is the amount of time and resources required to upgrade or install the product
well-understood and documented?
Can the functionality of the system be activated by major functional area, or even
one use case at a time?
How is data integrity maintained during upgrades or patches? Is there a
consistent approach to managing these scenarios?


Managed Service Provider Characteristics

How does (or could) the system accommodate multiple users in a single instance?
Does the product require customization? If customization is required, what is the
level of effort? What needs to be customized?
What is the impact of multiple companies in a single instance on the database?
Is there functionality available that will monitor and/or profile activity based on
company and users?
Is it possible to configure the product so different SLAs can be accommodated
within a single distributed instance of the product?
What are the performance characteristics of the product on 2, 4, 6, and 8 CPU
machines? How does the application scale as the number of CPUs is increased?
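Amdahl's law gives a useful baseline for interpreting the vendor's scaling numbers: if a fraction p of the workload parallelizes, speedup on n CPUs is bounded by 1 / ((1 - p) + p/n). A small Python helper (the 90% figure in the comment is an illustrative assumption, not a product characteristic):

```python
def amdahl_speedup(parallel_fraction: float, cpus: int) -> float:
    """Upper bound on speedup under Amdahl's law.

    parallel_fraction: share of the workload that runs in parallel (0..1).
    """
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cpus)

# A workload that is 90% parallel tops out near 4.7x on 8 CPUs;
# measured numbers far below this bound suggest contention.
```

Comparing the vendor's 2/4/6/8-CPU measurements against this curve reveals how much of the product is serialized.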

Quality Assurance
At what level are the following testing types implemented for the product:

Function test: Tests focused on validating the target-of-test functions as intended, providing
the required services, methods, or use cases.
This test is implemented and executed against
different targets-of-test, including units,
integrated units, applications, and systems.
Security test: Tests focused on ensuring the
target-of-test data (or systems) are accessible
only to those actors for which they are intended.
This test is implemented and executed on
various targets-of-test.
Volume test: Testing focused on verifying the target-of-test's ability to handle
large amounts of data, either as input and output or resident within the
database. Volume testing includes test strategies such as creating queries that
would return the entire contents of the database, or that would have so many
restrictions that no data is returned, or where the data entry has the maximum
amount of data for each field.

Usability test: Tests that focus on:
a. Human factors
b. Consistency in the user interface
c. Online and context-sensitive help
d. Wizards and agents
e. User documentation
f. Training materials

Integrity test: Tests that focus on assessing the target-of-test's robustness (resistance to failure),
and technical compliance to language, syntax,
and resource usage. This test is implemented
and executed against different targets-of-test,
including units and integrated units.
Structure test: Tests that focus on assessing
the target-of-test's adherence to its design and
formation. Typically, this test is done for Web-enabled applications, ensuring that all links are
connected, appropriate content is displayed, and
no content is orphaned.
Stress test: A type of reliability test that focuses
on evaluating how the system responds under
abnormal conditions. Stresses on the system
could include extreme workloads, insufficient
memory, unavailable services and hardware, or
limited shared resources. These tests are often
performed to gain a better understanding of how
and in what areas the system will break, so that
contingency plans and upgrade maintenance can be planned and budgeted for
well in advance.


Benchmark test: A type of performance test that compares the performance of a
new or unknown target-of-test to a known reference workload and system.
Contention test: Tests focused on validating
the target-of-test's ability to acceptably handle
multiple actor demands on the same resource (data records, memory, and so on).

Load test: A type of performance test used to
validate and assess acceptability of the
operational limits of a system under varying
workloads while the system-under-test remains
constant. In some variants, the workload
remains constant and the configuration of the
system-under-test is varied. Measurements are
usually taken based on the workload throughput
and in-line transaction response time. The
variations in workload usually include emulation
of average and peak workloads that occur within
normal operational tolerances.
Performance profile: A test in which the target-of-test's timing profile is
monitored, including execution flow, data access, and function and system calls,
to identify and address both performance bottlenecks and inefficient processes.

Configuration test: Tests focused on ensuring the target-of-test functions as intended on
different hardware and software configurations.
This test might also be implemented as a
system performance test.
Installation test: Tests focused on ensuring the
target-of-test installs as intended on different
hardware and software configurations, and
under different conditions (such as insufficient
disk space or power interruptions). This test is
implemented and executed against applications and systems.

What are the release criteria that must be met before shipping the product?
Is code-coverage a part of the quality assurance process for the product? If so,
what level of code coverage is required before release?
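A typical answer takes the form of a hard release gate on a coverage percentage. A hypothetical sketch of such a gate (the 80% default is an illustrative assumption, not a stated requirement of any product):

```python
def coverage_gate(covered_lines, total_lines, threshold_pct=80.0):
    """Release gate: True only when line coverage meets the threshold.

    The 80% default is an illustrative assumption for this sketch.
    """
    return 100.0 * covered_lines / total_lines >= threshold_pct
```

The interesting follow-up is not the number itself but whether the gate is enforced automatically or can be waived under schedule pressure.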
Is there traceability between change requests and test cases? If so, how is
traceability achieved?


What documentation ships with the product?
In what formats is documentation available?
Is there a style guide? Interviewer should request a copy if available.

How many lines of code are in the product?
How many classes are in the product?
How many files are in the installation package?
What is the average number of customer (internal or external) bugs fixed per release?
What is the average code-churn per release?
How many user-interface screens are there?

Operational Product Cost

What is the average bandwidth utilization per user?
What is considered a small, medium, and large deployment in terms of data,
configuration and external system integrations?
How many concurrent users can the recommended system requirements support
in an average deployment?
Is there a function or model that captures the average number of concurrent
users for a fixed number of employees or users? If so, what is the function or model?
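Such a model is often just a fixed concurrency ratio applied to the user base. A hypothetical sketch (the 10% ratio is an illustrative rule of thumb, not a vendor figure, and should be replaced with measured data):

```python
def concurrent_users(total_users, concurrency_ratio=0.10):
    """Estimate average concurrent users as a fixed fraction of the
    user base. The 10% default is an illustrative assumption only."""
    return round(total_users * concurrency_ratio)
```

If the vendor's model is more sophisticated (time-of-day curves, seasonal peaks), ask for the inputs it was calibrated against.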


Product Management
Is there a product roadmap? What is the expected product evolution for the next
1, 2 and 5 years?
How stable is the roadmap? How many changes have been made to the
roadmap in the past year?
What is the product's expected lifetime? Do the products and techniques on
which the system is based match its expected life span?
Have similar solutions within the common application domain been considered?
What solutions have been considered? When was the last time a survey of the
market was performed?
What versions are currently in the installed base? How often do clients upgrade?
How often are new versions released?
Is the product release cycle in lock-step with the development cycle? If not, what
is the lag or lead?
What is the sunrise/sunset plan for the product?
Are all licensing requirements satisfied?
Is there a mechanism for rolling customizations back into the product line?
Who is the typical customer for this product? What is the average number of
users per installed customer?
How many customers are considered shelf-ware?
How are new releases, upgrades, patches, and service packs communicated to customers?
How is customer feedback gathered and integrated into the product roadmap?
How are requirements captured and managed?
How are change-requests captured and managed?
How are requirements and change requests communicated to the development team?
How often are customers and other stakeholders polled for new requirements or feedback?


How do you guarantee that you are building the right product?
What technologies and tools are used for requirements and change-request management?
How are architecture and design decisions made? What sorts of activities are involved?
Are code reviews performed in the organization? How are the reviews conducted?
How many releases have been made in the past year?
How many service packs have been made in the past year?
How many hot-fixes or patches have been made in the past year?
How long has the current process been in place?
How are risks managed, tracked and reported?
How are defects managed within the software development team?
How are defects tracked and prioritized?
What are the reporting lines for Quality Assurance? Does the QA group report to
the head of development, or some other manager?
How many managers are there in the QA group?
How many QA Engineers are there in the QA group?
Does the organization have performance engineers?
What are the average tenure and experience levels of the quality assurance staff?
Are consultants or other external resources currently used, or have they been used in the past? If so, how?
How are risks identified, managed and tracked?
Can the QA group hold the release because the quality requirements or release
criteria have not been met?


Is QA testing, functional or otherwise, automated?

How are identified and fixed bugs tested and regressed?
How often are regression tests executed?
Is code coverage performed?
Is memory testing performed?
Is thread-safety tested?
Are installs or releases tested and assigned quality requirements?
How is QA notified that new files/directories/components have been added to the product?
How are defects classified and triaged?
How are patches/hot fixes tested and integrated into the regression plan?
How many releases/configurations are tested per year?
Is performance testing a part of the QA process? How often are performance
tests run?
How are performance tests created, executed and validated?
How are test cases and plans managed and organized?

What performance testing tools/frameworks are used?
What functional test tools are used?
What memory/profiling tools are used?
What hardware/software environments are available for testing the product?
How are customers updated on the status of their change request or issue?
How are change requests captured from users?


How are issues communicated to the user population?

How many help desk engineers are on call at any one time?
What is the help desk call rate during business hours?
What is the help desk defect generation rate?