1.0 Introduction
Audience
This document is intended for technical reviewers and evaluators who are
participating in the evaluation of a potential acquisition or purchase of a system
or software product that would become part of the INC Research Enterprise Architecture.
Security
Does the system or software have a consistent system-wide security facility? Do
all of the security components work together to safeguard the system?
Is it possible for a malicious user to:
a.
b.
c.
How does the architecture ensure scalability? In what area does contention
generally occur and what is the usage profile under which contention is created?
How does the application perform in multi-CPU environments or other distributed
deployment scenarios? What were the deployment environment goals for the
product at inception? Have the goals been met?
What is the normal usage profile (number of users, class of machine, wait time,
etc)?
What is the resident memory footprint for a normal usage profile?
If the system utilizes redundant processors or nodes to provide fault tolerance or
high availability, is there a strategy for ensuring that no two processors or
nodes can 'think' that they are primary, and that the system is never left with no
primary at all?
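When probing the split-brain safeguard, it may help to have a concrete mechanism in mind. Below is a minimal sketch of one common approach, a fencing token (a monotonically increasing epoch number granted by a coordinator); all class and variable names are hypothetical and do not describe any particular product.

```python
# Hypothetical sketch of a fencing-token safeguard against split-brain:
# writes are accepted only from the node holding the latest epoch, so a
# stale "primary" is rejected automatically.

class Coordinator:
    def __init__(self):
        self.epoch = 0
        self.primary = None

    def elect(self, node_id):
        """Grant primacy with a new, higher epoch; older grants go stale."""
        self.epoch += 1
        self.primary = node_id
        return self.epoch

class Store:
    def __init__(self, coordinator):
        self.coordinator = coordinator
        self.data = {}

    def write(self, node_id, epoch, key, value):
        # Reject writes from any node whose epoch is not the current one.
        if epoch != self.coordinator.epoch or node_id != self.coordinator.primary:
            raise PermissionError(f"stale primary {node_id} (epoch {epoch})")
        self.data[key] = value

coord = Coordinator()
store = Store(coord)
e1 = coord.elect("node-A")            # node-A becomes primary (epoch 1)
store.write("node-A", e1, "k", 1)     # accepted
e2 = coord.elect("node-B")            # failover: node-B primary (epoch 2)
try:
    store.write("node-A", e1, "k", 2)  # node-A still believes it is primary
except PermissionError:
    pass                               # fenced off: no split-brain write
store.write("node-B", e2, "k", 3)
```

A vendor's answer might describe leases, quorum, or hardware fencing instead; the point is that some such mechanism should exist.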
Can disk space be reorganized or recovered while the system is running?
Have the responsibilities and procedures for system configuration been
identified and documented?
Are processes sufficiently independent of one another that they can be
distributed across processors or nodes when required? Is this currently used in a
production environment? If so, what customers are using it?
Have processes which must remain co-located (because of performance and
throughput requirements, or the inter-process communication mechanism, e.g.
semaphores or shared memory) been identified, and has the impact of not being
able to distribute this workload been taken into consideration?
Do estimates of system performance exist (modeled as necessary using a
Workload Analysis Model), and do these indicate that the performance
requirements are not significant risks?
How have system performance estimates been validated, especially for
performance-critical requirements?
Can diagnostic routines be run to aid debugging? What diagnostic routines are
available?
Does the system monitor operational performance itself (e.g. capacity threshold,
critical performance threshold, resource exhaustion)?
Are the actions taken when thresholds are reached defined and tested?
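As a reference point for these monitoring questions, here is a minimal sketch of threshold checking with a defined breach action. The metric names and limits are invented for illustration, not taken from any product.

```python
# Illustrative threshold monitor: compare metrics against configured
# limits and invoke a defined action for each breach. Assumed example
# metrics and limits (not a real product's configuration).

THRESHOLDS = {"disk_pct": 90, "memory_pct": 85}

def check_thresholds(metrics, thresholds, on_breach):
    """Return the list of breached metric names, invoking on_breach for each."""
    breached = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value >= limit:
            breached.append(name)
            on_breach(name, value, limit)
    return breached

alerts = []
breached = check_thresholds(
    {"disk_pct": 95, "memory_pct": 40},
    THRESHOLDS,
    lambda n, v, l: alerts.append(f"{n}={v} >= {l}"),
)
```

An evaluator would expect the product's answer to name both the thresholds it watches and the concrete actions (alert, throttle, shed load) taken on breach.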
Are the policies and procedures for network (LAN, WAN) monitoring and
administration defined?
Is there an event-tracing facility that can be enabled to aid in troubleshooting? Is
the overhead of the facility understood?
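One low-overhead pattern an answer to this question might describe, sketched here with Python's standard logging module; the logger name and function are illustrative.

```python
import logging

# Sketch of a switchable trace facility using the standard logging module.
# The isEnabledFor() guard keeps overhead near zero when tracing is off.

trace = logging.getLogger("app.trace")
trace.addHandler(logging.NullHandler())
trace.setLevel(logging.INFO)          # tracing disabled (DEBUG suppressed)

def handle_request(payload):
    if trace.isEnabledFor(logging.DEBUG):
        # Expensive formatting happens only when tracing is enabled.
        trace.debug("request payload: %r", payload)
    return len(payload)

handle_request("abc")                 # no tracing overhead on the hot path
trace.setLevel(logging.DEBUG)         # enable tracing for troubleshooting
handle_request("abcd")                # now traced
```

The key point for an evaluation is whether the vendor can quantify the cost of leaving the facility enabled in production.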
Is the application designed to fail gracefully? What approaches have been
utilized to assure a graceful failure?
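A toy illustration of one common graceful-failure approach, degrading to cached or default data rather than propagating the error; all function and variable names here are hypothetical, not the product's API.

```python
# Graceful degradation sketch: try the live backend, fall back to a
# cache, then to a safe default. Names are invented for illustration.

def fetch_recommendations(user_id, backend, cache, default=()):
    """Try the live backend; fall back to cache, then to a safe default."""
    try:
        result = backend(user_id)
        cache[user_id] = result       # refresh the fallback copy on success
        return result
    except Exception:
        # Stale or empty data beats a hard failure for this feature.
        return cache.get(user_id, list(default))

cache = {}

def flaky_backend(uid):
    raise TimeoutError("backend unavailable")

result = fetch_recommendations("u1", flaky_backend, cache)
```

Other answers an evaluator might hear include circuit breakers, read-only modes, and queued writes; the question is whether any such strategy is deliberate and tested.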
For each error or exception, does a policy define how the system is restored to a
"normal" state?
Is there a consistently applied policy for handling data or database unavailability,
including whether data can still be entered into the system and stored later?
Can the application be easily ported to other database platforms? Why or why
not?
Have all persistent classes that use the database for persistency been mapped
to database structures?
Do many-to-many relationships have an intersecting table?
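For reference, the intersection-table pattern this question asks about can be sketched with an in-memory SQLite database; the authors/books schema is invented purely for illustration.

```python
import sqlite3

# Intersection (junction) table for a many-to-many relationship:
# authors <-> books, with a composite primary key on the pair.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE book   (id INTEGER PRIMARY KEY, title TEXT NOT NULL);
    -- One row per (author, book) pair, with foreign keys to both sides.
    CREATE TABLE author_book (
        author_id INTEGER NOT NULL REFERENCES author(id),
        book_id   INTEGER NOT NULL REFERENCES book(id),
        PRIMARY KEY (author_id, book_id)
    );
""")
conn.execute("INSERT INTO author VALUES (1, 'Gamma'), (2, 'Helm')")
conn.execute("INSERT INTO book VALUES (10, 'Design Patterns')")
conn.executemany("INSERT INTO author_book VALUES (?, ?)",
                 [(1, 10), (2, 10)])
authors = conn.execute("""
    SELECT a.name FROM author a
    JOIN author_book ab ON ab.author_id = a.id
    WHERE ab.book_id = 10 ORDER BY a.name
""").fetchall()
```

A "no" answer here (e.g. comma-separated ID lists in a column) is usually a red flag for data integrity and query performance.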
Have primary keys been defined for each table, unless there is a performance
reason not to define a primary key?
Has the storage and retrieval of data been optimized? If yes, what optimization
approaches were used?
What tables have been denormalized (where necessary) to improve
performance?
Where denormalization has been used, how are update, insert and delete
scenarios managed to ensure the denormalization does not degrade
performance for those operations?
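A minimal sketch of the update/insert/delete management this question asks about, using a hypothetical denormalized counter (an order's cached line-item count) kept in step with the normalized detail rows.

```python
# Denormalization maintenance sketch: every path that mutates the detail
# rows also updates the denormalized copy. All names are illustrative.

orders = {"o1": {"item_count": 0}}   # denormalized counter lives here
order_items = []                      # normalized detail rows (order, sku)

def add_item(order_id, sku):
    order_items.append((order_id, sku))
    orders[order_id]["item_count"] += 1   # keep the denormalized copy in step

def remove_item(order_id, sku):
    order_items.remove((order_id, sku))
    orders[order_id]["item_count"] -= 1

add_item("o1", "sku-1")
add_item("o1", "sku-2")
remove_item("o1", "sku-1")
```

In a real product the same discipline is typically enforced centrally, via triggers, a data-access layer, or batch reconciliation, rather than scattered across call sites.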
Have indexes been defined to optimize access?
Has the impact of index updates on other table operations been considered?
If yes, how so?
Does a plan exist for maintaining validation constraints when the data rules
change?
Has the database been designed so that modifications to the data model can be
made with minimal impact?
What stored procedures and triggers have been defined?
Does the persistence mechanism use stored procedures and database triggers
consistently?
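As background for the trigger questions, a small illustration of a database trigger doing persistence-related work, using an in-memory SQLite database; the account/audit schema is invented for illustration.

```python
import sqlite3

# Illustrative trigger: the database itself maintains an audit row
# whenever an account balance changes, independent of application code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE audit (account_id INTEGER, old_balance INTEGER,
                        new_balance INTEGER);
    CREATE TRIGGER account_audit AFTER UPDATE OF balance ON account
    BEGIN
        INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
""")
conn.execute("INSERT INTO account VALUES (1, 100)")
conn.execute("UPDATE account SET balance = 150 WHERE id = 1")
rows = conn.execute("SELECT * FROM audit").fetchall()
```

Heavy reliance on triggers and stored procedures also bears on the earlier portability question, since they tend to be written in a vendor-specific dialect.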
What custom integrations with enterprise systems (ERPs, etc), if any, have been
performed for customers? What was the level-of-effort for the customizations?
Who were the customers?
Does the system support any level of integration via web services, SOAP,
CORBA or XML-RPC? What integrations have been performed in the past?
Who were the customers?
How does the application support integration with portals? What features or
capabilities are not available in a portal-based environment? What interface is
provided for portal integration?
What is the approach for linking (shallow/deep) into the product? How are
security and authentication handled for such links?
Is there a capability to define transactions or units of work? How are data
integrity and error handling implemented for situations where transactions cross
system boundaries?
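One pattern an answer might cite for cross-boundary integrity is compensation (a saga): run each step, record an undo action, and unwind completed steps on failure. A sketch with illustrative names only:

```python
# Saga/compensation sketch: each step pairs an action with a compensating
# action; on failure, completed steps are undone in reverse order.

def run_saga(steps):
    """steps: list of (action, compensation) pairs. Returns True on success."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
        return True
    except Exception:
        for compensation in reversed(done):
            compensation()   # undo completed steps in reverse order
        return False

log = []

def failing_step():
    raise RuntimeError("remote system down")

steps = [
    (lambda: log.append("debit"), lambda: log.append("undo-debit")),
    (failing_step,                lambda: log.append("undo-credit")),
]
ok = run_saga(steps)
```

Two-phase commit is the other classic answer; which one a vendor uses, and how partial failures are surfaced, is worth probing.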
If data is exchanged between systems, is there a policy for how systems
synchronize their views of the data?
Has the product ever been included in a federated search system? If so, what
was the mechanism used to enable federated search? If federated search has
not been done, is it possible? How would it be achieved?
Quality Assurance
At what level are the following testing types implemented for the product:
Functionality
Human factors
Esthetics
User documentation
Reliability
Training materials
Performance
What are the release criteria that must be met before shipping the product?
Is code-coverage a part of the quality assurance process for the product? If so,
what level of code coverage is required before release?
Is there traceability between change requests and test cases? If so, how is
traceability achieved?
Documentation
What documentation ships with the product?
In what formats is documentation available?
Is there a style guide? The interviewer should request a copy if one is available.
Maintenance
How many lines of code are in the product?
How many classes are in the product?
How many files are in the installation package?
What is the average number of customer (internal or external) bugs fixed per
release?
What is the average code-churn per release?
How many user-interface screens are there?
Product Management
Is there a product roadmap? What is the expected product evolution for the next
1, 2 and 5 years?
How stable is the roadmap? How many changes have been made to the
roadmap in the past year?
What is the product's expected lifetime? Do the products and techniques on
which the system is based match its expected life span?
Have similar solutions within the common application domain been
considered? What solutions have been considered? When was the last time a
survey of the market was performed?
What versions are currently in the installed base? How often do clients upgrade?
How often are new versions released?
Is the product release cycle in lock-step with the development cycle? If not, what
is the lag or lead?
What is the sunrise/sunset plan for the product?
Are all licensing requirements satisfied?
Is there a mechanism for rolling customizations back into the product line?
Who is the typical customer for this product? What is the average number of
users per installed customer?
How many customers are considered shelf-ware?
How are new releases, upgrades, patches and service packs communicated to
customers?
How is customer feedback gathered and integrated into the product roadmap?
How are requirements captured and managed?
How are change-requests captured and managed?
How are requirements and change requests communicated to the development
group?
How often are customers and other stakeholders polled for new requirements or
enhancements?
How do you guarantee that you are building the right product?
What technologies and tools are used for requirements and change request
management?
How are architecture and design decisions made? What sorts of activities are
involved?
Are code reviews performed in the organization? How are the reviews
organized?
How many releases have been made in the past year?
How many service packs have been made in the past year?
How many hot-fixes or patches have been made in the past year?
How long has the current process been in place?
How are risks managed, tracked and reported?
How are defects managed within the software development team?
How are defects tracked and prioritized?
What are the reporting lines for Quality Assurance? Does the QA group report to
the head of development, or some other manager?
How many managers are there in the QA group?
How many QA Engineers are there in the QA group?
Does the organization have performance engineers?
What are the average tenure and experience levels of the quality assurance
staff?
Are consultants or other external resources currently used, or have they been
used in the past? If so, who?
How are risks identified, managed and tracked?
Can the QA group hold the release because the quality requirements or release
criteria have not been met?
Technology
What performance testing tools/frameworks are used?
What functional test tools are used?
What memory/profiling tools are used?
What hardware/software environments are available for testing the product?
How are customers updated on the status of their change request or issue?
How are change requests captured from users?