If this occurs, the most common solution is to migrate the source data to the new platform into
data structures constructed identically to those of the source system. Doing so allows
you to shut down the old system and bring up the new one with confidence and without losing
historical data. The cost of developing a set of reports to access the historical data on the new
platform tends to be far lower than the cost of migration in nearly every scenario.
The milestone of the Strategy phase is the Data Migration Strategy Document, which outlines
the intentions of the overall migration effort. In this document, we outline the reasons for our
conclusions about whether data migration is worthwhile. The data quality research performed
in this phase is still at a very high level, and in no way suggests that the team has gained a
thorough understanding of the specific data cleansing issues they will face later in the project.
Legacy report audits: A complete set of legacy reports from the old OLTP system is
reviewed. A comprehensive list of field usages should be compiled from these
reports.
User feedback sessions: These sessions should be scheduled for topics such as
data model audits, reviews of interview notes, and the results of the legacy data
analysis and legacy report audits.
The Complete Data Migration Methodology - Design
The Design phase is where the bulk of the actual
mapping of legacy data elements to columns takes
place. The physical data structures have been frozen,
offering an ideal starting point for migration testing.
Note that data migration is iterative; it does not
happen in a single sitting.
The mapping portion of a data migration project can
be expected to span the Design phase through
Implementation. The most important resources for
validating the migration are the users of the new
system. Unfortunately, they will be unable to grasp
the comprehensiveness of the migration until they
view the data through the new applications. We have
concluded from experience that developing the new
reports prior to new forms allows for more thorough
validation of the migration earlier on in the project
lifespan. For instance, if some sort of calculation was
performed incorrectly by a migration script, reports
will reflect this. A form typically displays a single
master record at a time, whereas reports display
several records per page, making them a better
means of displaying the results of migration testing.
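The report-based validation described above can be reduced to a simple reconciliation check: recompute an aggregate over the source rows and compare it with the same aggregate over the migrated rows. The sketch below is only illustrative; the row layouts and column names are hypothetical.

```python
# Hypothetical sample data: source rows use "amount", migrated rows use "total".
source_rows = [{"amount": 100.0}, {"amount": 250.5}, {"amount": 49.5}]
migrated_rows = [{"total": 100.0}, {"total": 250.5}, {"total": 49.5}]

def reconcile(source, migrated):
    """Pass only when row counts match and the totals agree within tolerance."""
    src_total = sum(r["amount"] for r in source)
    tgt_total = sum(r["total"] for r in migrated)
    return len(source) == len(migrated) and abs(src_total - tgt_total) < 0.01

print(reconcile(source_rows, migrated_rows))  # True when counts and totals agree
```

A check like this catches the miscalculated-migration-script case the text mentions: a wrong transformation shifts the aggregate, and the comparison fails.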
Data mapping must be performed against the
physical data model; with the physical data structures
in place, you can begin the mapping process.
Mapping is generally conducted by a team of at least
three people per core business area (e.g., Purchasing,
Inventory, Accounts Receivable). Of these three
people, the first should be a business analyst,
generally an end user possessing intimate knowledge
of the historical data to be migrated. The second
team member is usually a systems analyst with
knowledge of both the source and target systems.
The third is a programmer/analyst who
performs data research and develops migration
routines based upon the mappings defined
cooperatively by the business analyst and the
systems analyst.
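The mappings such a team produces can be recorded in a simple machine-readable form that the programmer/analyst's migration routines consume. The sketch below assumes hypothetical legacy and target column names; it shows one possible way to record and apply a field mapping, not a prescribed format.

```python
# Hypothetical legacy-to-target field mapping, as a team might record it.
LEGACY_TO_TARGET = {
    "CUST_NO":   "customer_id",
    "CUST_NAME": "customer_name",
    "BAL_DUE":   "balance_due",
}

def map_record(legacy_row: dict) -> dict:
    """Translate one legacy record into the target schema,
    dropping fields that have no mapping."""
    return {
        target: legacy_row[source]
        for source, target in LEGACY_TO_TARGET.items()
        if source in legacy_row
    }

row = {"CUST_NO": 1042, "CUST_NAME": "Acme Corp", "BAL_DUE": 99.50, "UNUSED": "x"}
print(map_record(row))
```

Keeping the mapping as data rather than hard-coded logic makes the iterative re-runs the methodology calls for cheap: the team revises the table, not the migration code.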
Access control models used by current systems tend to fall into one of two classes: those
based on capabilities and those based on access control lists (ACLs). In a capability-based
model, holding an unforgeable reference or capability to an object provides access to the
object (roughly analogous to how possession of your house key grants you access to your
house); access is conveyed to another party by transmitting such a capability over a secure
channel. In an ACL-based model, a subject's access to an object depends on whether its
identity is on a list associated with the object (roughly analogous to how a bouncer at a
private party would check your ID to see if your name is on the guest list); access is conveyed
by editing the list.
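The two models can be contrasted in a few lines of code. This is a minimal sketch, not a real access-control implementation; the object names and subjects are hypothetical, and a random token merely stands in for a truly unforgeable reference.

```python
import secrets

# ACL model: the object carries a list of permitted identities.
acl = {"payroll.xlsx": {"alice", "bob"}}

def acl_allows(subject: str, obj: str) -> bool:
    """Access depends on the subject's identity being on the object's list."""
    return subject in acl.get(obj, set())

# Capability model: possession of the token itself grants access.
capabilities = {}

def mint_capability(obj: str) -> str:
    """Issue a token for an object; a random token stands in for
    an unforgeable reference."""
    token = secrets.token_hex(16)
    capabilities[token] = obj
    return token

def capability_allows(token: str, obj: str) -> bool:
    """No identity check: whoever holds the token gets access."""
    return capabilities.get(token) == obj

cap = mint_capability("payroll.xlsx")
print(acl_allows("alice", "payroll.xlsx"), capability_allows(cap, "payroll.xlsx"))
```

Note how access is conveyed differently, as the text describes: in the ACL model you edit the list; in the capability model you hand the token to another party.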
Security testing is a process to determine that an information system protects data and
maintains functionality as intended.
The six basic security concepts that need to be covered by security testing are:
1. Confidentiality.
2. Integrity.
3. Authentication.
4. Authorization.
5. Availability.
6. Non-repudiation.
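Two of these concepts are frequently conflated in test plans, so a small sketch may help keep them apart: authentication asks "who are you?", authorization asks "what may you do?". The user store and permission set below are hypothetical stand-ins.

```python
# Hypothetical stores, separating the two concerns a security test must cover.
USERS = {"alice": "s3cret"}                 # authentication data (who you are)
PERMISSIONS = {"alice": {"read_report"}}    # authorization data (what you may do)

def authenticate(user: str, password: str) -> bool:
    """Verify the claimed identity."""
    return USERS.get(user) == password

def authorize(user: str, action: str) -> bool:
    """Check whether an already-authenticated user may perform an action."""
    return action in PERMISSIONS.get(user, set())

print(authenticate("alice", "s3cret"))   # identity verified
print(authorize("alice", "delete_db"))   # authenticated, but not permitted
```

A security test suite needs separate cases for both: a user who authenticates successfully must still be denied actions outside their permission set.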
Shorter work cycle: The time between specifying a requirement in detail and
validating that requirement is now on the order of minutes, not months or years, due
to the adoption of test-driven development (TDD) approaches, greater collaboration,
and less reliance on temporary documentation.
Greater flexibility is required of testers: Gone are the days of the development team
handing off a "complete specification" which the testers can test against. The
requirements evolve throughout the project. Ideally, acceptance-level "story tests" are
written before the production code which fulfills them, implying that the tests become
detailed requirement specifications.
Greater discipline is required of IT: It's very easy to say that you're going to work
closely with your stakeholders, respect their decisions, produce potentially shippable
software on a regular basis, and write a single test before writing just enough
production code to fulfill that test (and so on), but a lot harder to actually do.
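The test-first cycle described above can be sketched in a few lines: the test is written before the production code that fulfills it, and only just enough code is written to make the test pass. The function name and discount behavior are purely illustrative.

```python
# Step 1: write the test first. It specifies the requirement in executable form.
def test_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(50.0, 0) == 50.0

# Step 2: write just enough production code to fulfill the test.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return price * (1 - percent / 100)

# Step 3: run the test; the requirement is validated in minutes, not months.
test_discount()
print("tests passed")
```

This is what makes the shortened work cycle possible: the test doubles as a detailed, always-current requirement specification.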
The Systems Development Life Cycle (SDLC) is a process used by a systems
analyst to develop an information system, including requirements, validation, training, and
user (stakeholder) ownership. Any SDLC should result in a high quality system that meets or
exceeds customer expectations, reaches completion within time and cost estimates, works
effectively and efficiently in the current and planned Information Technology infrastructure,
and is inexpensive to maintain and cost-effective to enhance.
Computer systems are complex and often (especially with the recent rise of Service-Oriented
Architecture) link multiple traditional systems potentially supplied by different software
vendors. To manage this level of complexity, a number of SDLC models have been created:
"waterfall"; "fountain"; "spiral"; "build and fix"; "rapid prototyping"; "incremental"; and
"synchronize and stabilize".
SDLC models can be described along a spectrum of agile to iterative to sequential. Agile
methodologies, such as XP and Scrum, focus on light-weight processes which allow for rapid
changes along the development cycle. Iterative methodologies, such as Rational Unified
Process and Dynamic Systems Development Method, focus on limited project scopes and
expanding or improving products by multiple iterations. Sequential or big-design-upfront
(BDUF) models, such as Waterfall, focus on complete and correct planning to guide large
projects and manage risks toward successful and predictable results. Other models, such as Anamorphic
Development, tend to focus on a form of development that is guided by project scope and
adaptive iterations of feature development.
Product lifecycle management (PLM)
Product lifecycle management (PLM) is the process of managing the entire lifecycle of a
product from its conception, through design and manufacture, to service and disposal. PLM
integrates people, data, processes and business systems and provides a product information
backbone for companies and their extended enterprise.
'Product lifecycle management' (PLM) should be distinguished from 'Product life cycle
management (marketing)' (PLCM). PLM describes the engineering aspect of a product, from
managing descriptions and properties of a product through its development and useful life;
whereas, PLCM refers to the commercial management of life of a product in the business
market with respect to costs and sales measures.
Product lifecycle management is one of the four cornerstones of a corporation's information
technology structure.
All companies need to manage communications and information with their customers
(CRM, Customer Relationship Management), their suppliers (SCM, Supply Chain
Management), their resources within the enterprise (ERP, Enterprise Resource
Planning), and their planning (SDLC, Systems Development Life Cycle).
In addition, manufacturing engineering companies must also develop, describe, manage and
communicate information about their products.
Benefits:
Increases productivity, as the team shares best practices for development and deployment,
and developers need to focus only on current business requirements.
Improves quality, so the final application meets the needs and expectations of users.
Increases flexibility by reducing the time it takes to build and adapt applications that support
new business initiatives.
The most common usage scenarios of the Reverse Semantic Traceability (RST) method are:
Validating model changes for a new requirement: given the original and changed
versions of a model, quality engineers restore the textual description of the
requirement; the original and restored descriptions are then compared.
Validating a bug fix: given the original and modified source code, quality engineers
restore a textual description of the bug that was fixed; the original and restored
descriptions are compared.
Integrating a new software engineer into a team: the new team member is assigned
to perform Reverse Semantic Traceability on the key artifacts from current
projects.
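The comparison step common to these scenarios can be approximated mechanically. The sketch below uses a plain textual similarity ratio as a stand-in for the (usually human) comparison of the original and restored descriptions; the requirement text is a made-up example, and a real RST review would judge semantic rather than textual closeness.

```python
from difflib import SequenceMatcher

# Hypothetical original requirement and the description restored by the reviewer.
original = "Orders over 100 euros receive free shipping."
restored = "Orders above 100 euros get free shipping."

# A crude proxy for the comparison: a ratio near 1.0 suggests the artifact
# still carries the requirement's meaning; a low ratio flags a traceability gap.
ratio = SequenceMatcher(None, original, restored).ratio()
print(round(ratio, 2))
```

Even this rough measure illustrates the method's point: if the restored description diverges sharply from the original, the change to the artifact likely lost or distorted intent.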
Breadth Testing:
A test suite that exercises the full functionality of
a product but does not test features in detail.
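A breadth suite can be as simple as one shallow check per feature area. In this sketch the feature functions are hypothetical stand-ins for real product features; the point is the shape of the suite, wide but not deep.

```python
# Hypothetical feature entry points standing in for a real product's functionality.
def create_order():  return "created"
def ship_order():    return "shipped"
def invoice_order(): return "invoiced"

def breadth_suite():
    """Touch every major feature once, without testing any of them in depth."""
    assert create_order() == "created"
    assert ship_order() == "shipped"
    assert invoice_order() == "invoiced"
    return "ok"

print(breadth_suite())
```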