
The process model is a core diagram in structured analysis and design. Also called a data flow
diagram (DFD), it shows the flow of information through a system. Each process transforms
inputs into outputs.

The model generally starts with a context diagram showing the system as a single process
connected to external entities outside of the system boundary. This process explodes to a lower
level DFD that divides the system into smaller parts and balances the flow of information
between parent and child diagrams. Many diagram levels may be needed to express a complex
system.

Primitive processes, those that don't explode to a child diagram, are usually described in a
connected textual specification. This text is sometimes referred to as a mini-spec. It textually
describes how the outputs are generated from the inputs.
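To make the idea concrete, here is a small, hypothetical Python sketch of a mini-spec for an invented primitive process, "Compute Invoice Total"; the structured-English description sits in the docstring and the code simply mirrors it. The process name, fields, and figures are illustrative only and do not come from the text above.

    def compute_invoice_total(line_items, tax_rate):
        """Mini-spec (structured English) for the primitive process:

        For each line item, multiply quantity by unit price.
        Sum the extended prices to get the subtotal.
        Apply the tax rate to the subtotal to get the tax amount.
        Output the invoice total = subtotal + tax amount.
        """
        subtotal = sum(item["quantity"] * item["unit_price"] for item in line_items)
        tax = subtotal * tax_rate
        return subtotal + tax

    # Example: two line items and a 10% tax rate.
    print(compute_invoice_total(
        [{"quantity": 2, "unit_price": 5.00}, {"quantity": 1, "unit_price": 3.50}],
        0.10,
    ))  # 14.85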

When drawing data flow diagrams, the designer adds an entry for each data flow or store into a
data dictionary. The data dictionary integrates the stack of diagrams into a cohesive model by
defining all the names and data composition.

The balancing process ensures that data is conserved between diagram levels. If flow A enters a
parent process, its child diagram should have flow A coming into that diagram. Likewise flow B
leaving the child diagram should balance with flow B leaving the parent process. Data
decomposition can occur within the data dictionary so flow A into the parent process will
balance if flows X and Y enter the child diagram and A = X + Y in the data dictionary.
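As a rough illustration of the balancing idea, the sketch below checks that a parent flow and its child flows carry the same primitive data, assuming a data dictionary that records compositions such as A = X + Y. The dictionary contents and the expand/balanced helpers are invented for this example and are not part of any standard CASE tool.

    # Hypothetical data dictionary: composite flows map to their component flows.
    data_dictionary = {
        "A": ["X", "Y"],  # A = X + Y
    }

    def expand(flow):
        """Recursively expand a composite flow into its primitive components."""
        parts = data_dictionary.get(flow)
        if parts is None:
            return {flow}
        result = set()
        for part in parts:
            result |= expand(part)
        return result

    def balanced(parent_flows, child_flows):
        """Parent and child diagrams balance if they carry the same primitive data."""
        parent = set().union(*(expand(f) for f in parent_flows))
        child = set().union(*(expand(f) for f in child_flows))
        return parent == child

    # Flow A into the parent process balances with flows X and Y into the child diagram.
    print(balanced(["A"], ["X", "Y"]))  # True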


A data model in software engineering is an abstract model that describes how data are
represented and accessed. Data models formally define data elements and relationships among
data elements for a domain of interest. According to Hoberman (2009), "A data model is a
wayfinding tool for both business and IT professionals, which uses a set of symbols and text to
precisely explain a subset of real information to improve communication within the
organization and thereby lead to a more flexible and stable application environment."[2]

A data model explicitly determines the structure of data or structured data. Typical applications
of data models include database models, design of information systems, and enabling exchange
of data. Usually data models are specified in a data modeling language.

Communication and precision are the two key benefits that make a data model important to
applications that use and exchange data. A data model is the medium through which project team
members from different backgrounds and with different levels of experience communicate
with one another. Precision means that the terms and rules of a data model can be interpreted
in only one way and are not ambiguous.
A data model can sometimes be referred to as a data structure, especially in the context of
programming languages. Data models are often complemented by function models, especially
in the context of enterprise models.
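As a small, invented example of how a data model shades into a data structure, the following Python sketch defines two entities and one relationship for a hypothetical order-processing domain; the entity names and fields are assumptions made purely for illustration.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Customer:
        customer_id: int
        name: str
        email: str

    @dataclass
    class Order:
        order_id: int
        customer_id: int      # relationship: each order belongs to one customer
        order_date: date
        total: float

    # The model defines the structure; instances hold the actual data.
    alice = Customer(customer_id=1, name="Alice", email="alice@example.com")
    order = Order(order_id=100, customer_id=alice.customer_id,
                  order_date=date(2024, 1, 15), total=59.90)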








Process Modeling


Process modeling involves graphically representing the functions, or processes, that capture,
manipulate, store, and distribute data between a system and its environment and among
components within a system. The data flow diagram is a traditional process modeling technique
for structured analysis and design, and it has a significant impact on the quality of the systems
development process.

There are three sub-phases of the analysis phase of the systems development life cycle:

1) Requirements determination

2) Requirements structuring

3) Generating alternative systems and selecting the best one

In the first sub-phase, requirements determination, information gathering takes place. When the
analysis team enters the second sub-phase, requirements structuring, that information is
organized into a meaningful representation of the existing information system. In this phase,
the requirements of the replacement system are also identified.

The process model is one of three major complementary views of an information system. When
combined, the process, logic and timing, and data models provide a thorough specification of an
information system and a basis for the automatic generation of many working information
system components.

    

Deliverables for Process Modeling


1. Context data flow diagram (DFD)

   A context diagram shows the scope of the system, representing which elements are inside
   and which are outside the system.

2. DFDs of current physical system (adequate detail only)

   Data flow diagrams of the current physical system specify which people and technologies
   are used in which processes to move and transform data, accepting inputs and producing
   outputs.

3. DFDs of current logical system

   Technology-independent, or logical, data flow diagrams of the current system show what
   data processing functions are performed by the current information system.

4. DFDs of new logical system

   The data movement, or flow, structure, and functional requirements of the new system are
   represented in logical data flow diagrams.

5. Thorough descriptions of each DFD component

   Entries for all of the objects included in all diagrams are included in the project
   dictionary.

The deliverables of process modeling simply state what you learned during requirements
determination; in later steps of the systems development life cycle, the project team members
will make decisions on exactly how the new system will deliver these requirements in specific
manual and automated functions.

  
Data Flow Diagramming

Data flow diagrams are versatile diagramming tools because they involve only four different
symbols. They differ from flow charts in that data flow diagrams are more useful for depicting
purely logical information flows; flow charts, conversely, depict the details of physical systems.
Data flow diagrams are also easier to use than flow charts.
The four symbols are:

- Process: the work or actions performed on data so that they are transformed, stored, or
  distributed.

- Data store: data at rest, which may take the form of many different physical
  representations.

- Source/sink: the origin and/or destination of data, sometimes referred to as external
  entities.

- Data flow: data that move together.
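To make the four symbols concrete, here is one possible, purely illustrative way to represent a DFD fragment in Python; the enum, class names, and the tiny order-taking example are assumptions, not a standard notation.

    from dataclasses import dataclass
    from enum import Enum

    class SymbolType(Enum):
        PROCESS = "process"          # work performed on data
        DATA_STORE = "data store"    # data at rest
        SOURCE_SINK = "source/sink"  # external entity

    @dataclass
    class Node:
        name: str
        kind: SymbolType

    @dataclass
    class DataFlow:                  # data that move together, from one node to another
        name: str
        source: Node
        target: Node

    # A tiny fragment of a DFD: a customer (source) sends order details to a
    # process, which records a new order in a data store.
    customer = Node("Customer", SymbolType.SOURCE_SINK)
    take_order = Node("Take Order", SymbolType.PROCESS)
    orders = Node("Orders", SymbolType.DATA_STORE)

    flows = [
        DataFlow("order details", customer, take_order),
        DataFlow("new order record", take_order, orders),
    ]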



  

Differences between sources/sinks and processes

- Sources/sinks are always outside the information system and define the boundaries of the
  system; processes lie inside the system.

- Because sources/sinks are external, the interactions that occur between sources and sinks
  are not represented on the DFD.

- What a source or sink does with data, or how it operates, is of no concern to the system
  being modeled.

- A source/sink cannot be controlled or redesigned as part of the system, and it cannot
  directly access the data stored within the system; only processes move data into and out of
  data stores.

Data Flow Diagramming Rules

There is a set of rules that must be followed when drawing data flow diagrams. These rules are
used to evaluate the correctness of a diagram.

There are two DFD guidelines that apply most of the time:

- Inputs to a process are always different from the outputs of that process. A process
  transforms inputs into outputs: as data enter the process they are manipulated, and new
  data are produced as the output.

- Objects on a data flow diagram always have unique names. Every process has a unique name;
  there is no reason to have two processes with the same name, and the same applies to data
  stores and sources/sinks. A data flow name represents a specific set of data, so another
  data flow that has even one more or one less piece of data must be given a different,
  unique name.

Rules Governing Data Flow Diagramming


Process:

A. No process can have only outputs; that would mean it is making data from nothing (a
   miracle). If an object has only outputs, then it must be a source.
B. No process can have only inputs (a black hole). If an object has only inputs, then it must
   be a sink.
C. A process has a verb phrase label.

Data Store:

D. Data cannot move directly from one data store to another data store; data must be moved
   by a process.
E. Data cannot move directly from an outside source to a data store; data must be moved by a
   process that receives data from the source and places the data into the data store.
F. Data cannot move directly to an outside sink from a data store; data must be moved by a
   process.
G. A data store has a noun phrase label.

Source/Sink:

H. Data cannot move directly from a source to a sink. It must be moved by a process if the
   data are of any concern to our system; otherwise, the data flow is not shown on the DFD.
I. A source/sink has a noun phrase label.

Data Flow:

J. A data flow has only one direction of flow between symbols. It may flow in both directions
   between a process and a data store to show a read before an update; this is usually
   indicated, however, by two separate arrows, since the read and the update happen at
   different times.
K. A fork in a data flow means that exactly the same data go from a common location to two
   or more different processes, data stores, or sources/sinks (this usually indicates
   different copies of the same data going to different locations).
L. A join in a data flow means that exactly the same data come from two or more different
   processes, data stores, or sources/sinks to a common location.
M. A data flow cannot go directly back to the same process it leaves. There must be at least
   one other process that handles the data flow, produces some other data flow, and returns
   the original data flow to the beginning process.
N. A data flow to a data store means update (delete or change).
O. A data flow from a data store means retrieve or use.
P. A data flow has a noun phrase label. More than one data flow noun phrase can appear on a
   single arrow as long as all of the flows on the same arrow move together as one package.
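A brief sketch of how some of these rules could be checked mechanically, using a simple node-and-flow representation of a DFD; the representation, the node names, and the subset of rules checked are all assumptions made for this example.

    # Hypothetical representation of a DFD: node name -> kind, plus a list of
    # (flow name, source node, target node) tuples.
    nodes = {
        "Customer": "source/sink",
        "Take Order": "process",
        "Orders": "data store",
    }
    flows = [
        ("order details", "Customer", "Take Order"),
        ("new order record", "Take Order", "Orders"),
    ]

    def check_rules(nodes, flows):
        """Return violations of a subset of the rules listed above (A, B, D-F, H)."""
        errors = []
        for name, kind in nodes.items():
            if kind != "process":
                continue
            has_input = any(target == name for _, _, target in flows)
            has_output = any(source == name for _, source, _ in flows)
            if has_output and not has_input:
                errors.append(f"Rule A: process '{name}' has only outputs (a miracle)")
            if has_input and not has_output:
                errors.append(f"Rule B: process '{name}' has only inputs (a black hole)")
        for flow_name, source, target in flows:
            if "process" not in (nodes[source], nodes[target]):
                errors.append(f"Rules D-F/H: flow '{flow_name}' moves data from "
                              f"'{source}' to '{target}' without an intervening process")
        return errors

    print(check_rules(nodes, flows))  # [] -> this small diagram passes these checks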

  

Requirements Gathering

Requirements gathering, also called requirements analysis, is a process through which you
define and evaluate the business needs of your network system; it identifies the information
needs of an organization and is where every software project starts.

The purpose of requirements is to define what the software is supposed to do. The software
requirements serve as the basis for all future design, coding, and testing that will be done
on the project.

 
Requirements Gathering Process


- A trained software practitioner called the Requirements Analyst communicates with
  knowledgeable users to understand what the requirements are. Most times, the client will
  have only a brief idea of what they need in the proposed system.

- The analyst's job is to flesh this out, add implied requirements and any mandatory or
  regulatory requirements the client may not be aware of, and create a document called the
  Software Requirements Specification, or SRS.

- At the end of the process, the SRS becomes the blueprint of the product and a reference
  point for the client, the project manager, the testers, and the designers. The SRS should
  ideally restrict itself to specifying what the product should do rather than how to do it;
  it should never include implementation details such as database structure, architecture,
  and so on.


 
Contents of an SRS

Ideally, an SRS should include at least the following information.

- Functional Requirements

  Functional requirements are the "features" the software has, or will have.

  Example requirements for a shopping cart are Browse Shop, Detailed Product View, Checkout,
  View Cart, and My Account.

  Implied requirements are the requirements that the customer has missed or those that are
  required to support the main features. For example, if the client has asked for a shopping
  site, the analyst includes requirements for the shopping cart, such as View Cart, Checkout,
  and Delete from Cart.

- Non-Functional Requirements

  How efficient is the software product? Is it high-performance? Is it reliable? How fast is
  it? Does it consume a lot of system resources?

  These are questions dealt with by the non-functional requirements. Novice programmers
  generally fail to address these requirements, yet they directly affect the quality of the
  product.

- Regulatory Requirements

  In many industries, there may be regulations with which the software must comply.

  For example, the tax laws of the country in which an accounting product is to be deployed,
  language preferences, password encryption laws, URL standards, and email standards are some
  of the regulatory requirements the client may not be aware of. The analyst has to include
  these requirements if they apply to the industry or country in which the software product
  is to be deployed.

- Interface Requirements

  Will this product interact with other software or hardware?

  The analyst needs to list the minimum requirements of these interfaces.
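One lightweight, purely illustrative way to keep these categories organized while drafting an SRS is to record each requirement with an identifier and a category, as in the Python sketch below; the IDs and requirement statements are invented for a hypothetical shopping site.

    # Each requirement gets an ID, a category, and a short statement.
    srs_requirements = [
        {"id": "FR-1", "category": "functional",
         "text": "The user can add an item to the shopping cart."},
        {"id": "FR-2", "category": "functional",
         "text": "The user can check out and pay for the items in the cart."},
        {"id": "NFR-1", "category": "non-functional",
         "text": "Product pages load in under 2 seconds for 95% of requests."},
        {"id": "REG-1", "category": "regulatory",
         "text": "Stored passwords are encrypted as required by applicable law."},
        {"id": "IF-1", "category": "interface",
         "text": "Orders are sent to the payment gateway over HTTPS."},
    ]

    # Group requirements by category for the corresponding SRS sections.
    by_category = {}
    for req in srs_requirements:
        by_category.setdefault(req["category"], []).append(req)

    for category, reqs in by_category.items():
        print(category, [r["id"] for r in reqs])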

An SRS may also cover the following:

- Required states and modes
- CSCI capability requirements
- CSCI external interface requirements
- CSCI internal interface requirements
- CSCI internal data requirements
- Adaptation requirements
- Safety requirements
- Security and privacy requirements
- CSCI environment requirements
- Computer resource requirements
- Software quality factors
- Design and implementation constraints
- Personnel requirements
- Training-related requirements
- Logistics-related requirements
- Other requirements
- Packaging requirements
- Precedence and criticality requirements
