
MODEL TEST PAPER

BUSINESS INTELLIGENCE & APPLICATIONS

MS 206

Attempt any five questions.

Q1 (a) What are the steps involved in designing a decision support system?

(b) Differentiate between: -

(i) Decision support system and group decision support system


(ii) Structured decisions and unstructured decision

Q2 (a) Give the syntax of data manipulation commands of SQL Server.

(b) What is a referential integrity constraint?

(c) Which of the two, i.e. OLTP and OLAP, is more helpful in strategic decisions? Why?

Q3. Discuss the applications of data mining in retail sector.

Q4. Explain the following techniques of data mining: - (a) Neural networks (b) Genetic algorithms (c) Decision trees (d) Case-based reasoning

Q5 (a) How does data coding help in data mining? Explain.

(b) Describe how data integration can lead to higher levels of data quality?

(c) Explain the term Knowledge Discovery in Databases.

Q6 (a) How can information technology be used to implement knowledge management
systems?

(b) Suggest the knowledge management initiatives that can be undertaken in
universities.
MODEL TEST PAPER
Business Intelligence & Applications

MS 206

Attempt any five questions.

Q1 (a) What are the steps involved in designing a decision support system?

Ans Decision support system: - It is a computer system at the management level of an organization that combines data, sophisticated analytical tools, and user-friendly software to support semi-structured and unstructured decision making.

The design of a DSS proceeds through the following phases:

Phase 1 Project selection
Phase 2 Software and hardware selection
Phase 3 Build the database and its management
Phase 4 Build the model base and its management
Phase 5 Build the dialog subsystem
Phase 6 Build the knowledge component
Phase 7 Packaging
Phase 8 Testing, evaluation and improvement
Phase 9 User training
Phase 10 Documentation and maintenance
Phase 11 Adaptation

Factors to consider when designing a DSS: - One should consider the following before starting to design a DSS: -

1. First determine the purpose of the DSS in terms of the decision being made
   and the outputs it must supply.
2. Determine any external sources that the DSS will communicate with and identify
   the data flows to and from these sources.
3. Determine any internal data files that are needed. If the data in these files are
   obtained from external data sources, specify those sources.
4. Determine the major processes in the DSS. If one understands all these
   considerations, one understands the DSS as a system. One test of this understanding is
   being able to draw it as a flow diagram.

The development process of a DSS constructed by end users: - When end users are allowed to modify an SDSS to suit their decision needs, or the decision needs of the group they represent, it is better to follow a construction process from the end users' point of view. A suitable construction process from the end users' perspective is: -

Phase one Choosing the project or problem to be solved: Departments involved are
committed to the process of finding a suitable solution.
Phase two Selecting software and hardware: Select suitable DSS software and
hardware.
Phase three Data acquisition and management: Acquire and maintain data in the
knowledge base.
Phase four Model subsystem acquisition and management: Build the model base:
acquire and include relevant models in the model base.
Phase five Dialogue subsystem and its management: Develop the user interface.
Phase six Knowledge component: Perform knowledge engineering.
Phase seven Packaging: The various software components of the DSS will be put
together for easy testing and usage.
Phase eight Testing, evaluation and improvement: Test the DSS with sample input
and validate it to prove that the DSS is reliable.
Phase nine User training: Train users in using the DSS.
Phase ten Documentation and maintenance: Produce documentation and maintain the
DSS, and
Phase eleven Adaptation: Adapt the DSS to suit user needs.
Q1 (b) Differentiate between: -

(i) Decision support system and group decision support system

Ans Every manager makes decisions in the organization, either in his individual capacity or as a member of a group. In fact, organizational decisions are a combination of individual and group decisions. Both types of decisions have their positive and negative aspects. Therefore, the question arises: in which situations should group decisions be preferred? The following is an analysis of situations for individual versus group decisions.

1. Nature of the problem: - If policy guidelines regarding the decision for the problem
   at hand are provided, individual decision making will result in greater creativity as well
   as efficiency. Where the problem requires a variety of expertise, group decision making
   is suitable.
2. Time availability: - Group decision making is a time-consuming process and, therefore,
   group decision making can be preferred only when sufficient time is at the group's disposal.
3. Quality of decision: - Group decision making generally leads to higher-quality solutions,
   unless an individual has expertise in the decision area and this has been identified in
   advance.
4. Climate of decision making: - A supportive climate encourages group problem solving,
   whereas a competitive climate stimulates individual problem solving.
5. Legal requirement: - Legal requirements also determine whether individual or group
   decisions have to be made. Such requirements may be prescribed by the government's legal
   framework or by organizational policy, rules, etc. For example, many decisions in companies
   must be made by the board of directors or by a committee.

(ii) Structured decisions and unstructured decision

Ans The differences between structured decisions and unstructured decisions are: -

Structured decisions are made under established situations,
while unstructured decisions are made under emergent situations.
Structured decisions are programmable decisions, as they are preplanned, while
unstructured decisions are creative and are not preplanned.
Structured decisions are made in situations that are fully understood, while in
unstructured decisions the situations are uncertain and unclear.
Structured decisions are generally made for routine, specified processes, such as a
specialized manufacturing process, while unstructured decisions are made for general,
non-routine processes.
Examples of structured decision processes are a formula for computing reorder quantity
and allocating furniture and equipment to employees. Examples of unstructured decision
processes are predicting the direction of the economy or the stock market, and judging how
well suited an employee is for a particular job.

Q2 (a) Give the syntax of data manipulation commands of SQL Server.

Ans Data manipulation commands: - The Data Manipulation Language (DML) is the part of SQL that consists of a set of commands determining which values are present in the tables at any given time.

Data manipulation language is divided into three categories: -

Retrieving data: - Getting information out of a table. A selection of data
items stored in a table is presented on the screen. The command for retrieving data
items is SELECT.
Manipulating data: - The DML features that allow us to perform statistical
functions on data, namely averaging and summing columns, and other arithmetic
functions such as multiplying values in two or more columns of a table.
Updating data: - Inserting and deleting rows in tables and changing values
in the columns. In other words, it performs the maintenance of a database.

Insert Command: - The INSERT INTO command is used to insert particular values into a table.
Basically, this command adds (appends) new rows (records) to a table. The syntax of INSERT
INTO is

INSERT INTO <tablename> VALUES (<value>, <value>, ...);

For example, to add a row (record) to the DEPARTMENT table, type the following statement:

INSERT INTO DEPARTMENT VALUES (10, 'Accounts', 1);

Select Command: - The SELECT command is used to retrieve data from a table. This command also
provides the query capability: by executing a SELECT statement, the information
currently in the table is shown on the screen. The syntax of the SELECT command is

SELECT <columnname(s)> FROM <tablename> WHERE <columnname> <operator> <value>;

For example, suppose we want to see details of only those employees who are salesmen. The SQL
command for such a query is the following:

SELECT EMP_CODE, EMP_NAME, DESIG, BASIC
FROM EMPLOYEE WHERE DESIG = 'Salesman';


Update Command: - The UPDATE command in SQL is used to change or modify data values in a
table. The syntax is:

UPDATE <tablename> SET <columnname> = <new value> WHERE <condition>;

For example, suppose we want to give a raise of Rs 200 to all assistants. The SQL statement
is:

UPDATE EMPLOYEE SET BASIC = BASIC + 200 WHERE DESIG = 'Assistant';

Delete Command: - It is used to remove specific rows meeting the condition and not the
individual field values. The syntax of DELETE command is

DELETE FROM <tablename> WHERE <condition>;

The SQL statement to delete the record of Rajiv Kumar, whose EMP_CODE is 104, is:

DELETE FROM EMPLOYEE WHERE EMP_CODE = 104;
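
The following is a minimal Python sketch that runs the four DML commands above through sqlite3. The EMPLOYEE table, its columns and the sample rows are assumed for illustration only; they are not prescribed by the question paper.

import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
cur = conn.cursor()

# Hypothetical EMPLOYEE table, modeled on the columns used in the examples above
cur.execute("CREATE TABLE EMPLOYEE (EMP_CODE INTEGER, EMP_NAME TEXT, DESIG TEXT, BASIC REAL)")

# INSERT: add new rows
cur.execute("INSERT INTO EMPLOYEE VALUES (101, 'Asha', 'Salesman', 8000)")
cur.execute("INSERT INTO EMPLOYEE VALUES (104, 'Rajiv Kumar', 'Assistant', 6000)")

# SELECT: retrieve rows meeting a condition
cur.execute("SELECT EMP_CODE, EMP_NAME, BASIC FROM EMPLOYEE WHERE DESIG = 'Salesman'")
print(cur.fetchall())

# UPDATE: give all assistants a raise of 200
cur.execute("UPDATE EMPLOYEE SET BASIC = BASIC + 200 WHERE DESIG = 'Assistant'")

# DELETE: remove the employee with EMP_CODE 104
cur.execute("DELETE FROM EMPLOYEE WHERE EMP_CODE = 104")

conn.commit()
conn.close()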

Q 2(b) What is a referential integrity constraint?

Ans Referential integrity: - Referential integrity requires that a value appearing in one relation for a given set of attributes also appears for a certain set of attributes in another relation. Integrity Rule 2 is concerned with the concept of the foreign key, i.e. with attributes of a relation whose domains are those of the primary key of another relation. If a base relation includes a foreign key, then each foreign key value must either match a primary key value in the referenced relation or be wholly null.

In other words, wherever a relationship exists between two tables, the value of the foreign key in the referencing table must be the same as a primary key value in the referenced table.

Referential integrity constraints are created in order to ensure that data entered into one table are matchable with, or compatible with, the corresponding data in the other related tables. Values in one column are dependent on the values of columns in other tables.
Master File (employee table)
Emp_code   Dept_code   Name
100        A01         Raj
102        C01         Lalit
104        X01         Govind

Control File (department table)
Dept_code   Designation
A01         Director
B01         Manager
C01         Officer
The designation code X01 in the master file is not allowed, as it does not have a primary key match
in the other table, namely the control file. The employee with Emp_code 104 joins the organization,
and his designation is not clearly specified at the time of entry. Hence, the designation code can be
entered as null and updated later.
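
A minimal sqlite3 sketch of this constraint, using hypothetical MASTER and CONTROL tables modeled on the example above (the extra row with 'Mohan' is invented to show a rejected insert; SQLite only enforces foreign keys after PRAGMA foreign_keys = ON):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")    # SQLite enforces foreign keys only when enabled

# Control file: the referenced (parent) table
conn.execute("CREATE TABLE CONTROL (Dept_code TEXT PRIMARY KEY, Designation TEXT)")
conn.executemany("INSERT INTO CONTROL VALUES (?, ?)",
                 [("A01", "Director"), ("B01", "Manager"), ("C01", "Officer")])

# Master file: the referencing (child) table with a foreign key
conn.execute("""CREATE TABLE MASTER (
                    Emp_code INTEGER PRIMARY KEY,
                    Dept_code TEXT REFERENCES CONTROL(Dept_code),
                    Name TEXT)""")

conn.execute("INSERT INTO MASTER VALUES (100, 'A01', 'Raj')")    # accepted: A01 exists in CONTROL
conn.execute("INSERT INTO MASTER VALUES (104, NULL, 'Govind')")  # accepted: a wholly null foreign key

try:
    conn.execute("INSERT INTO MASTER VALUES (105, 'X01', 'Mohan')")  # rejected: X01 not in CONTROL
except sqlite3.IntegrityError as exc:
    print("Rejected by referential integrity:", exc)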

Q 2(c) Which of the two, i.e. OLTP and OLAP, is more helpful in strategic decisions? Why?

Ans Of the two, OLAP is the more helpful for strategic decisions. Online Analytical Processing (OLAP)
systems are targeted at providing more complex query results than traditional OLTP or database systems.
Unlike database queries, however, OLAP applications usually involve analysis of the actual data. They
can be thought of as an extension of some of the basic aggregation functions available in SQL. This
extra analysis of the data, as well as the more imprecise nature of OLAP queries, is what really
differentiates OLAP applications from traditional database and OLTP applications. OLAP tools may also
be used in DSS systems.

OLAP is performed on data warehouses or data marts. The primary goal of OLAP is to support
ad hoc querying needed to support DSS. The multidimensional view of data is fundamental to
OLAP applications. OLAP is an application view, not a data structure or schema. The complex
nature of OLAP applications requires a multidimensional view of the data. The type of data
accessed is often (although not a requirement) a data warehouse.

There are several types of OLAP operations supported by OLAP tools:

Slice
Dice
Roll up
Drill down
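
A small pandas sketch of these operations over an illustrative sales cube; the data and column names are invented for the example:

import pandas as pd

# Illustrative fact table: one row per (region, product, quarter)
sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "North", "South"],
    "product": ["Tea",   "Coffee", "Tea",  "Coffee", "Tea",  "Tea"],
    "quarter": ["Q1",    "Q1",     "Q1",   "Q2",     "Q2",   "Q2"],
    "amount":  [100,      80,       60,     90,       120,    70],
})

# Slice: fix one dimension (quarter = Q1)
q1 = sales[sales["quarter"] == "Q1"]

# Dice: fix several dimensions at once
dice = sales[(sales["quarter"] == "Q1") & (sales["region"] == "North")]

# Roll up: aggregate away the product dimension (totals per region and quarter)
rollup = sales.groupby(["region", "quarter"], as_index=False)["amount"].sum()

# Drill down: reintroduce the product dimension for finer detail
drilldown = sales.groupby(["region", "quarter", "product"], as_index=False)["amount"].sum()

print(rollup)
print(drilldown)
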
Q3. Discuss the applications of data mining in retail sector.

Ans Fierce competition and narrow profit margins have plagued the retail industry. Forced by these
factors, the retail industry adopted data warehousing earlier than most other industries. Over the years,
these data warehouses have accumulated huge volumes of data. The data warehouses in many retail
businesses are mature and ripe. Also, through the use of scanners and cash registers, the retail industry has
been able to capture detailed point-of-sale data. The combination of the two features, huge volumes of
data and low-granularity data, is ideal for data mining. The retail industry was able to begin using data
mining while others were just making plans. All types of businesses in the retail industry, including
grocery chains, consumer retail chains, and catalog sales companies, use direct marketing campaigns and
promotions extensively. Direct marketing happens to be quite critical in the industry. All companies
depend heavily on direct marketing. Direct marketing involves targeting campaigns and promotions to
specific customer segments. Cluster detection and other predictive data mining algorithms provide
customer segmentation. As this is a crucial area for the retail industry, many vendors offer data mining
tools for customer segmentation. These tools can be integrated with the data warehouse at the back end
for data selection and extraction. At the front end, these tools work well with standard presentation
software. Customer segmentation tools discover clusters and predict success rates for direct marketing
campaigns. Retail industry promotions necessarily require knowledge of which products to promote and
in what combinations. Retailers use link analysis algorithms to find affinities among products that usually
sell together. As you already know, this is market basket analysis. Based on the affinity grouping, retailers
can plan their special sale items and also the arrangement of products on the shelves. Apart from customer
segmentation and market basket analysis, retailers use data mining for inventory management. Inventory
for a retailer encompasses thousands of products. Inventory turnover and management are significant
concerns for these businesses.
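
A toy sketch of the market basket idea described above: counting how often pairs of items appear together in transactions to find affinities. The baskets are invented and the support calculation is the simplest possible version.

from itertools import combinations
from collections import Counter

# Invented point-of-sale transactions (one basket per customer visit)
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "coffee"},
    {"bread", "milk"},
    {"bread", "butter", "coffee"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support of a pair = fraction of baskets containing both items
for pair, count in pair_counts.most_common(3):
    print(pair, "support =", count / len(baskets))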

Another area of use for data mining in the retail industry relates to sales forecasting. Retail sales are
subject to strong seasonal fluctuations. Holidays and weekends also make a difference. Therefore, sales
forecasting is critical for the industry. The retailers turn to the predictive algorithms of data mining
technology for sales forecasting.

Other types of data mining applications used in the retail industry include:

Customer long-term spending patterns
Customer purchasing frequency
Best types of promotions
Store plan and arrangement of promotional displays
Planning mailers with coupons
Customer types buying special offerings
Sale trends, seasonal and regular
Manpower planning based on busy times
Most profitable segments in the customer base
Q4. Explain the following techniques of data mining: -

a) Neural networks: - A neural network mimics the human brain by learning from a training dataset and
applying the learning to generalize patterns for classification and prediction. These
algorithms are effective when the data is shapeless and lacks any apparent pattern. The
basic unit of an artificial neural network is modeled after the neurons in the brain. The
unit is known as a node and is one of the two main structures of the neural network
model. The other structure is the link, which corresponds to the connection between neurons
in the brain. The neural network receives the values of the variables or predictors at the input
nodes. If there are 15 different predictors, then there are 15 input nodes. Weights may be
applied to the predictors to condition them properly. There may be several inner
layers operating on the predictors, and values move from node to node until the discovered
result is presented at the output node. The inner layers are also known as hidden layers
because, as the input dataset is run through many iterations, the inner layers rehash the
predictors over and over again.
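
A minimal numpy sketch of this idea, assuming a tiny network with 3 input nodes, one hidden layer and a single output node; the weights here are random rather than trained, so it only illustrates how values flow from the input nodes to the output node:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 3 predictors -> 4 hidden nodes -> 1 output node
w_hidden = rng.normal(size=(3, 4))   # weights applied to the predictors
w_output = rng.normal(size=(4, 1))

x = np.array([0.2, 0.7, 0.1])        # one record's predictor values at the input nodes
hidden = sigmoid(x @ w_hidden)       # hidden (inner) layer rehashes the predictors
output = sigmoid(hidden @ w_output)  # discovered result at the output node

print("prediction:", float(output[0]))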

b) Genetic algorithms: - In a way, genetic algorithms have something in common with
neural networks. This technique also has its basis in biology. It is said that evolution and
natural selection promote the survival of the fittest. Over generations, the process
propagates the genetic material in the fittest individuals from one generation to the next.
Genetic algorithms apply the same principles to data mining. This technique uses a highly
iterative process of selection, cross-over, and mutation operators to evolve successive
generations of models. At each iteration, every model competes with every other,
inheriting traits from previous generations, until only the most predictive models survive. GAs
were introduced as a computational analogy of adaptive systems. They are modeled
loosely on the principles of evolution via natural selection, employing a population of
individuals that undergo selection in the presence of variation-inducing operators such as
mutation and recombination (crossover). A fitness function is used to evaluate
individuals, and reproductive success varies with fitness. An effective GA representation
and a meaningful fitness evaluation are the keys to success in GA applications. The
appeal of GAs comes from their simplicity and elegance as robust search algorithms, as
well as from their power to discover good solutions rapidly for difficult high-dimensional
problems.
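
A toy genetic algorithm sketch illustrating selection, crossover and mutation; the fitness function (maximizing the number of 1-bits in a string) and all parameters are purely illustrative:

import random

random.seed(1)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    return sum(genome)                    # toy fitness: count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Crossover and mutation produce the next generation
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))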

c) Decision Trees: - This technique applies to classification and prediction. The major
attraction of decision trees is their simplicity. By following the tree, we can decipher the
rules and understand why a record is classified in a certain way. Decision trees represent
rules. We can use these rules to retrieve records falling into a certain category. Because
of their tree structure and their ability to easily generate rules, decision trees are the favored
technique for building understandable models. Because of this clarity they also allow
more complex profit and ROI models to be added easily on top of the predictive
model.
In some data mining processes, we really do not care how the algorithm selected a certain
record. For example, when we are selecting prospects to be targeted in a marketing
campaign, we do not need the reasons for targeting them. We only need the ability to
predict which members are likely to respond to the mailing. But in other cases, the
reasons for the prediction are important. If our company is a mortgage company and
wants to evaluate an application, we need to know why an application must be rejected.
The company must be able to protect itself from any lawsuit alleging discrimination.
Wherever the reasons are necessary and we must be able to trace the decision paths,
decision trees are suitable.
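
A brief scikit-learn sketch, assuming scikit-learn is available and using a tiny invented loan-style dataset, showing how a fitted tree can be printed back as readable if-then rules:

from sklearn.tree import DecisionTreeClassifier, export_text

# Invented records: [income in thousands, existing debt in thousands]
X = [[20, 15], [45, 5], [30, 25], [80, 10], [25, 2], [60, 30]]
y = ["reject", "approve", "reject", "approve", "approve", "reject"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The tree's structure can be read back as if-then rules
print(export_text(tree, feature_names=["income", "debt"]))
print(tree.predict([[50, 8]]))   # trace why a new applicant would be approved or rejected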

d) Case-based reasoning: - It uses known instances of a model to predict unknown
instances. This data mining technique maintains a dataset of known records. The
algorithm knows the characteristics of the records in this training dataset. When a new
record arrives for evaluation, the algorithm finds neighbors similar to the new record, and
then uses the characteristics of the neighbors for prediction and classification.
When a new record arrives at the data mining tool, the tool first calculates the distance
between this record and the records in the training dataset. The distance function of the
data mining tool does the calculation. The results determine which data records in the
training dataset qualify to be considered as neighbors of the incoming data record. Next,
the algorithm uses a combination function to combine the results of the various distance
functions and obtain the final answer. The distance function and the combination function
are the key components of memory-based reasoning (MBR).
For solving a data mining problem using MBR, we are concerned with these critical
issues: -
Selecting the most suitable historical records to form the training or base dataset.
Establishing the best way to compose the historical record.
Determining the two essential functions, namely the distance function and the
combination function.
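
A small numpy sketch of memory-based reasoning as described above: a Euclidean distance function picks the nearest neighbors and a majority-vote combination function produces the classification. The training records are invented for illustration.

import numpy as np

# Invented training dataset: two numeric attributes per record, with a class label
train_X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0], [1.2, 0.5]])
train_y = np.array(["low", "low", "high", "high", "low"])

def distance(a, b):
    # Distance function: Euclidean distance between two records
    return np.sqrt(np.sum((a - b) ** 2))

def combine(labels):
    # Combination function: majority vote among the neighbors' labels
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]

def classify(new_record, k=3):
    dists = np.array([distance(new_record, x) for x in train_X])
    neighbors = train_y[np.argsort(dists)[:k]]   # k nearest records in the training set
    return combine(neighbors)

print(classify(np.array([1.1, 1.0])))   # expected output: "low"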

Q5 (a) How does data coding help in data mining? Explain.

Ans It is rather straightforward to apply data mining modeling tools to data and judge the
value of the resulting models based on their predictive or descriptive value. This does not, however,
diminish the role of careful attention to data preparation. The data preparation process is roughly
divided into data selection, data cleaning, formation of new data, and data formatting.

Select data: - A subset of the data acquired in previous stages is selected based on criteria stressed in
those stages:

Data quality properties: completeness and correctness.
Technical constraints, such as limits on data volume or data type: this relates mainly
to the data mining tools planned to be used for modeling.

Data cleaning: - This step complements the previous one. It is also the most time consuming,
because of the many possible techniques that can be applied to optimize data quality for the
future modeling stage. Possible techniques for data cleaning include:

Data normalization, for example decimal scaling into the range (0,1), or standard
deviation (z-score) normalization.
Data smoothing. Discretization of numeric attributes is one example; this is helpful or
even necessary for logic-based methods.
Treatment of missing values. There is no simple and safe solution for cases where
some of the attributes have a significant number of missing values. Generally, it is good to
experiment with and without these attributes in the modeling phase, in order to find out
the importance of the missing values.
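
A short pandas sketch of the cleaning techniques just listed, decimal scaling, standard deviation (z-score) normalization and a simple missing-value treatment, applied to an invented column:

import numpy as np
import pandas as pd

df = pd.DataFrame({"income": [1200.0, 875.0, np.nan, 4300.0, 2650.0]})

# Treatment of missing values: here, fill with the column mean (one of several options)
df["income_filled"] = df["income"].fillna(df["income"].mean())

# Decimal scaling: divide by 10^k so values fall into (-1, 1)
k = int(np.ceil(np.log10(df["income_filled"].abs().max())))
df["income_decimal"] = df["income_filled"] / (10 ** k)

# Standard deviation (z-score) normalization
df["income_zscore"] = (df["income_filled"] - df["income_filled"].mean()) / df["income_filled"].std()

print(df)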

New data construction: - This step represents constructive operations on the selected data, which
include:

Derivation of new attributes from two or more existing attributes.
Generation of new records (samples).
Data transformation: data normalization (numerical attributes), data smoothing.
Merging tables: joining together two or more tables having different attributes for the same
objects.

Data formatting: - This final data preparation step covers syntactic modifications to the
data that do not change its meaning but are required by the particular modeling tool chosen for
the DM task. These include:

Reordering of the attributes or records
Changes related to the constraints of the modeling tools

(b) Describe how data integration can lead to higher levels of data quality?

Ans Data integration involves combining data residing in different sources and providing users
with a unified view of these data. This process becomes significant in a variety of situations, both
commercial (when two similar companies need to merge their databases) and scientific
(combining research results from different bioinformatics repositories). Data integration appears
with increasing frequency as the volume of data, and the need to share existing data, explode. It has
become the focus of extensive theoretical work, and numerous open problems remain unsolved.
In management circles, people frequently refer to data integration as Enterprise Information
Integration.
For example, consider a web application where a user can query a variety of information about
cities (such as crime statistics, weather, hotels, demographics, etc.). Traditionally, the
information must exist in a single database with a single schema. But any single enterprise would
find information of this breadth somewhat difficult and expensive to collect. Even if the
resources exist to gather the data, it would likely duplicate data in existing crime databases,
weather websites, and census data.

A data-integration solution may address this problem by considering these external resources as
materialized views over a virtual mediated schema, resulting in virtual data integration. This
means application developers construct a virtual schema, the mediated schema, to best model the
kinds of answers their users want. Next, they design wrappers or adapters that simply transform the
local query results (those returned by the respective websites or databases) into an easily
processed form for the data integration solution. When an application user queries the mediated
schema, the data-integration solution transforms this query into appropriate queries over the
respective data sources. Finally, the virtual database combines the results of these queries into
the answer to the user's query. This solution offers the convenience of adding new sources by
simply constructing an adapter or an application software blade for them. It contrasts with ETL
systems or with a single database solution, which require manual integration of the entire new
dataset into the system. Virtual ETL solutions leverage the virtual mediated schema to
implement data harmonization, whereby the data is copied from the designated master source
to the defined targets, field by field. Advanced data virtualization is also built on the concept of
object-oriented modeling, constructing the virtual mediated schema or virtual metadata
repository using a hub-and-spoke architecture.
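
A highly simplified Python sketch of the virtual-mediated-schema idea: two invented local sources keep their own formats, small wrapper (adapter) functions map them into one mediated schema, and a query runs against the unified view. All names and data are illustrative.

# Two invented local sources with different native schemas
crime_source = [{"town": "Springfield", "incidents": 320}]
weather_source = [{"city": "Springfield", "avg_temp_c": 18.5}]

# Wrappers/adapters: transform each local result into the mediated schema
def crime_adapter(rows):
    return [{"city": r["town"], "crime_incidents": r["incidents"]} for r in rows]

def weather_adapter(rows):
    return [{"city": r["city"], "avg_temp_c": r["avg_temp_c"]} for r in rows]

def query_mediated_schema(city):
    # Combine adapter outputs into one unified answer for the user
    merged = {"city": city}
    for adapter, source in ((crime_adapter, crime_source), (weather_adapter, weather_source)):
        for record in adapter(source):
            if record["city"] == city:
                merged.update(record)
    return merged

print(query_mediated_schema("Springfield"))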

(c) Explain the term Knowledge Discovery in Databases.

Ans Knowledge Discovery in Databases (KDD) is the process of finding useful information and patterns
in data. The KDD process is often said to be non-trivial; however, we take the larger view that KDD is
an all-encompassing concept.

KDD is a process that involves many different steps. The input to this process is the data, and the
output is the useful information desired by the users. However, the objective may be unclear or
inexact. The process itself is interactive and may require much elapsed time.

The KDD process consists of the following five steps: -

1. Selection: - The data needed for the data mining process may be obtained from many
   different and heterogeneous data sources. This first step obtains the data from various
   databases, files, and non-electronic sources.
2. Preprocessing: - The data to be used by the process may have incorrect or missing data.
There may be anomalous data from multiple sources involving different data types and
metrics. There may be many different activities performed at this time. Erroneous data
may be corrected or removed, whereas missing data must be supplied or predicted (often
using data mining tools).
3. Transformation: - Data from different sources must be converted into a common format
for processing. Some data may be encoded or transformed into more usable formats. Data
reduction may be used to reduce the number of possible data values being considered.
4. Data mining: - Based on the data mining task being performed, this step applies
algorithms to the transformed data to generate the desired results.
5. Interpretation/evaluation: - How the data mining results are presented to the users is
extremely important because the usefulness of the results is dependent on it. Various
visualization and GUI strategies are used at this last step.
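
A compact sketch of the five KDD steps on invented customer data, assuming pandas and scikit-learn are available and using k-means as an example mining algorithm:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# 1. Selection: gather the relevant data (here, an invented customer table)
raw = pd.DataFrame({"age": [23, 45, 31, 52, 36, None],
                    "annual_spend": [200, 1500, 700, 2200, 900, 400]})

# 2. Preprocessing: correct or remove erroneous/missing data
clean = raw.dropna()

# 3. Transformation: convert to a common, usable format (scaled numeric matrix)
X = StandardScaler().fit_transform(clean)

# 4. Data mining: apply an algorithm to generate the desired results
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 5. Interpretation/evaluation: present the results to the user
print(clean.assign(segment=labels))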

Q6 (a) How can information technology be used to implement knowledge management
systems?

Ans The decision to implement knowledge management can be a daunting task for companies
that have no background or experience in knowledge management (KM). It is essential to have
strong leadership support for KM to be successful. KM integrates people, processes, and technology.
KM results in changing how the business operates and requires organizational mechanisms
(incentives and policies) to change employees' attitudes and operating processes. Information
technology (IT) is often used as an enabler for capturing, sharing, and integrating knowledge. A
challenge in establishing a practical approach for KM is to develop clear and definitive steps that
are well explained and easily understood by someone with limited applicable training or experience.

Five steps for implementation of Knowledge Management Systems are: -

Select the knowledge management team.
Establish the knowledge management strategy and business case.
Perform knowledge assessment and audit.
Perform information technology (IT) assessment.
Develop project plan and measurement systems.

Step 1 Select Knowledge Management Team

An executive sponsor and/or a knowledge champion are key players needed on the KM team
to advocate the program to management. A knowledge champion is the initial advocate of the
program and works with the executive sponsor. In addition, a project leader (KM manager) needs
to be identified, and is responsible for developing a knowledge strategy and business case. It is
highly recommended that these individuals have some KM training and/or experience. Many
larger organizations have appointed a Chief Knowledge Officer who builds a base of support
at all levels of management and guides strategies and polices for the activity.

Step 2 Establish Knowledge Management Strategy and Business Case


The primary objective of any corporate KM program is to support the achievement of strategic
business objectives. Therefore, the starting point for KM is to understand the organization's
business strategies. The traditional strengths, weaknesses, opportunities, threats (SWOT)
framework provides a basis for a knowledge strategy. Firms need to perform a knowledge-based
SWOT analysis to better understand their points of advantage and weakness. After mapping the
firm's competitive position, an organization can perform a gap analysis. The gap between what a
firm must do to compete and what the firm is doing represents a strategic gap. The gap between
what a firm must know and what the firm does know is the knowledge gap.

Step 3 Perform Knowledge Management Assessment and Audit

Many organizations use the terms knowledge assessment and knowledge audit interchangeably. Most
references in this area do not make a distinction between them. A KM assessment is an
examination of the organization's health and effectiveness in using knowledge. A knowledge
audit determines and examines organizational knowledge assets, including their location, source, and
utilization. A good place to start a knowledge assessment is to determine the strengths and
weaknesses in the organization's ability to leverage knowledge. One can examine the current
situation in areas that are working well and areas that are not working well, which
correspond to the enablers of and obstacles to KM.

The spider diagram in Figure 1 is used to evaluate the health of each of the organizational factors:
the farther out on the spider diagram a factor sits, the more it supports KM.

Figure 1: Organization Factors That Influence Knowledge Management

Step 4 Perform Information Technology (IT) Assessment

The information technology assessment has the following three diagnostic areas: IT assets, IT
management processes, and IT investment performance. The IT assets include applications,
technology infrastructure, and the IT organization.

The assessment of IT management processes examines IT project management skills, which
include strategic and technical direction setting, funding, and the execution and review of IT projects.
The assessment of IT investment performance examines spending profiles and the impacts of
spending on the business. Although IT is an enabler in the creation, capture, sharing and integration
of knowledge, it does not by itself bring about behavioral, cultural or organizational change, make
people share or search for knowledge, or create a learning organization.

Step 5 Develop Project Plan and Measurement Systems: The KM project plan lays out
execution of the program based on the results of the knowledge assessment and audit, and
technology assessment. It answers the questions of what, why, where, when and how. The plan
should include the following:

Define specific objectives
Determine deliverables
Define roles and responsibilities
Define resource requirements
Lay out schedules with milestones
Develop a communication plan
Determine metrics
Describe organizational and/or technology changes
Capture lessons learned

Q6 (b) Suggest the knowledge management initiatives that can be undertaken in
universities.

Ans The competitive pressures universities are now experiencing, resulting from the reduction in
government financial support and the consequent need for enterprising approaches to revenue
generation, bring a commercial orientation to the provision of teaching and student services.
This causes universities to treat their teaching programs, at least to some extent, as a market
commodity aimed at meeting the needs of the customer. Thus the branding of courses
with an institutional reputation can make the product for sale much more attractive in the market
place.

Internet technologies allow the virtualization of teaching and learning. This involves two levels
of flexibility. Firstly, virtualization provides flexibility in delivery. Access to learning
opportunities is unfettered by time, place and pace restrictions, as is possible by distance
education methods, but with the provision of increased interaction provided by online
technologies. Secondly, and more powerfully, it also refers to the ability for a university to
provide services to the market which it has not itself produced. A number of models are possible
for this second dimension of virtualization, most notably brokering arrangements between
universities, such as is done by the Open Learning Agency in Australia, and the Western
Governors University in the US. In its broadest terms, virtualization allows the unbundling of
educational services by universities whereby the need for vertical integration of student services
(from information, enrolment, courses, curriculum, conferring of degrees, etc.) disappears and
the prospect of outsourcing and sharing of these across institutions is possible. The establishment
of an online call centre for a number of universities is an example. It could be argued that this
allows a more student-centered rather than an institution-centered approach to higher education.
It may be that universities will eventually become brokers and licensers of intellectual property,
via the virtualization made possible by internet knowledge management strategies.

Internet technologies transform the creation, publication, storage and consumption of knowledge,
and hence the educational materials upon which teaching and learning rely. The characteristics of
print and online materials are summarised in Table 1.

Author
  Traditional print: Time - fixed deadline; goes out of date easily. Place - authors can be dispersed, but have to come together for publishing.
  Online: Time - no deadline; dynamic; ability to keep up to date. Place - usually dispersed; anyone can be an author.

Publisher
  Traditional print: Time - publication dates, editions, reprints. Place - one printing location; must be distributed at extra cost.
  Online: Time - creation dates; needs to be refreshed to get the latest changes. Place - mounted on one or more servers; immediately globally accessible.

Library
  Traditional print: Time - accession policy, processing, cataloguing, lags. Place - inter-library loans, photocopies.
  Online: Time - licensing policies; immediate availability. Place - networked access; authentication protocols.

Reader
  Traditional print: Time - print form, single copies; availability ("it's on loan"); waiting; wastage of old copies. Place - need to travel to the physical library; physical environment.
  Online: Time - materials never loaned out; multiple copies always available; no waiting for updates; loss of historical material. Place - need access to the network; more choice of physical and virtual environments.

Table 1: Comparison of print and online documents

Das könnte Ihnen auch gefallen