
Bruce Gopito R171555H

Advances in computer-based information technology in recent years have led to a wide variety of systems that managers are now using to make and implement
decisions. These systems have been developed for specific purposes and differ
significantly from standard electronic data processing systems. Too often,
unfortunately, managers have little say in the development of these decision
support systems; at the same time, non-managers who do develop them have a
limited view of how they can be used. And the difference between success and
failure is the extent to which managers can use the system to increase their
effectiveness within their organizations. Thus, the author suggests that this is the
criterion designers and managers should jointly ascribe to in exploiting the
capabilities of today's technologies. As a company grows bigger and more complex, humans alone cannot handle the volume of information circulating manually within it. The risks become too great: data integrity suffers from data-entry errors, information cannot be presented effectively, and decision making becomes slow and inflexible. For a large company, the cost of an MIS with an integrated information system is a small fraction of the benefit gained through economies of scale. The benefits of improved decision making and risk elimination easily justify the return on investment.

1. With the aid of a diagram(s) outline how the cost theories affect the size and
agency costs of an organization implementing Information systems.

Cost theories have often been used to support the idea that information systems can
reduce imperfection in the economic system. Online markets have repeatedly been
described as solutions to inefficiencies in the organization of transactions in
complex and uncertain settings. Traditionally, organizations have tried to reduce
transaction costs through vertical integration, by getting bigger, hiring more
employees, and buying their own suppliers and distributors.
Information technology, especially the use of networks, can help firms lower the
cost of market participation (transaction costs), making it worthwhile for firms to
contract with external suppliers instead of using internal sources. As a result, firms
can shrink in size (numbers of employees) because it is far less expensive to
outsource work to a competitive marketplace than to hire employees.
Information-related problems represent only some of the elements contributing to
transacting costs. The diffusion of information systems in society is always
associated with an increased amount of information becoming available. Moreover,
‘information society’ is not only defined by the greater amount of information
required in an ever increasing range of human activities, but also by the expanded
number of sources from which information emanates.
Information systems have become fundamental, online, and deeply interactive in every operation and decision of large organizations. Over the last decade, information systems have altered the economics of organizations and increased the possibilities for organizing work; theories and concepts from economics and sociology help us understand the changes they bring about.
From the point of view of economics, IT changes both the relative costs of capital
and the costs of information. Information systems technology can be viewed as a
factor of production that can be substituted for traditional capital and labor. As the
cost of information technology decreases, it is substituted for labor, which
historically has been a rising cost. Organizations grow in size because they can
obtain certain products or services internally at lower cost than by using external
firms in the marketplace. By lowering the cost of market participation (transaction
costs) information technology allows firms to obtain goods and services more
cheaply from outside sources than through internal means. Information systems
can thus help firms increase revenue while shrinking in size. Organizations
traditionally grew in size to reduce transaction costs. Information systems reduce these costs at any given size, shifting the transaction cost curve inward and downward, which opens up the possibility of revenue growth without increasing size, or even revenue growth accompanied by shrinking size.
Agency theory is widely used in economics, finance, marketing, law, and the social sciences. Jensen and Meckling (1976) defined the agency relationship as “a contract under which one or more persons (the principal) engage another person (the agent) to perform some service on their behalf which involves delegating some decision making authority to the agent” (p. 308). If both parties are utility maximizers, there is good reason to believe the agent will not always act in the best interests of the principal. Agency costs are generated when the owner-manager sells equity claims on the firm that are identical to his own, because of the divergence between his interests and those of the outside shareholders: he then bears only a fraction of the cost of any non-pecuniary benefits he takes out in maximizing his own utility. There are two types of conflict in the firm. First, the conflict between shareholders and managers arises
because managers hold less than a hundred percent of the residual claim.
Therefore, they do not capture the entire gain from their profit enhancement
activities, but they do bear the entire cost of these activities. For example,
managers can invest less effort in managing organizational resources and may be
able to transfer firm resources to their own personal benefit, e.g., by consuming
“perquisites” such as fringe benefits. The manager bears the entire cost of
refraining from these activities but captures only a fraction of the gain.
As a result, managers over-indulge in these activities relative to the level that would maximize the value of the organization. This inefficiency is reduced the larger the fraction of the firm's equity owned by the manager. Holding the manager's absolute investment in the firm constant, increasing the fraction of the firm financed by debt increases the manager's share of the equity and mitigates the loss from the conflict between managers and shareholders.
With information systems, however, managers can analyse agency contracts more carefully and supervise agents to ensure they pursue the interests of the organization. Information technology helps reduce agency costs, the costs of coordinating many different people and activities, so that each manager can oversee a larger number of employees without difficulty. As an organization grows in size and complexity, it traditionally experiences rising agency costs. IT shifts the agency cost curve down and to the right, enabling firms to increase size while lowering agency costs.
According to agency cost theory, information technology can reduce internal management costs. A principal, also known as the owner, employs "agents" (employees) to perform work that accomplishes the company's tasks and objectives. As the company grows in size and scope, coordination (agency) costs also grow. Information technology makes it easier for a manager to oversee and coordinate a greater number of employees. Employee payroll, performance, duties, roles and responsibilities can be coordinated much more efficiently with very few clerks and managers (Laudon, 2016) [1].
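The curve shifts described above can also be illustrated numerically. Below is a minimal sketch in Java; the cost functions and coefficients are illustrative assumptions chosen only to show the direction of the shifts, not an empirical model from the source.

// Illustrative sketch only: linear/affine cost functions and coefficients are
// assumptions used to visualise how IT shifts the transaction and agency cost
// curves downward as firm size grows.
public class CostCurveSketch {

    // Hypothetical transaction cost per unit of output: falls with firm size
    // (vertical integration) and falls further when IT lowers the cost of
    // market participation.
    static double transactionCost(int firmSize, boolean withIT) {
        double base = 1000.0 / firmSize;
        return withIT ? base * 0.5 : base; // IT shifts the curve inward
    }

    // Hypothetical agency (coordination) cost, rising with the number of
    // employees to be supervised; IT flattens the slope.
    static double agencyCost(int employees, boolean withIT) {
        double perEmployee = withIT ? 2.0 : 5.0;
        return perEmployee * employees;
    }

    public static void main(String[] args) {
        int[] sizes = {10, 100, 1000};
        for (int size : sizes) {
            System.out.printf("size=%4d  txCost: %.1f -> %.1f   agencyCost: %.1f -> %.1f%n",
                    size,
                    transactionCost(size, false), transactionCost(size, true),
                    agencyCost(size, false), agencyCost(size, true));
        }
    }
}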

2. Outline the risks and mitigation strategies of the information processing cycle.

The information processing cycle is a sequence of events consisting of input, processing, storage and output. The input stage can be further broken down into acquisition, data entry and validation. The output stage can also be further divided into interactive queries and routine reports. A fifth stage is often added to this cycle: the archiving or deletion of unwanted data.
Risks

 Data manipulation
 Human error during input, especially for numerical data, which immediately affects the output
 Duplicate data
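Input-stage risks like these are typically mitigated by validation at the point of data entry. Below is a minimal sketch in Java; the field names, validation rules and the simple duplicate check are illustrative assumptions, not from the source.

import java.util.HashSet;
import java.util.Set;

// Minimal illustration of input-stage validation: reject malformed numeric
// input and flag duplicate records before they enter the processing stage.
// Field names and rules are hypothetical.
public class DataEntryValidator {

    private final Set<String> seenRecordIds = new HashSet<>();

    public boolean validate(String recordId, String amountField) {
        // Duplicate-data check
        if (!seenRecordIds.add(recordId)) {
            System.out.println("Rejected " + recordId + ": duplicate record");
            return false;
        }
        // Human-error check for numeric data
        try {
            double amount = Double.parseDouble(amountField);
            if (amount < 0) {
                System.out.println("Rejected " + recordId + ": negative amount");
                return false;
            }
        } catch (NumberFormatException e) {
            System.out.println("Rejected " + recordId + ": amount is not a number");
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        DataEntryValidator v = new DataEntryValidator();
        System.out.println(v.validate("INV-001", "150.00")); // true
        System.out.println(v.validate("INV-001", "150.00")); // false: duplicate
        System.out.println(v.validate("INV-002", "abc"));    // false: not numeric
    }
}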

Risk management is the decision-making process involving considerations of political, social, economic and engineering factors with relevant risk assessments
relating to a potential hazard so as to develop, analyze and compare regulatory
options and to select the optimal regulatory response for safety from that hazard.

Essentially, risk management is the combination of three steps:
 risk evaluation,
 emission and exposure control,
 risk monitoring.
In a healthcare context, risk management has been defined as "a systematic approach used to identify, evaluate, and reduce or eliminate the possibility of an unfavorable deviation from the expected outcome of medical treatment and thus prevent the injury of patients as a result of negligence and the loss of financial assets resulting from such injury."

Risk Management Definitions


“Risk management is an integrated process of delineating specific areas of risk,
developing a comprehensive plan, integrating the plan, and conducting the ongoing
evaluation.”-Dr. P.K. Gupta
“Risk Management is the process of measuring, or assessing risk and then
developing strategies to manage the risk.”-Wikipedia
“Managing the risk can involve taking out insurance against a loss, hedging a loan against interest-rate rises, and protecting an investment against a fall in interest rates.” - Oxford Business Dictionary
“Decisions to accept exposure or to reduce vulnerabilities by either mitigating the risks or applying cost-effective controls.” - Anonymous
The future is largely unknown. Most business decision-making takes place on the
basis of expectations about the future. Making a decision on the basis of
assumptions, expectations, estimates, and forecasts of future events involves taking
risks.
Risk has been described as the “sugar and salt of life”. This implies that risk can
have an upside as well as the downside. People take a risk in order to achieve some
goal they would otherwise not have reached without taking that risk. On the other
hand, risk can mean that some danger or loss may be involved in carrying out an
activity and therefore, care has to be taken to avoid that loss. This is where risk
management is important, in that it can be used to protect against loss or danger
arising from a risky activity.

For proper control and management of risks, as insurers, we should always keep
the following in mind with regard to any project or subject-matter of insurance:
What are the possible sources of loss?
What is the probable impact of a loss should it at all occur?
What should be done when a loss takes place? Should the loss be allowed to grow, or should something be done to minimize it? The question of protecting
salvage in the best possible way and also the question of checking the future
possibility of such events should be considered.
What is the probable expenditure on, or the economy of, loss prevention? (It should be remembered that any extra expenditure on loss prevention is economically justified so long as it is smaller than, or at most equal to, the savings made by way of loss reduction.)
As already mentioned, in insurance the risk is isolated from the whole business
venture and the pure risk portion of it is assumed entirely by a different group of
people of an organization (insurer) in a most technical, expert and economic way.
This is possible only through the proper diagnosis of the risk in matters of finding
out the possible sources of loss and the impact of loss should it at all occur. The
question of minimizing a loss and preventing its future recurrence should also not be lost sight of. Keeping these factors in view raises the question
of properly rating a risk, as this would be the basis of charging a premium or price
for running a risk. In this context of risk management the ‘mathematical valuation
of risk’ is indeed important.
7 steps of risk management are;
1. Establish the context,
2. Identification,
3. Assessment,
4. Potential risk treatments,
5. Create the plan,
6. Implementation,
7. Review and evaluation of the plan.

The risk management process has seven (7) steps, which actually form a cycle.

1. Establish the Context


Establishing the context includes planning the remainder of the process and
mapping out the scope of the exercise, the identity and objectives of stakeholders,
the basis upon which risks will be evaluated and defining a framework for the
process, and agenda for identification and analysis.

2. Identification
After establishing the context, the next step in the process of managing risk is to
identify potential risks. Risks are about events that, when triggered, will cause
problems.

Hence, risk identification can start with the source of problems, or with the
problem itself.

Risk identification requires knowledge of the organization, the market in which it operates, the legal, social, economic, political, and climatic environment in which
it does its business, its financial strengths and weaknesses, its vulnerability to
unplanned losses, the manufacturing processes, and the management systems and
business mechanism by which it operates. Any failure at this stage to identify risk
may cause a major loss for the organization. Risk identification provides the
foundation of risk management.
Identification methods are often built around templates, or the development of templates, for identifying the source, the problem, or the event.

3. Assessment
Once risks have been identified, they must then be assessed as to their potential
severity of loss and to the probability of occurrence. These quantities can be either
simple to measure, in the case of the value of a lost building, or impossible to
know for sure in the case of the probability of an unlikely event occurring.

Therefore, in the assessment process it is critical to make the best-educated guesses possible in order to properly prioritize the implementation of the risk management plan. The fundamental difficulty in risk assessment is determining the rate of occurrence, since statistical information is not available on all kinds of past incidents. Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for intangible assets. Asset valuation is another question that needs to be addressed. Thus, best-educated opinions and available statistics are the
primary sources of information. Nevertheless, a risk assessment should produce
such information for the management of the organization that the primary risks are
easy to understand and that the risk management decisions may be prioritized.
Thus, there have been several theories and attempts to quantify risks.
Numerous different risk formulas exist, but perhaps the most widely accepted formula for risk quantification is the rate of occurrence multiplied by the impact of the event. In business, it is imperative to present the findings of risk assessments in financial terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks in financial terms. The Courtney formula was accepted as the official risk analysis method of US governmental agencies. The formula
proposes the calculation of ALE (Annualized Loss Expectancy) and compares the
expected loss value to the security control implementation costs (Cost-Benefit
Analysis).
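As an illustration of this cost-benefit comparison, here is a minimal worked sketch in Java. The loss value, occurrence rate and control cost are hypothetical figures, not from the source; the calculation follows the standard definition of ALE as the single loss expectancy multiplied by the annualized rate of occurrence.

// Minimal sketch of the ALE-based cost-benefit comparison described above.
// All figures are hypothetical, for illustration only.
public class AleExample {

    // ALE = single loss expectancy (SLE) x annualized rate of occurrence (ARO)
    static double annualizedLossExpectancy(double singleLossExpectancy,
                                           double annualRateOfOccurrence) {
        return singleLossExpectancy * annualRateOfOccurrence;
    }

    public static void main(String[] args) {
        double sle = 50_000.0;        // assumed loss per incident
        double aro = 0.4;             // assumed 0.4 incidents per year
        double controlCost = 8_000.0; // assumed annual cost of the security control

        double ale = annualizedLossExpectancy(sle, aro); // 20,000 per year
        System.out.printf("ALE = %.2f, control cost = %.2f%n", ale, controlCost);

        // The control is economically justified if it costs less than the
        // expected annual loss it prevents.
        System.out.println(controlCost < ale
                ? "Control is cost-justified"
                : "Control costs more than the expected loss");
    }
}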

4. Potential Risk Treatments


Once risks have been identified and assessed, all techniques to manage the risk fall
into one or more of these four major categories:

Risk Transfer
Risk transfer means that the exposed party transfers the whole or part of the losses consequent to a risk exposure to another party for a cost. Insurance contracts
fundamentally involve risk transfers. Apart from the insurance device, there are
certain other techniques by which the risk may be transferred.

Risk Avoidance
Risk avoidance means avoiding the risk or the circumstances which may lead to losses, including not performing an activity that could carry risk. Avoidance may seem the
answer to all risks, but avoiding risks also means losing out on the potential gain
that accepting (retaining) the risk may have allowed. Not entering a business to
avoid the risk of loss also avoids the possibility of earning the profits.

Risk Retention
Risk-retention implies that the losses arising due to a risk exposure shall be
retained or assumed by the party or the organization. Risk retention is generally a deliberate decision for business organizations. Self-insurance and captive insurance are the two methods of retention.

Risk Control
Risk can be controlled either by avoidance or by controlling losses. Avoidance
implies that either a certain loss exposure is not acquired or an existing one is
abandoned. Loss control can be exercised in two ways: loss prevention, which reduces the frequency of losses, and loss reduction, which reduces their severity.

5. Create the Plan


Decide on the combination of methods to be used for each risk. Each risk
management decision should be recorded and approved by the appropriate level of
management.
For example, a risk concerning the image of the organization should have a top management decision behind it, whereas IT management would have the authority to decide on
computer virus risks. The risk management plan should propose applicable and
effective security controls for managing the risks.
A good risk management plan should contain a schedule for control
implementation and responsible persons for those actions. The risk management
concept is old but is still not very effectively measured. Example: an observed
high risk of computer viruses could be mitigated by acquiring and implementing
antivirus software.

6. Implementation
Follow all of the planned methods for mitigating the effect of the risks. Purchase
insurance policies for the risks that have been decided to be transferred to an
insurer, avoid all risks that can be avoided without sacrificing the entity’s goals,
reduce others, and retain the rest.

7. Review and Evaluation of the Plan


Initial risk management plans will never be perfect. Practice, experience and actual
loss results will necessitate changes in the plan and contribute information to allow
possible different decisions to be made in dealing with the risks being faced. Risk
analysis results and management plans should be updated periodically. There are
two primary reasons for this: to evaluate whether the previously selected security controls are still applicable and effective, and to evaluate possible changes in the risk levels of the business environment. Information risks, for example, illustrate how rapidly the business environment can change.
3. With the aid of examples, outline the types of data cascading in an organization's information systems.

Entity relationships often depend on the existence of another entity — for example, the Person–Address relationship. Without the Person, the Address
entity doesn't have any meaning of its own. When we delete the Person entity,
our Address entity should also get deleted. Cascading is the way to achieve this.
When we perform some action on the target entity, the same action will be
applied to the associated entity.

All Java Persistence API (JPA)-specific cascade operations are represented by the
javax.persistence.CascadeType enum containing entries:

 ALL
 PERSIST
 MERGE
 REMOVE
 REFRESH
 DETACH

Hibernate supports three additional Cascade Types along with those specified by
JPA. These Hibernate-specific Cascade Types are available in
org.hibernate.annotations.CascadeType:

 REPLICATE
 SAVE_UPDATE
 LOCK

CascadeType.ALL propagates all operations, including Hibernate-specific ones, from a parent to a child entity.
Example
@Entity
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private int id;

    private String name;

    @OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
    private List<Address> addresses;

    // getters and setters omitted for brevity
}

Note that in OneToMany associations, we've mentioned the cascade type in the annotation.
Now, let's see the associated entity Address:
@Entity
public class Address {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private int id;

    private String street;
    private int houseNumber;
    private String city;
    private int zipCode;

    @ManyToOne(fetch = FetchType.LAZY)
    private Person person;

    // getters and setters omitted for brevity
}

The persist operation makes a transient instance persistent. CascadeType.PERSIST propagates the persist operation from a parent to a child entity. When we save the person entity, the address entity will also get saved.
Let's see the test case for a persist operation:
@Test
public void whenParentSavedThenChildSaved() {
    Person person = new Person();
    Address address = new Address();
    address.setPerson(person);
    person.setAddresses(Arrays.asList(address));
    session.persist(person);
    session.flush();
    session.clear();
}
When we run the above test case, we'll see the following SQL:
Hibernate: insert into Person (name, id) values (?, ?)
Hibernate: insert into Address (city, houseNumber, person_id, street, zipCode, id) values (?, ?, ?, ?, ?, ?)
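The remaining test cases use buildPerson and buildAddress helpers that are not shown in this write-up. A minimal sketch of what they are assumed to look like, consistent with how the tests use them (the field values are hypothetical; houseNumber 23 matches the later REFRESH assertion):

private Person buildPerson(String name) {
    // Hypothetical helper: creates a transient Person with the given name.
    Person person = new Person();
    person.setName(name);
    return person;
}

private Address buildAddress(Person person) {
    // Hypothetical helper: creates a transient Address linked to the person.
    Address address = new Address();
    address.setStreet("Main Street");
    address.setHouseNumber(23);
    address.setCity("Harare");
    address.setZipCode(1234);
    address.setPerson(person);
    return address;
}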

The merge operation copies the state of the given object onto the persistent
object with the same identifier. CascadeType.MERGE propagates the merge
operation from a parent to a child entity.
Let's test the merge operation:
@Test
public void whenParentSavedThenMerged() {
    int addressId;
    Person person = buildPerson("devender");
    Address address = buildAddress(person);
    person.setAddresses(Arrays.asList(address));
    session.persist(person);
    session.flush();
    addressId = address.getId();
    session.clear();

    Address savedAddressEntity = session.find(Address.class, addressId);
    Person savedPersonEntity = savedAddressEntity.getPerson();
    savedPersonEntity.setName("devender kumar");
    savedAddressEntity.setHouseNumber(24);
    session.merge(savedPersonEntity);
    session.flush();
}

When we run the above test case, the merge operation generates the following
SQL:
Hibernate: select address0_.id as id1_0_0_, address0_.city as city2_0_0_, address0_.houseNumber as houseNum3_0_0_, address0_.person_id as person_i6_0_0_, address0_.street as street4_0_0_, address0_.zipCode as zipCode5_0_0_ from Address address0_ where address0_.id=?
Hibernate: select person0_.id as id1_1_0_, person0_.name as name2_1_0_ from Person person0_ where person0_.id=?
Hibernate: update Address set city=?, houseNumber=?, person_id=?, street=?, zipCode=? where id=?
Hibernate: update Person set name=? where id=?

Here, we can see that the merge operation first loads both the address and person entities and then updates both as a result of CascadeType.MERGE.
As the name suggests, the remove operation removes the row corresponding to
the entity from the database and also from the persistent context.
CascadeType.REMOVE propagates the remove operation from parent to child
entity. Similar to JPA's CascadeType.REMOVE, we have CascadeType.DELETE,
which is specific to Hibernate. There is no difference between the two.
Now, it's time to test CascadeType.REMOVE:
@Test
public void whenParentRemovedThenChildRemoved() {
    int personId;
    Person person = buildPerson("devender");
    Address address = buildAddress(person);
    person.setAddresses(Arrays.asList(address));
    session.persist(person);
    session.flush();
    personId = person.getId();
    session.clear();

    Person savedPersonEntity = session.find(Person.class, personId);
    session.remove(savedPersonEntity);
    session.flush();
}

When we run the above test case, we'll see the following SQL:
Hibernate: delete from Address where id=?
Hibernate: delete from Person where id=?

The address associated with the person also got removed as a result of CascadeType.REMOVE.

The detach operation removes the entity from the persistent context. When we
use CascadeType.DETACH, the child entity will also get removed from the
persistent context.
Let's see it in action:
@Test
public void whenParentDetachedThenChildDetached() {
    Person person = buildPerson("devender");
    Address address = buildAddress(person);
    person.setAddresses(Arrays.asList(address));
    session.persist(person);
    session.flush();

    assertThat(session.contains(person)).isTrue();
    assertThat(session.contains(address)).isTrue();

    session.detach(person);
    assertThat(session.contains(person)).isFalse();
    assertThat(session.contains(address)).isFalse();
}
Here, we can see that after detaching person, neither person nor address exists in
the persistent context.

Unintuitively, CascadeType.LOCK re-attaches the entity and its associated child entity to the persistent context again.
Let's see the test case to understand CascadeType.LOCK:
@Test
public void whenDetachedAndLockedThenBothReattached() {
    Person person = buildPerson("devender");
    Address address = buildAddress(person);
    person.setAddresses(Arrays.asList(address));
    session.persist(person);
    session.flush();

    assertThat(session.contains(person)).isTrue();
    assertThat(session.contains(address)).isTrue();

    session.detach(person);
    assertThat(session.contains(person)).isFalse();
    assertThat(session.contains(address)).isFalse();

    session.unwrap(Session.class)
      .buildLockRequest(new LockOptions(LockMode.NONE))
      .lock(person);

    assertThat(session.contains(person)).isTrue();
    assertThat(session.contains(address)).isTrue();
}

As we can see, when using CascadeType.LOCK, we attached the entity person and
its associated address back to the persistent context.

Refresh operations re-read the value of a given instance from the database. In
some cases, we may change an instance after persisting in the database, but later
we need to undo those changes.
In that kind of scenario, this may be useful. When we use this operation with
CascadeType.REFRESH, the child entity also gets reloaded from the database
whenever the parent entity is refreshed.
For better understanding, let's see a test case for CascadeType.REFRESH:
@Test
public void whenParentRefreshedThenChildRefreshed() {
    Person person = buildPerson("devender");
    Address address = buildAddress(person);
    person.setAddresses(Arrays.asList(address));
    session.persist(person);
    session.flush();

    person.setName("Devender Kumar");
    address.setHouseNumber(24);
    session.refresh(person);

    assertThat(person.getName()).isEqualTo("devender");
    assertThat(address.getHouseNumber()).isEqualTo(23);
}

Here, we made some changes in the saved entities person and address. When we
refresh the person entity, the address also gets refreshed.

The replicate operation is used when we have more than one data source, and we
want the data in sync. With CascadeType.REPLICATE, a sync operation also
propagates to child entities whenever performed on the parent entity.
Now, let's test CascadeType.REPLICATE:
@Test
public void whenParentReplicatedThenChildReplicated() {
    Person person = buildPerson("devender");
    person.setId(2);
    Address address = buildAddress(person);
    address.setId(2);
    person.setAddresses(Arrays.asList(address));
    session.unwrap(Session.class).replicate(person, ReplicationMode.OVERWRITE);
    session.flush();

    assertThat(person.getId()).isEqualTo(2);
    assertThat(address.getId()).isEqualTo(2);
}

Because of CascadeType.REPLICATE, when we replicate the person entity, its
associated address also gets replicated with the identifier we set.
CascadeType.SAVE_UPDATE propagates the same operation to the associated
child entity. It's useful when we use Hibernate-specific operations like save,
update, and saveOrUpdate.
Let's see CascadeType.SAVE_UPDATE in action:
@Test
public void whenParentSavedThenChildSaved() {
    Person person = buildPerson("devender");
    Address address = buildAddress(person);
    person.setAddresses(Arrays.asList(address));
    session.saveOrUpdate(person);
    session.flush();
}
Because of CascadeType.SAVE_UPDATE, when we run the above test case, then
we can see that the person and address both got saved. Here's the resulting SQL:
Hibernate: insert into Person (name, id) values (?, ?)
Hibernate: insert into Address (city, houseNumber, person_id, street, zipCode, id) values (?, ?, ?, ?, ?, ?)

The Cascade Model is not a design-process model, a human-computer interaction model, an information retrieval model, an interaction in information retrieval
model, nor a Web site design or other specific technology design model. The
Cascade Model is a design model for operational online information retrieval
systems. The model emphasizes the many design layers that should be considered
in relation to each other in the process of designing and implementing an
automated information system.
The model describes the layers in the design and is labeled "Cascade," because
the layers interact in a cascading manner. Design features of earlier layers
inevitably affect the success of later design features. Later features, if poorly
designed, can block the effectiveness of the earlier layers. Either way, without
integrated good design across all layers, and constantly considering the layers in
relation to each other in design and development, the resulting information
system is likely to be poor, or at least sub-optimal. In the new world of
information systems--those associated with digital libraries and databases on the
Internet, particularly--we need a unified model, for design purposes, of the
underlying network, hardware, information, database structures, search
capabilities, interface design and social context.
In conclusion, changing even seemingly small things at one information system
design layer can have huge implications for the other layers. Since the layers
themselves interact in a cascade from system and information content chosen all
the way through interface design and characteristics of use, the design of such
systems must also manifest mutual knowledge between those layers. In the
development of digital libraries, the layers are proliferating and the potential for
conflict between layers multiplying (Kramer et al. 1997; Lynch & Garcia-Molina,
1995). It is not uncommon nowadays for information systems to be designed
collectively by several different individuals or groups, some of which either never
talk to each other or talk past each other. Now that so much computing power
and sophistication is entering the information world, especially in the
development of digital libraries, deep expertise is developing in each of the
layers displayed. One person cannot know all that is needed to put such a system
together effectively.
Unfortunately, people working at each layer may do an excellent job in their own
area of expertise but may fail to recognize or influence the design issues
interacting between their layer and the other layers. As a consequence, individual
layers may function very well, but work at cross purposes with the functions of
other layers.
Digital libraries cannot be fully effective as information sources for users until the
entire design process is done in a manner that involves genuine conceptual and
practical coordination among the people working on the system layers. The
information content, its database structure, and retrievable elements, should not
be selected without full consultation with experts in the subject domain and in
the information seeking behavior and context of use of the proposed digital
library information.
The interface design should meet not only general criteria of good interface
design, but should also draw on expertise in information system interface design.
That expertise will include understanding of various options in the provision of
search capabilities for the user, including front-ends, as well as understanding of
the underlying indexing and metadata structure, and how that structure can best
be represented and used in the interface. In sum, all layers of the system for
accessing and displaying digital library information should be simultaneously
designed with knowledge of what is going forward in the other layers. It takes
only one wrongly placed layer to thwart all the clever work done at every other
layer. For effective information retrieval to occur, all layers of a system must be
designed to work together, and the people doing the designing must genuinely
communicate.
4. Outline the risks in decision making, providing mitigation strategies.

As we well know, there are risks inherent in almost every major business decision.
Even if decision-makers opt out of an opportunity because it seems too risky, that
decision in itself can still be hazardous. Being too timid could lead to things like
new markets not being pursued, new products not being developed or allowing
competitors to gain the advantage. Therefore, it's crucial to have a detailed, data-
backed strategy in place to measure and reduce risk.
Risk mitigation strategies are designed to eliminate, reduce or control the impact
of known risks intrinsic with a specified undertaking, prior to any injury or fiasco.
With these strategies in place, risks can be foreseen and dealt with. Fortunately,
today’s technology allows businesses to formulate their risk mitigation strategies
to the greatest capacity yet. While every organization needs to identify the
strategies that are most appropriate for them, here are a few simple strategies to
perfect the process.
The four types of risk mitigating strategies include risk avoidance, acceptance,
transference and limitation.
Avoid: In general, risks should be avoided that involve a high probability impact
for both financial loss and damage.
Transfer: Risks that may have a low probability for taking place but would have a
large financial impact should be mitigated by being shared or transferred, e.g. by
purchasing insurance, forming a partnership, or outsourcing.
Accept: With some risks, the expense involved in mitigating the risk is more than
the cost of tolerating the risk. In this situation, the risks should be accepted and
carefully monitored.
Limit: The most common mitigation strategy is risk limitation, i.e. businesses take
some type of action to address a perceived risk and regulate their exposure. Risk
limitation usually employs some risk acceptance and some risk avoidance.
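To make the selection between these four strategies concrete, here is a minimal sketch in Java. The probability and impact thresholds and the mapping rules are illustrative assumptions, not from the source; they simply show how a decision-maker might map a risk's estimated probability and financial impact to one of the strategies above.

// Illustrative only: thresholds and rules are assumptions used to show how
// the four strategies (avoid, transfer, accept, limit) can be selected from
// a risk's estimated probability and financial impact.
public class MitigationSelector {

    enum Strategy { AVOID, TRANSFER, ACCEPT, LIMIT }

    static Strategy select(double probability, double impact) {
        boolean highProbability = probability >= 0.5;    // assumed threshold
        boolean highImpact = impact >= 100_000.0;        // assumed threshold

        if (highProbability && highImpact) return Strategy.AVOID;
        if (!highProbability && highImpact) return Strategy.TRANSFER; // e.g. insurance
        if (!highProbability && !highImpact) return Strategy.ACCEPT;  // tolerate and monitor
        return Strategy.LIMIT;                                        // reduce exposure
    }

    public static void main(String[] args) {
        System.out.println(select(0.7, 500_000)); // AVOID
        System.out.println(select(0.1, 500_000)); // TRANSFER
        System.out.println(select(0.1, 1_000));   // ACCEPT
        System.out.println(select(0.7, 1_000));   // LIMIT
    }
}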
References:
[01] Kenneth C. Laudon & Jane P. Laudon (2016), "Management Information Systems: Managing the Digital Firm", Global Edition, Fourteenth Edition, pp. 121-122.
[02] Josh Bendickson, Jeff Muldoon, Eric W. Liguori & Phillip E. Davis (2016), "Agency theory: background and epistemology", Journal of Management History, Vol. 22, Iss. 4, pp. 437-449.
[03] Mary S. Logan (2000), "Using Agency Theory to Design Successful Outsourcing Relationships", The International Journal of Logistics Management, Vol. 11, Iss. 2, pp. 21-32.
[04] Naomi Wangari Mwai, Joseph Kiplang'at & David Gichoya (2014), "Application of resource dependency theory and transaction cost theory in analysing outsourcing information communication services decisions: a case of selected public university libraries in Kenya", The Electronic Library, Vol. 32, Iss. 6, pp. 786-805.
[05] Ogan M. Yigitbasioglu (2010), "Information sharing with key suppliers: a transaction cost theory perspective", International Journal of Physical Distribution & Logistics Management, Vol. 40, Iss. 7, pp. 550-578.
Lynch, C., & Garcia-Molina, H. (1995) Interoperability, scaling, and the digital
libraries research agenda: A report on the May 18-19, 1995 IITA Digital Libraries
Workshop, August 22, 1995. Online: http://www-
diglib.stanford.edu/diglib/pub/reports/iita-dlw/main.html [Accessed: April 30,
2001]
Markey, K. (1984) Subject searching in library catalogs: before and after the
introduction of online catalogs. Dublin, OH: OCLC Online Computer Library Center.
Matthews, J.R., Lawrence, G.S., & Ferguson, D.K. (Eds.) (1983) Using online
catalogs: a nationwide survey: a report of a study sponsored by the Council on
Library Resources. New York: Neal-Schuman.
Miller, J. (Ed.) (1994) Sears list of subject headings. 15th ed. New York: Wilson.
Norman, D.A. & Draper, S.W. (1986) User centered system design: New
perspectives on human-computer interaction. London: Lawrence Erlbaum
Associates.
Payette, S.D., & Rieger, O.Y. (1998) Supporting scholarly inquiry; Incorporating
users in the design of the digital library. Journal of Academic Librarianship, 24 (2):
121-129.
Petersen, T., (dir.) (1994) Art & architecture thesaurus. 2nd ed. New York: Oxford.
Also: http://www.getty.edu/research/tools/vocabulary/ [Accessed: April 27,
2001]
Robertson, S.E. (1977) Theories and models in information retrieval. Journal of
Documentation, 33 (2): 126-148.
Rowley, J. & Farrow, J. (2000) Organizing knowledge: An introduction to managing
access to information. 3rd ed. Aldershot, Hampshire, England: Gower.
Salton, G. & McGill, J.M. (1983) Introduction to modern information retrieval. New
York: McGraw-Hill.
Saracevic, T. (1996) Modeling interaction in information retrieval (IR): A review
and proposal. Proceedings of the 59th Annual Meeting of the American Society for
Information Science, 33: pp. 3-9.
Saracevic, T. (1997) The stratified model of information retrieval interaction:
Extension and application. In Schwartz, C. & Rorvig, M. (Eds.), Proceedings of the
60th ASIS Annual Meeting, 34, (pp. 313-327). Medford, NJ: Information Today.
Saracevic, T., & Kantor, P. (1988) A study of information seeking and retrieving. II.
Users, questions, and effectiveness. Journal of the American Society for
Information Science, 39 (3): 177-196.
Schatz, B.R., Johnson, E.H., Cochrane, P.A., & Chen, H. (1996) Interactive term
suggestion for users of digital libraries: using subject thesauri and co-occurrence
lists for information retrieval. In Proceedings of the 1st ACM International
Conference on Digital Libraries. (pp. 126-133). New York: Association for
Computing Machinery.
Shneiderman, B. (1998) Designing the user interface: Strategies for effective
human-computer interaction. 3rd ed. Reading, Mass.: Addison Wesley Longman.
Shor, R. (1991) A uniform graphics front-end. Computers in Libraries, 11 (11): 48-
51.
Siegfried, S., Bates, M.J., & Wilde, D.N. (1993) A profile of end-user searching
behavior by humanities scholars: the Getty Online Searching Project report no.
2. Journal of the American Society for Information Science, 44 (5): 273-291.
Spink, A. (1997) Information science: A third feedback framework. Journal of the
American Society for Information Science, 48 (8): 728-740.
Syan, C.S. & Menon, U. (1994) Concurrent engineering: Concepts, implementation
and practice. London: Chapman & Hall.
Taube, M. (1953- ) Studies in coordinate indexing. Washington: Documentation,
Inc.
Text Encoding Initiative. [Home page] http://www.tei-c.org/ [Accessed: April 27,
2001]
Vickery, B., & Vickery, A. (1993) Online search interface design. Journal of
Documentation, 49 (2): 103-187.
Weibel, S.L., & Lagoze, C. (1997) An element set to support resource
discovery. International Journal on Digital Libraries, 1 (2): 176-186. Also:
http://dublincore.org/ [Accessed April 27, 2001]
Zhang, J. & Fine, S. (1996) The effect of human behavior on the design of an
information retrieval system interface. International Information and Library
Review, 28 (3): 249-260.
Zhu, Z. (2001) Towards an integrating programme for information systems design:
An Oriental case. International Journal of Information Management, 21 (1): 69-90
