
Solutions for improving data extraction from virtual data warehouses

Adela BÂRA
Economic Informatics Department, Academy of Economic Studies
Bucharest, ROMANIA
bara.adela@ie.ase.ro

Abstract: Data warehousing project teams are always confronted with low performance in data extraction. In a Business Intelligence environment this problem can be critical, because data that is displayed too slowly may no longer be timely enough for decision making, so the project can be compromised. In this case there are several techniques that can be applied to reduce the queries' execution time and to improve the performance of the BI analyses and reports. This paper presents some of the techniques that can be applied to reduce the cost of execution and improve query performance in BI systems.
Keywords: Virtual data warehouse, Data extraction, SQL tuning, Query performance

1 Introduction
Business Intelligence (BI) systems manipulate data from various organizational sources like files, databases, applications or Enterprise Resource Planning (ERP) systems. Usually, data from these sources is extracted, transformed and loaded into a primary target area, called the staging area, which is managed by a relational database management system. Then, in order to build the analytical reports needed in a BI environment, a second ETL (extract, transform and load) process is applied to load data into a data warehouse (DW). There are two ways to implement a DW: to store data in a separate repository, which is the traditional method, or to extract data directly from the relational database that manages the staging repository. The latter solution is usually applied if the ERP system is not yet fully implemented and the amount of data is small enough that the queries can be run in a reasonable time (less than 30 minutes). The implementation of a virtual data warehouse is faster and requires a lower budget than a traditional data warehouse. This method can also be applied in a prototyping phase; after the validation of the main functionalities, data can be extracted and loaded into a traditional data warehouse. It is a very good practice to use the staging area already built for the second ETL process to load data into the data warehouse.
This paper presents some aspects of the implementation of a virtual data warehouse in a national company where an ERP was recently set up and a set of BI reports had to be developed quickly. Based on a set of views that collect data from the ERP system, a virtual data warehouse built on an ETL process was designed. The database management system (DBMS) is Oracle Database 10g Release 2 and the reports were developed in Oracle Business Intelligence Suite (OBI). After the development of the analytical BI reports, the project team ran several tests in a real organizational environment and measured the performance of the system. The main problem was the high cost of execution: these reports consumed over 80% of the total resources allocated for the ERP and BI systems. Also, the critical moment when the system was breaking down was at the end of the month, when all transactions from the functional modules were posted to the General Ledger module. After testing all parameters and factors, the team concluded that the major problem was in the data extraction from the relational database. So, in order to improve the query performance, some of the main optimization techniques are considered.
2 An overview of the SQL execution process

The query performance depends, on one side, on the technology and the DBMS that are used and, on the other side, on the way queries are executed and data are processed. So, first let's take a look at the way Oracle Database manages the queries. There are two memory structures responsible for SQL processing: the System Global Area (SGA), a shared memory area that contains data and control information for the instance, and the Program Global Area (PGA), a private memory region containing data and control information for each server process.

The main component in the SGA that affects the query optimization process is the shared pool area, which caches various SQL constructs that can be shared among users and contains the shared SQL areas, the data dictionary cache, and the fully parsed or compiled representations of PL/SQL blocks. A single shared SQL area is used by multiple users that issue the same SQL statement. The size of the shared pool affects the number of disk reads. When a SQL statement is executed, the server process checks the dictionary cache for information on object ownership, location and privileges; if it is not present, this information is loaded into the dictionary cache through a disk read. Disk reads and parsing are expensive operations, so it is preferable that repeated executions of the same statement find the required information in memory, but this requires a large amount of memory. In conclusion, a properly sized shared pool leads to better SQL management by reducing disk reads, sharing SQL statements, reducing hard parsing, saving CPU resources and improving scalability.
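For example, two executions of the same report query that differ only in a literal value cannot share a cursor, while a bind variable lets them reuse the parsed statement. A minimal SQL*Plus sketch (TEST_A is one of the test tables defined in section 3.2; the bind variable name is ours):

-- each distinct literal is a new statement and may force a hard parse
select sum(DEBIT) from TEST_A where DIVISION = 'h.AFO div';
select sum(DEBIT) from TEST_A where DIVISION = 'i.AFO corp';

-- a bind variable lets repeated executions reuse one cursor in the shared pool
variable v_div varchar2(50)
exec :v_div := 'h.AFO div'
select sum(DEBIT) from TEST_A where DIVISION = :v_div;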
The PGA is a non-shared memory area allocated for each server process, which can read and write to it. Oracle Database allocates a PGA when a user connects to an Oracle database. So, a PGA area contains information about the user session that initiated it, the cursor that is executed in the PGA and the SQL work areas. The main components that affect query execution are the SQL work areas. A SQL query is executed in a SQL work area based on an execution plan and algorithm: hash, sort or merge. Thus, the SQL work area allocates a hash area, a sort area or a merge area in which the query is executed. These algorithms are applied depending on the SQL operators; for example, a sort operator uses a work area called the sort area to perform the in-memory sort of a set of rows, while a hash-join operator uses a work area called the hash area to build a hash table from its left input. If the amount of data to be processed by these two operators does not fit into a work area, then the input data is divided into smaller pieces. This allows some data pieces to be processed in memory while the rest are spilled to temporary disk storage to be processed later, but the response time increases and the query performance is affected. The size of a work area can be controlled and tuned; in general, bigger work areas can significantly improve the performance of a particular operator, at the cost of higher memory consumption. The best solution that can be applied is to use Automated SQL Execution Memory (PGA) Management, which provides an automatic mode for allocating memory for SQL work areas. Thus, the work areas used by memory-intensive operators (sorts and hash-joins) can be automatically and dynamically adjusted. This feature of Oracle Database offers several performance and scalability benefits for the analytical report workloads used in a BI environment or for mixed workloads with complex queries. The overall system performance is maximized, and the available memory is allocated more efficiently among queries to optimize both throughput and response time [1].
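A sketch of how this feature is typically enabled; the parameter names are Oracle's, while the target value below is only an arbitrary example, not a recommendation for the system described here:

-- let Oracle size the SQL work areas automatically within a total PGA target
alter system set workarea_size_policy = auto;
alter system set pga_aggregate_target = 1G;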
Another important component of Oracle Database is the query optimizer, which creates the execution plan for a SQL statement. The execution plan can greatly affect the execution time of a SQL query, and it consists in a series of operations that are performed in sequence to execute the specified statement. The query optimizer considers many factors related to the objects referenced and the conditions specified in the statement, such as: statistics gathered for the system related to the I/O operations, CPU resources and schema objects; information in the data dictionary; conditions in the WHERE clause; execution hints supplied by the developers. Based on the evaluation of these factors, the query optimizer decides which is the most efficient path to access data and how to join tables (full-scan, hash, sort and merge algorithms). In conclusion, the execution plan contains all the information about a SQL statement's execution, and in order to improve query performance we have to analyze this plan and try to eliminate some of the factors that affect the performance.
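The execution plan of a statement can be inspected with EXPLAIN PLAN and the DBMS_XPLAN package; a minimal sketch, using one of the test tables defined later in section 3.2:

explain plan for
select sum(DEBIT) from TEST_A where DIVISION = 'h.AFO div';

-- the plan output lists each operation with its estimated cost,
-- cardinality and the access path chosen by the optimizer
select * from table(dbms_xplan.display);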
3 Optimization solutions
3.1 Materialized views
To reduce the multiple joins between relational tables in a virtual data warehouse, the first solution was to rewrite the views and build materialized views and semi-aggregate tables on the staging area. Data sources are loaded into these tables by the ETL (extract, transform and load) process periodically, for example at the end of the week or at the end of the month, after posting to the General Ledger. A benefit of this solution is that it eliminates the joins from the views, and the ETL process can be reused to load data into a future data warehouse that will be implemented after the prototype validation.
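As an illustration, such a semi-aggregate materialized view might look like the following sketch; the view name, the source table (the balance_r fact used later in the paper) and the refresh policy are our assumptions, not the project's actual definitions:

create materialized view mv_balance
build immediate
refresh complete on demand
as
select PERIOD, ACCOUNT, DIVISION, SECTOR, UNIT,
       sum(DEBIT) total_debit, sum(CREDIT) total_credit
from balance_r
group by PERIOD, ACCOUNT, DIVISION, SECTOR, UNIT;

Refreshing such a view at the end of the week or month, after posting to the General Ledger, matches the periodic loading schedule described above.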
After rewriting the queries in terms of materialized views, the project team re-tested the system under real conditions. The time for data extraction was again too long, and the cost of execution consumed over 50% of the total resources. So, some optimization techniques must be applied on these materialized views and tables. These techniques are: table partitioning, indexing, using hints, and using analytical functions instead of data aggregation in some reports.
3.2 Partitioning
The main objectives of the partitioning technique are to decrease disk activity, to limit the amount of data to be examined or operated on, and to enable the parallel execution required to perform queries against virtual data warehouses. Tables are partitioned using a partitioning key, a set of columns whose values determine the partition in which a given row will be stored. Oracle Database 10g, on which our ERP is implemented, provides three techniques for partitioning tables [1]:
• Range Partitioning - specified by a range of values of the partitioning key;
• List Partitioning - specified by a list of values of the partitioning key;
• Hash Partitioning - a hash algorithm is applied to the partitioning key to determine the partition for a given row.
Subpartitioning techniques can also be applied: tables are first partitioned by range/list/hash and then each partition is divided into subpartitions:
• Composite Range-Hash Partitioning - a combination of the Range and Hash partitioning techniques, in which a table is first range-partitioned and then each individual range partition is further subpartitioned using the hash partitioning technique;
• Composite Range-List Partitioning - a combination of the Range and List partitioning techniques, in which a table is first range-partitioned and then each individual range partition is further subpartitioned using the list partitioning technique.
• Index-organized tables can be partitioned by range, list, or hash.
In our case we considered evaluating each type of partitioning technique and choosing the best method that can improve the queries' performance. Some of our research can also be found in [2] and [3].
For the loading process we created two tables based on the main table and compared the execution cost obtained by applying the same query on them.
The first table, TEST_A, contained un-partitioned data and is the target table for an ETL process. It counts 100000 rows and its structure is shown below in the scripts. The second table, TEST_B, is range-partitioned on the column T_DATE, which refers to the date of the transaction. This table has four partitions, as you can observe from the script below:

create table test_b
( T_DATE date not null,
  PERIOD varchar2(15) not null,
  DEBIT number,
  CREDIT number,
  ACCOUNT varchar2(25),
  DIVISION varchar2(50),
  SECTOR varchar2(100),
  UNIT varchar2(100))
partition by range (T_DATE)
(partition QT1 values less than (to_date('01-APR-2009','dd-mon-yyyy')),
 partition QT2 values less than (to_date('01-JUL-2009','dd-mon-yyyy')),
 partition QT3 values less than (to_date('01-OCT-2009','dd-mon-yyyy')),
 partition QT4 values less than (to_date('01-JAN-2010','dd-mon-yyyy')));

Then we created the third table, TEST_C, which is partitioned the same way and also contains, for each range partition, four list subpartitions on the column DIVISION, which is heavily used for data aggregation in our analytical reports. The script is shown below:

create table TEST_C
( T_DATE date not null,
  PERIOD varchar2(15) not null,
  DEBIT number,
  CREDIT number,
  ACCOUNT varchar2(25),
  DIVISION varchar2(50),
  SECTOR varchar2(100),
  UNIT varchar2(100))
partition by range (T_DATE)
subpartition by list (DIVISION)
(partition QT1 values less than (to_date('01-APR-2009','dd-mon-yyyy'))
 (subpartition QT1_OP values ('a.MTN','b.CTM','c.TRS','d.WOD','e.DMA'),
  subpartition QT1_GA values ('f.GA op','g.GA corp'),
  subpartition QT1_AFO values ('h.AFO div','i.AFO corp'),
  subpartition QT1_EXT values ('j.EXT','k.Imp')),
 partition QT2 values less than (to_date('01-JUL-2009','dd-mon-yyyy'))
 (subpartition QT2_OP values ('a.MTN','b.CTM','c.TRS','d.WOD','e.DMA'),
  subpartition QT2_GA values ('f.GA op','g.GA corp'),
  subpartition QT2_AFO values ('h.AFO div','i.AFO corp'),
  subpartition QT2_EXT values ('j.EXT','k.Imp')),
 partition QT3 values less than (to_date('01-OCT-2009','dd-mon-yyyy'))
 (subpartition QT3_OP values ('a.MTN','b.CTM','c.TRS','d.WOD','e.DMA'),
  subpartition QT3_GA values ('f.GA op','g.GA corp'),
  subpartition QT3_AFO values ('h.AFO div','i.AFO corp'),
  subpartition QT3_EXT values ('j.EXT','k.Imp')),
 partition QT4 values less than (to_date('01-JAN-2010','dd-mon-yyyy'))
 (subpartition QT4_OP values ('a.MTN','b.CTM','c.TRS','d.WOD','e.DMA'),
  subpartition QT4_GA values ('f.GA op','g.GA corp'),
  subpartition QT4_AFO values ('h.AFO div','i.AFO corp'),
  subpartition QT4_EXT values ('j.EXT','k.Imp')));
After loading data into these two partitioned tables we gathered statistics with the DBMS_STATS package. Analyzing the decision support reports, we chose a subset of queries that are always performed and that are relevant for testing the optimization techniques. We ran these queries on each test table (A, B and C) and compared the results, shown in Table 1.
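A sketch of this statistics gathering step (gathering for the current schema; the parameter values are our choices):

begin
  -- cascade => true also gathers statistics on the tables' indexes
  dbms_stats.gather_table_stats(ownname => user,
                                tabname => 'TEST_B',
                                cascade => true);
  dbms_stats.gather_table_stats(ownname => user,
                                tabname => 'TEST_C',
                                cascade => true);
end;
/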
In conclusion, the best technique in our case is to use table C instead of table A or table B, which means that partitioning by range on T_DATE and then by list on DIVISION (of type VARCHAR2) is the most efficient method. We also obtained better results with table B, partitioned by range on T_DATE, than with the non-partitioned table A.
Table 1. Comparative analysis results for simple queries (execution costs; TEST_A - not partitioned; TEST_B - partitioned by range on T_DATE; TEST_C - partitioned by range on T_DATE with four list subpartitions on DIVISION)

QUERY | TEST_A | TEST_B, no partition clause | TEST_B, partition (QT1) | TEST_C, no partition clause | TEST_C, partition (QT1) | TEST_C, subpartition (QT1_AFO)
select * from TEST_ | 170 | 184 | - | 346 | - | -
... where extract(month from T_date) = 1 | 183 | 197 | 91 | 357 | 172 | 172
... and division = 'h.AFO divizii' | 173 | 199 | 91 | 25 | 12 | 172
select sum(debit) TD, sum(credit) TC from TEST_ where extract(month from t_date) = 1 and division = 'h.AFO divizii' | 173 | 199 | 91 | 25 | 12 | 172
... and unit = 'MTN' | 173 | 199 | 91 | 350 | 172 | 172

Note: the lowest cost in each row marks the best execution variant for that query.

3.3 Using hints and indexes
When a SQL statement is executed, the query optimizer determines the most efficient execution plan after considering many factors related to the objects referenced and the conditions specified in the query. The optimizer estimates the cost of each potential execution plan based on statistics in the data dictionary for the data distribution and storage characteristics of the tables, indexes and partitions accessed by the statement, and it evaluates the execution cost. This is an estimated value depending on the resources used to execute the statement, which include I/O, CPU and memory [1]. This evaluation is an important factor in the processing of any SQL statement and can greatly affect execution time.
We can override the execution plan of the query optimizer with hints inserted in the SQL statement. A SQL statement can be executed in many different ways, such as full table scans, index scans, nested loops, hash joins and sort merge joins. We can set the parameters for the query optimizer mode depending on our goal. For BI systems, time is one of the most important factors, and we should optimize a statement with the goal of best response time. To set the goal of the query optimizer we can use one of the hints that can override the OPTIMIZER_MODE initialization parameter for a particular SQL statement [1].
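For example, a response-time goal can be requested per statement with the FIRST_ROWS(n) hint, while ALL_ROWS optimizes for overall throughput; a sketch on our test data (the filter value is only illustrative):

-- favour returning the first rows quickly, e.g. for an interactive report page
select /*+ FIRST_ROWS(100) */ *
from TEST_A where DIVISION = 'h.AFO div';

-- favour the lowest total resource usage for the full result set
select /*+ ALL_ROWS */ *
from TEST_A where DIVISION = 'h.AFO div';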
The optimizer first determines whether two or more of the tables being joined have UNIQUE and PRIMARY KEY constraints and places these tables first in the join order. The optimizer then optimizes the join of the remaining set of tables and determines the cost of a join depending on the following methods:
• Hash joins are used for joining large data sets when the tables are related by an equality join condition. The optimizer uses the smaller of the two tables or data sources to build a hash table on the join key in memory, and then it scans the larger table to find the joined rows. This method is best used when the smaller table fits in available memory; the cost is then limited to a single read pass over the data for the two tables.
• Nested loop joins are useful when small subsets of data are being joined and the join condition is an efficient way of accessing the second table.
• Sort merge joins can be used to join rows from two independent sources. Sort merge joins can perform better than hash joins if the row sources are already sorted and a sort operation does not have to be done.
We compared these techniques using hints in the SELECT clause and, based on the results in Table 2, we conclude that the sort merge join is the most efficient method when the tables are indexed on the join column, for each type of table: non-partitioned, partitioned by range, and partitioned by range and subpartitioned by list.
Table 2. Comparative analysis results using hints (execution costs; table variants as in Table 1)

QUERY | TEST_A | TEST_B, no partition clause | TEST_B, partition (QT1) | TEST_C, no partition clause | TEST_C, partition (QT1) | TEST_C, subpartition (QT1_AFO)
select /*+ USE_HASH(a u) */ a.*, u.location, u.country, u.region from TEST_t a, d_units u where a.unit = u.unit and extract(month from T_date) = 1 | 176 | 182 | 95 | 294 | 152 | 152
... and a.division = 'h.AFO divizii' | 175 | 181 | 94 | 28 | 19 | 150
... /*+ USE_NL(a u) */ | 281 | 287 | 151 | 170 | 18 | 171
... /*+ USE_NL(a u) */, with indexes | 265 | 235 | 110 | 120 | 12 | 143
... /*+ USE_MERGE(a u) */, with indexes | 174 | 180 | 94 | 27 | 18 | 150
... and u.unit = 'MTN' | 174 | 180 | 94 | 151 | 19 | 150
... /*+ USE_NL(a u) */ | 174 | 180 | 94 | 21 | 18 | 150
... /*+ USE_NL(a u) */, with indexes | 172 | 178 | 86 | 21 | 12 | 144
... /*+ USE_MERGE(a u) */, with indexes | 172 | 178 | 86 | 21 | 12 | 144
The most significant improvement is for the subpartitioned table, where the cost of execution was drastically reduced to only 12 points, compared to 176 points for the non-partitioned table. Without indexes, the most efficient method is the hash join, with the best results on the partitioned and subpartitioned tables.
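The variants marked "with indexes" in Table 2 used indexes on the join column of both row sources; a minimal sketch (the index names are ours, and the actual test definitions may have differed):

create index test_a_unit_ix on TEST_A (UNIT);
create index d_units_unit_ix on d_units (unit);
-- on the partitioned tables, a local index keeps one index segment per partition
create index test_c_unit_ix on TEST_C (UNIT) local;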
3.4 Using analytical functions
In the latest versions, in addition to aggregate functions, Oracle implemented analytical functions to help developers build decision support reports [1]. Aggregate functions applied on a set of records return a single result row based on groups of rows. Aggregate functions such as SUM, AVG and COUNT can appear in the SELECT statement and are commonly used with the GROUP BY clause; in this case Oracle divides the set of records into groups, specified in the GROUP BY clause. Aggregate functions are used in analytic reports to divide data into groups, analyze these groups separately and build subtotals or totals based on groups. Analytic functions also process data based on a group of records, but they differ from aggregate functions in that they return multiple rows for each group. The group of rows is called a window and is defined by the analytic clause. For each row, a sliding window of rows is defined, and it determines the range of rows used to process the current row. Window sizes can be based on either a physical number of rows or a logical interval, based on conditions over values [4].
Analytic functions are performed after completing operations such as joins and the WHERE, GROUP BY and HAVING clauses, but before the ORDER BY clause. Therefore, analytic functions can appear only in the select list or in the ORDER BY clause [1].
Analytic functions are commonly used to compute cumulative, moving and reporting aggregates. These functions provide the power of comparative analyses in BI reports and avoid using too much aggregate data from the virtual data warehouse. Thus, we can apply these functions to write simple queries without grouping data, like the following example, in which we compare the current amount with the average for three consecutive months in the same division, sector and management unit, backwards and forwards:
select period, division, sector, unit, debit,
       avg(debit) over (partition by division, sector, unit
                        order by extract(month from t_date)
                        range between 3 preceding
                              and 3 following) avg_neighbours
from test_a;
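A cumulative (running) aggregate is written in the same way; the following sketch, which was not part of the tested reports, computes a running total of the debit per division:

select period, division, debit,
       sum(debit) over (partition by division
                        order by t_date
                        rows between unbounded preceding
                              and current row) cumulative_debit
from test_a;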
3.5 Object oriented implementation
A modern RDBMS environment, such as Oracle Database 10g, supports object type concepts that can be used to specify the constraints of multidimensional (MD) models. An object type differs from the native SQL data types in that it is user-defined and it specifies both the underlying persistent data (attributes) and the related behaviour (methods). The object type is an object layer that can map the MD model over the database level, but data is still stored in columns and tables. Internally, statements about objects are still basically statements about relational tables and columns, and you can continue to work with relational data types and store data in relational tables; but we have the option to take advantage of object-oriented features too. Data persistency is assured by object tables, where each row of the table corresponds to an instance of a class and the table columns are the class's attributes. Every row object in an object table has an associated logical object identifier. Two types of object identifiers can be used: a unique system-generated identifier of 16 bytes, assigned by default by Oracle to each row object in a hidden column, and primary-key based identifiers specified by the user, which have the advantage of enabling a more efficient and easier loading of the object table [1].
The object oriented implementation can be used to reduce the execution cost by avoiding the multiple joins between the fact and the dimension tables. For exemplification we present here only the classes of the management unit dimension (the table unit in our previous examples) and the fact table, balance_R. We use a super type class, which we call UnitSpace_OT, to define the management unit dimension. For the hierarchical levels of the dimension, as you can observe, there are two major hierarchies:
• geographical locations (H1): zone -> region -> country -> location -> unit;
• organizational and management (H2): division -> sector -> unit.
So, the final object in both hierarchies is unit, which will have two REFs, one for the H1 and one for the H2 hierarchy. The scripts are shown below.
For the first hierarchy (H1):

create or replace type unitspace_ot as object
(unitspace_id number,
 unitspace_desc varchar2(50),
 unitspace_type varchar2(50)) not instantiable not final;

create or replace type zone_o under unitspace_ot
(/*also add other attributes and methods*/) not final;

create or replace type region_o under unitspace_ot
(zone ref zone_o /*also add other attributes and methods*/) not final;

create type country_o under unitspace_ot
(region ref region_o /*also add other attributes and methods*/) not final;

create type location_o under unitspace_ot
(country ref country_o /*also add other attributes and methods*/) not final;
The second hierarchy (H2):

create or replace type division_o under unitspace_ot
(/*also add other attributes and methods*/) not final;

create type sector_o under unitspace_ot
(division ref division_o /*also add other attributes and methods*/) not final;

The final unit object references both hierarchies:

create or replace type unit_o under unitspace_ot
(sector ref sector_o,
 location ref location_o
 /*also add other attributes and methods*/) final;

The fact is implemented also as an object type, INSTANTIABLE and NOT FINAL:

create or replace type balance_r as object
( T_DATE date,
  PERIOD varchar2(15),
  DEBIT number,
  CREDIT number,
  ACCOUNT varchar2(25),
  DIVISION varchar2(50),
  SECTOR varchar2(100),
  UNIT_ID varchar2(100)) instantiable not final;

The methods of each MD object type are implemented as object type bodies in PL/SQL, which are similar to package bodies. For example, the unit_o object type has the following body:

create or replace type body unit_o as
static function f_unit_stat(p_tab varchar2, p_gf varchar2,
    p_col_gf varchar2, p_col varchar2, p_val number)
  return number as
/* the function returns the aggregate statistics
   from the fact tables for a specific unit */
  v_tot number;
  text varchar2(255);
begin
  text := 'select '||p_gf||'('||p_col_gf||') from '||p_tab||
          ' cd where '||p_col||' = '||p_val;
  execute immediate text into v_tot;
  return v_tot;
end f_unit_stat;
/*other functions or procedures*/
end;
/

We can use this function to get different aggregate values for a specific unit, such as the total amount per unit or the average value per unit. Data persistency is assured with object tables that store the instances of an object type, for example:

CREATE TABLE unit_t OF unit_o;
For example, the static function f_unit_stat from the unit_o class can be used to retrieve the total debit value for each unit:

(1) select unit_id, description,
      unit_o.f_unit_stat('balance_rt', 'SUM', 'debit', 'unit_id', unit_id) total
    from unit_t;

instead of using the join between the unit_t and the balance_r tables:

(2) select t.unit_id, t.description, SUM(debit) total_debit
    from unit t, balance_r b
    where t.unit_id = b.unit_id
    group by t.unit_id, t.description;

For testing we used two types of tables: object tables (unit_t and balance_rt) and relational tables (unit and balance_r). The cardinality of these tables is about 100000 records in the balance tables and about 100 records in the unit tables. We analyzed the impact of calling the function in the SQL query in different situations, as presented in the following table:
Table 3. The execution costs of the queries

No | Query | Cost
(1) | select unit_id, description, unit_o.f_unit_stat('balance_rt', 'SUM', 'debit', 'unit_id', unit_id) total from unit_t | 35
(2) | select t.unit_id, t.description, SUM(debit) total from unit t, balance_r b where t.unit_id = b.unit_id group by t.unit_id, t.description | 171
(3) | Example (1) with an index on unit_id on both object tables | 30
(4) | Example (2) with an index on unit_id on both relational tables | 171
(5) | Example (1) with an index on unit_id on both relational tables, with use_nl hint | 79

We analyzed the execution cost of the function's query: it is 35 units, but the SQL Tuning Advisor makes a recommendation: "consider collecting statistics for this table and indices". We used the DBMS_STATS package to collect statistics from both the object and the relational tables. Then we re-ran the queries and observed the execution plans: there is no change, and the Tuning Advisor makes no further recommendation.
By introducing these types of functions we have the following advantages:
• The function can be used in many reports and queries with different types of arguments, so the code is re-used and there is no need to build another query for each report;
• The amount of joins is reduced; the functions avoid the joins by searching the values in the fact table;
• Soft parsing is used for the function's query execution, instead of the hard parsing that would occur for another SQL query.
The main disadvantage of the model is that the function needs to open a cursor to execute the query, which can lead to an increase of PGA resources if the fact table is too large. But the execution cost is insignificant and does not require a full table scan if an index is used on the corresponding foreign key attribute.
Through an ETL (extract, transform and load) process, data is loaded into the object tables from the transactional tables of the ERP system. This process can be implemented through the object types' methods or separately, as PL/SQL packages. Our recommendation is that the ETL process should be implemented separately from the object oriented implementation, in order to assure the independence of the MD model.
4 Conclusions
The virtual data warehouse is based on a set of objects like views, packages and program units that extract, join and aggregate rows from the ERP system's database. In order to develop a BI system we have to build analytical reports based on this virtual data warehouse. But the performance of the whole system can be affected by the data extraction process, which is the major time- and cost-consuming job. A possible solution is to apply optimization techniques that can improve the performance; some of these techniques were presented in this paper, and the results that we obtained are relevant for decreasing the execution cost. Also, for developing BI reports, an important option is to choose analytic functions for predictions, subtotals over the current period, classifications and ratings. Another issue discussed was the object oriented implementation, which is very flexible and offers a very good representation of the business aspects that are essential for BI projects. Classes like dimensions or fact tables can be implemented together with their attributes and methods, which can be a very good and efficient practice concerning both performance and business modelling.

References
[1] Oracle Corporation - Database Performance Tuning Guide 10g Release 2 (10.2), Part Number B14211-01, 2005
[2] Ion Lungu, Manole Velicanu, Adela Bâra, Vlad Diaconita, Iuliana Botha - Practices for designing and improving data extraction in a virtual data warehouses project, Proceedings of ICCCC 2008, International Conference on Computers, Communications and Control, Baile Felix, Oradea, Romania, 15-17 May 2008, pp. 369-375, published in International Journal of Computers, Communications and Control, volume 3, 2008, ISSN 1841-9836
[3] Adela Bâra, Ion Lungu, Manole Velicanu, Vlad Diaconiţa, Iuliana Botha - Improving query performance in virtual data warehouses, WSEAS Transactions on Information Science and Applications, May 2008, ISSN 1790-0832
[4] Ion Lungu, Adela Bara, Anca Fodor - Business Intelligence tools for building the Executive Information Systems, 5th RoEduNet International Conference, Lucian Blaga University, Sibiu, June 2006
[5] Donald K. Burleson - Oracle Tuning, ISBN 0-9744486-2-1, 2006
