(IJCNS) International Journal of Computer and Network Security, Vol. 2, No. 5, May 2010

A Critical Evaluation of Relational Database Systems for OLAP Applications

Sami M. Halawani1, Ibrahim A. Albidewi2, Jahangir Alam3 and Zubair Khan4

1, 2 Faculty of Computing and Information Technology, Rabigh Campus, King Abdulaziz University, Saudi Arabia
halawani@kau.edu.sa, Iabidewi@kau.edu.sa

3 University Women’s Polytechnic, Faculty of Engineering & Technology, Aligarh Muslim University, Aligarh 202002, India
jahangir.uk786@yahoo.co.uk

4 Department of Computer Science and Engineering, IIET, Bareilly, India
zubair.762001@gmail.com

Abstract: This paper analyses in detail various aspects of currently available Relational Database Management Systems (RDBMS) and investigates why the performance of RDBMS suffers, even on the most powerful hardware, when they support OLAP, Data Mining and Decision Support Systems. A large number of data models and algorithms have been presented during the past decade to improve RDBMS performance under heavy workloads. We argue that if the problems presented here are taken care of in future architectures of relational databases, then RDBMS would be at least a moderate platform for OLAP, Data Mining and Decision Support Systems.

Keywords: RDBMS, OLAP, Data Mining, Decision Support Systems

1. Introduction
Commercial server applications such as database services, file services, media and email services are the dominant applications run on server machines. Database applications occupy around 57% of the server volume, and the share is increasing [8]. Corporate profits and organizational efficiency are becoming increasingly dependent upon database server systems, and those systems are becoming so complex and diverse that they are difficult to maintain and control. With the advent of modern Internet-based tools for accessing remote and local databases, more emphasis is being placed on improving the performance of computer systems along with their functionality [7][12]. Monitoring, predicting and improving the performance of databases has always been a challenge for database administrators.

A database server performs only database functions. In terms of workload it performs only transactions. When a SELECT/UPDATE statement is executed, a database server interprets it as a series of reads/writes. Considering that the atomic level of anything is its smallest part, it could be said that the atomic level of a transaction consists of the reads or writes it generates. Broken down to this level, a database server processes I/Os. So, database systems can be classified based on the type of transactions they handle and the subsequent I/Os. This classification leads to the following two types of database systems:

(i) Online Transaction Processing (OLTP) Systems.
(ii) Decision Support Systems (DSS).

1.1 OLTP Transactions
An OLTP transaction is a unit of work that is usually expected to run in a very short duration of time because it deals with the database in real time or online mode. In other words, these transactions constantly update the database based on the most current information available, so the next user can rely on that information being the most current. An example of this kind of system would be a Library Information System. In this case, all the information pertaining to the system is kept in tables spread across a disk system, and the database is online. Any user in the community will have access to that information.
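As a minimal sketch of such a transaction (the BOOK and LOAN tables, their columns, and the BEGIN/COMMIT dialect are hypothetical, introduced here purely for illustration), a library checkout might look like:

BEGIN TRANSACTION;
-- Mark the book as loaned out (hypothetical BOOK table):
UPDATE BOOK SET Status = 'ON_LOAN' WHERE Book_ID = 'B1023';
-- Record the loan itself (hypothetical LOAN table):
INSERT INTO LOAN (Book_ID, Member_ID, Due_Date)
VALUES ('B1023', 'M405', DATE '2010-06-15');
COMMIT;

The transaction touches only a couple of rows and completes in milliseconds, which is exactly the profile OLTP systems are tuned for.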
1.2 Decision Support Transactions:
A different type of application system that is currently in demand is the DSS (Decision Support System). This type of system is generally used to provide information to management so decisions can be made about issues such as business growth, levels of stock on hand, etc. The challenge is in deriving answers to business questions from the available data, so that decision makers at all levels can respond quickly to changes in the business climate. While a standard transactional query might ask, "When did order 84305 ship?", a typical decision support query might ask, "How do sales in the Southwestern region for this quarter compare with sales a year ago? What can we predict for sales next quarter? What factors can we alter to improve the sales forecast?"
In an OLTP system the throughput rate is commonly measured in transactions per second (TPS) or transactions per minute (TPM). With DSS, throughput is usually measured in queries per hour (QPH).

This unit itself indicates that these queries are of extreme size and overwhelm the machine’s resources until they are complete. In almost all cases, the ratio between OLTP and DSS transactions equals thousands (sometimes tens of thousands) of OLTP transactions to one DSS transaction [6]. An OLTP system, even a large one, is usually not much more than 300 GB, whereas a large DSS system can be 2-5 TB in size. Examples of Decision Support Systems are On-Line Analytical Processing (OLAP) and Data Mining.

1.2.1 On Line Analytical Processing:
OLAP may be defined as “the interactive process of creating, managing, analyzing and reporting on data”. The first point is that analytical processing invariably requires some kind of data aggregation, usually in many different ways (i.e. according to many different groupings). In fact, one of the fundamental problems of analytical processing is that the number of possible groupings becomes very large very quickly, and yet users need to consider all or most of them. Relational languages do support such aggregations, but each individual query in such a language produces just one table as its result (and all rows in that table are of the same form and have the same kind of interpretation). Thus, obtaining n distinct groupings requires n distinct queries and produces n distinct result tables. In RDBMS the drawbacks of this approach are obvious: formulating so many similar but distinct queries is a tedious job for the user, and executing those queries means passing over the same data over and over again, which is likely to be expensive in execution time. Thus the challenges with OLAP are:

 Requesting several levels of aggregation in a single query.
 Offering the implementation the opportunity to compute all those aggregations more efficiently (probably in a single pass).

Special SQL statements are provided to take up the above challenges in RDBMS. Examples are the GROUPING SETS, ROLLUP, and CUBE options available with the GROUP BY clause (see section 2.1). But, as far as efficiency is concerned, almost all commercially available RDBMS exhibit poor performance under OLAP workloads [1][5].

The following table [9] summarizes the major differences between OLTP and OLAP systems.

Table 1: Differences between OLTP and OLAP

Source of Data. OLTP System: operational data; OLTPs are the original source of the data. OLAP System: consolidated data; OLAP data comes from the various OLTP databases.

Purpose of Data. OLTP System: to control and run fundamental business tasks. OLAP System: to help with planning, problem solving and decision support.

What the Data Reveals. OLTP System: a snapshot of ongoing business processes. OLAP System: multi-dimensional views of various kinds of business activities.

Inserts and Updates. OLTP System: short and fast inserts and updates initiated by end users. OLAP System: periodic long-running batch jobs refresh the data.

Queries. OLTP System: relatively standardized and simple queries, returning relatively few records. OLAP System: often complex queries involving aggregations.

Processing Speed. OLTP System: typically very fast. OLAP System: depends on the amount of data involved; batch data refreshes and complex queries may take many hours; query speed may be improved by creating indexes.

Space Requirements. OLTP System: can be relatively small if historical data is archived. OLAP System: larger, due to the existence of aggregation structures and history data.

Database Design. OLTP System: highly normalized, with many tables. OLAP System: typically de-normalized, with fewer tables, for faster processing.

Backup and Recovery. OLTP System: backed up religiously; operational data is critical to run the business. OLAP System: instead of regular backups, some environments may consider simply reloading the OLTP data as a recovery method.

1.2.2 Data Mining:
Data mining can be described as “exploratory data analysis”. The aim is to look for interesting patterns in data, patterns that can be used to set business strategy or to identify unusual behavior (e.g. a sudden change in credit card activity could mean a card has been stolen). Data mining tools apply statistical techniques to large quantities of stored data in order to look for such patterns. Data mining databases are often very large. Data Mining is also known as discovering knowledge from a very large database, i.e. extracting the interesting patterns from very large databases. The process certainly involves thorough analysis of data; hence executing one Data Mining query means executing several OLAP queries. Thus Data Mining may also be referred to as “repeated OLAP”. RDBMS don’t support OLAP queries efficiently; executing repeated OLAP on an RDBMS brings the system to its knees.

2. Performance Issues of Relational Databases
Most of the relational databases currently in use were developed during the 1990s. Since then their architecture has rarely been revised. The design decisions made at that time still influence the way RDBMS process transactions. For example, designers at that time were not aware of the coming exponential growth of the Internet, and nobody could expect that one day databases would serve as the backbone of large Content Management and Decision Support Systems. The current RDBMS architecture was developed with OLTP in mind, i.e. the architecture was optimized for the hardware available at that time to efficiently support OLTP. In the following sections we discuss some problems which affect the performance, efficiency and usage of RDBMS for OLAP and Data Mining applications.

2.1 Difficult/Inefficient to Formulate the OLAP Queries:
Consider the following MANUFACTURER_PART (MP) table:

Table 2: MANUFACTURER_PART (MP)
M_ID    Item_No    QTY
M101    A11        1000
M101    A12        500
M102    A11        800
M102    A12        900
M103    A12        700
M104    A12        200

Now, consider the following OLAP queries:
1: Get the total item quantity.
2: Get total item quantities by manufacturer.
3: Get total item quantities by item.
4: Get total item quantities by manufacturer and item.

Following are the pseudo-SQL formulations of these queries:

1. SELECT SUM(QTY) FROM MP GROUP BY ( );
2. SELECT M_ID, SUM(QTY) FROM MP GROUP BY (M_ID);
3. SELECT Item_No, SUM(QTY) FROM MP GROUP BY (Item_No);
4. SELECT M_ID, Item_No, SUM(QTY) FROM MP GROUP BY (M_ID, Item_No);

The drawbacks of this approach are obvious: formulating so many similar but distinct queries is tedious for the end user. Also, their execution requires passing over the same data over and over again, which is likely to be quite expensive in execution time.

Some special SQL options on the GROUP BY clause may ease life, but only up to some extent. For example, the GROUPING SETS option allows the user to specify exactly which particular groupings are to be performed. The following SQL statement represents a combination of Queries 2 and 3:

SELECT M_ID, Item_No, SUM(QTY) FROM MP
GROUP BY GROUPING SETS (M_ID, Item_No);

The solution seems acceptable but, firstly, it requires a lot of analytical thinking on the end user’s part and, secondly, options like GROUPING SETS are not universally supported across all SQL implementations. Another problem is that every SQL query returns a relation as its result, which forces the user into a kind of row-at-a-time thinking; OLAP products often display query results not as SQL-style tables but as cross tabulations or graphs.
2.2 RDBMS Storage and Access Methods
Currently available RDBMS implementations store data in the form of rows and columns (tables). This approach favors OLTP queries. The storage scheme of these relational systems is based on the Flattened Storage Model (FSM) or Normalized Storage Model (NSM). Both of these models store the data as a consecutive byte sequence: the complete table (rows and columns) is stored in a single database file as a sequence of bytes representing each row, with some free space between two consecutive rows (Table 3). The space is left to accommodate future updates.

Table 3: Storage Scheme for Table 2
M101 A11 1000   M101 A12 500   ----

Now consider the following query:

SELECT * FROM MP WHERE M_ID = 'M101';

We observe that all data of one row is consecutive and thus can be found in the same disk block; hence a minimum of I/O activity (i.e. one block read/write) is necessary to execute a typical OLTP query.

Now consider the following query:
“Draw a histogram representing total quantities of all items manufactured by different Manufacturers.”

To answer this query the RDBMS needs to analyze all rows, of which only one column (QTY) is used. Although only a portion of each row is required, the RDBMS has to scan the complete table to execute such queries, which implies that a heavy load in terms of I/O is generated.

In large RDBMS applications like data warehousing and Data Mining, tables may contain several columns and billions of rows. When an OLAP query is executed on such tables it scans each row of the table and generates a heavy load of I/Os. Only a few columns from the table are used, but the process takes a significant amount of time. Clearly, most of the table scan is wasted effort. This phenomenon is the main reason for the poor performance of RDBMS products under heavy workloads.
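To put hypothetical numbers on the waste (the figures are illustrative, not measured): a fact table of one billion rows at 200 bytes per row occupies about 200 GB, yet an aggregate over a single 4-byte column such as QTY needs only about 4 GB of that data. A full row-wise scan therefore reads roughly 50 times more data than the query actually uses, i.e. about 98% of the I/O is wasted.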
Someone may argue that the wasted effort of scanning large database files can be avoided using index structures such as bitmaps, but OLAP and Data Mining queries have low selectivity. The real benefits of any index structure come out in the selection process, because an indexing technique provides a list of selected record identifiers quickly. In OLAP and Data Mining applications selection is not the end of the query: the RDBMS still has to perform some action on the selected tuples. So, indexing is not the solution to this problem.

2.3 Trends in Hardware Technology
Figures 1-9 show the advancements in hardware technology during the past decade. The size and bandwidth of memory, the size and bandwidth of hard disks, network bandwidth and the processing power of computers have increased tremendously. Exceptions to these trends are I/O efficiency and memory latency. These factors affect the performance of all software, including the RDBMS. In the following sections we consider each of them individually and analyze its effect on RDBMS performance.

Figure 1. CPU Speed Increments (CPU speed, MHz, vs. years 1995-2005)

Figure 2. Disk Bandwidth Increments (disk bandwidth, MB/s, vs. years 1995-2005)

Figure 3. I/O Efficiency Improvements during the past decade (disk latency, ns, vs. years 1995-2005)

Figure 4. Disk RPM Increments (rotations per minute vs. years 1995-2005)

Figure 5. Disk Size Increments (disk size, MB, vs. years 1995-2005)

Figure 6. Memory Bandwidth Increments (memory bandwidth vs. years 1995-2005)



Figure 7. Memory Latency Decrements (memory latency, ns, vs. years 1995-2005)

Figure 8. Memory Size Increments (memory size, MB, vs. years 1995-2005)

Figure 9. Network Bandwidth Increments (bandwidth, Mbps, vs. years 1994-2006)

2.3.1 I/O Efficiency
From Figure 3 it is obvious that I/O efficiency has not changed much during the past decade, meaning that the time to access data has stayed about the same. I/O efficiency is an unglamorous and often overlooked area of RDBMS technology. In the vast majority of commercial applications RDBMS performance is more dependent on I/O than on any other computing resource, because the performance cost of I/O outweighs other costs by orders of magnitude. The most important item to consider is whether the I/O subsystem of a given RDBMS will support sustained performance as time passes. One of the most common problems with databases is “saw-toothed” performance (Figure 10) where, upon first install and after RDBMS reorganizations, performance is excellent [3].

Figure 10. Saw-Tooth Performance of RDBMS

Performance will typically degrade over time due to database fragmentation. Systems that fragment the database do not support sustained performance because they are unable to perform the following two tasks:
 Online reclamation of deleted space
 Online reorganization of data.
Benchmarks typically do not detect “saw-toothed” performance because they are not run for a sufficient amount of time to test the anti-fragmentation techniques implemented in a given system.

Another aspect that contributes to low I/O efficiency is whether the product is able to explicitly cluster data on the disk with other data that would typically be accessed at the same time. One solution to this problem may be to add additional database volumes as the amount of stored data grows over time. For better I/O efficiency it should be possible to place those volumes on separate disks (RAID). This way many queries can be executed in parallel on different disks, which improves performance despite low I/O efficiency.
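As a sketch of this volume-placement idea (Oracle-style DDL, shown purely for illustration; the names and paths are hypothetical and the syntax differs across products):

-- Place two tablespaces on physically separate disks:
CREATE TABLESPACE ts_orders  DATAFILE '/disk1/orders01.dbf'  SIZE 10G;
CREATE TABLESPACE ts_history DATAFILE '/disk2/history01.dbf' SIZE 50G;

-- Direct each table to its own volume so scans can proceed in parallel:
CREATE TABLE ORDERS  (Order_ID CHAR(8), Amount NUMBER) TABLESPACE ts_orders;
CREATE TABLE HISTORY (Order_ID CHAR(8), Amount NUMBER) TABLESPACE ts_history;

Queries that touch ORDERS and HISTORY at the same time can then be served by different spindles instead of contending for one.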
As the database grows larger, the potential for database “hot spots” increases. A hot spot is a volume within a database that is accessed with great regularity, while other volumes are accessed much less frequently. Hot spots can become a bottleneck to database access and thus affect performance. A solution to this problem may be to install additional cache on the disk itself. This solution brings performance improvements only if the I/O access pattern exhibits locality of reference, and that is not the case with OLAP and Data Mining queries.

2.3.2 Network Latency:
Low I/O efficiency is not the only form of I/O one should be concerned with. Network latency is equally important, particularly in a client-server environment. An RDBMS should be able to move groups of objects (fields) to and from the client when it is appropriate for the application. To use network bandwidth optimally, only objects that are requested by the client should be returned. But what the RDBMS actually does is pass all objects that reside together on a physical disk block to the client, whether they are needed or not. This is a serious problem with the current RDBMS architecture.

2.3.3 Disk and Memory Bandwidth:
From Figures 2 and 6 it is obvious that disk and memory bandwidths have increased tremendously during the past decade. In section 2.3.1 we pointed out that I/O efficiency can be improved by installing RAID devices, but this may create another serious problem: it leads to poor performance if the total bandwidth of all RAID disks exceeds the memory bandwidth. As there is no way to scale the memory bandwidth, it can become a hurdle in installations with RAID and may contribute to poor RDBMS performance.

2.3.4 Memory Latency:
Figure 7 shows the improvements in memory latency. Clearly we can store more (Figure 5) and process faster, but the time the memory takes to access data has stayed about the same. Memory latency is the most urgent reason for the low performance of RDBMS under heavy workloads. The only solution to this problem provided by vendors has been to install more cache memory, but an application whose memory access pattern achieves a low cache hit rate can’t take advantage of this solution. RDBMS operations that have this property are hash join and sorting [2], which are common in OLAP and Data Mining queries.

2.3.5 CPU Utilization:
Most of the research relating database performance to the processor has been carried out on multiprocessor platforms using OLTP workloads [10][13]. Few studies evaluate database performance on multiprocessor platforms using both types of load (i.e. OLTP and OLAP) [11]. All of the studies agree that RDBMS behavior depends upon the nature of the workload (DSS or OLTP).

With the development of the WWW and the emergence of DSS and CMS (Content Management Systems), RDBMS demand for CPU has doubled every 9-12 months according to Greg’s Law. At the same time, processor speed has doubled every 18 months according to Moore’s Law. As shown in Figure 11, the gap between RDBMS demand and CPU speed is increasing [1].

Figure 11. Database CPU Demand

Figure 12 shows CPU utilization under OLTP workloads. A response time versus CPU utilization curve can be obtained using the following formula [6]:

RESPONSE_TIME = (QUEUE_LENGTH * SERVICE_TIME) + SERVICE_TIME
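As a quick illustrative calculation (the numbers are hypothetical): with a service time of 0.1 s per request and three requests already queued, the formula gives (3 * 0.1) + 0.1 = 0.4 s of response time; as utilization rises the queue length, and with it the response time, grows without bound.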
Figure 13 shows response time versus CPU utilization percentage. From Figure 12 it is clear that at about 75% utilization the growth of the queue length shifts from linear to asymptotic. The same curve appears in the response time graph. This is why we never want to run our CPUs in a steady state over 75% utilization.

We are seeing techniques such as speculative pre-fetch and multithreading in processors, but the RDBMS demand for CPU is still increasing. The main reason is that modern CPUs tend to be stalled for the majority of their time, achieving low utilization of their true power [5]. These CPU stalls are caused by memory latency. One solution to get the most out of modern CPUs may be that the performance-intensive parts of a program contain only processor-dependent instructions. This is a hard requirement, and RDBMS applications can’t be designed using this principle: RDBMS instructions are more memory or I/O dependent, so RDBMS can’t take full advantage of modern CPUs.

Figure 12. CPU Utilization under OLTP workloads (queue length vs. utilization, %)

Figure 13. CPU Utilization Percentage versus response time (response time, sec, vs. CPU utilization, %)
3. Inability to Support Other Data Models
In the RDBMS world the relational model and its query language (SQL) are the most popular. Every entity is expressed in the form of a table, and the attributes of the entity are the columns of the table. Relations between tables (entities) are established by including the prime attribute (or the set of attributes which defines the primary key) of one table in another. This approach restricts the user to thinking in terms of tables, primary keys, foreign keys and other related terminology. Clearly the thinking of an RDBMS database designer is narrow when it comes to expressing the data. Other data models, like the Object-Oriented Model, Object-Relational Model and Multidimensional Data Model, are becoming increasingly popular as they perform better under the heavy workloads generated by DSS queries. We argue that if the RDBMS architecture is revised to support other data models too, then the performance issues may be resolved to some extent.
4. Conclusions and Future Work
Relational databases are the backbone of several Decision Support Systems and Content Management Systems in use. Despite the performance optimizations found in today’s relational database systems, they are still not able to take full advantage of many improvements in computing technology during the last decade. In this paper we have examined various aspects which contribute to the poor performance of relational databases when they support OLAP, Data Mining and Decision Support Systems. Several available solutions to the performance issues, and why they are insufficient, have been discussed at length throughout the paper. We have also shown that expressing the queries related to these applications is a tough task in the native language of relational databases, and that the RDBMS tends to restrict the designer’s thinking around the row-column concept, which shows the inability of relational databases to support any other data model.

The results of our analysis suggest that hardware engineers must pay more attention to improving I/O, memory and network efficiency. Memory latency has been identified as the most urgent reason contributing to the poor performance of RDBMS under heavy workloads; it is also responsible for long CPU stalls and can’t be cured by simply installing more caches. SQL must be revised and implemented to support other data models, should not bind the designer’s thinking to rows and columns, and must also be revised from the point of view of expressing OLAP, Data Mining and DSS queries in an easy way. The RDBMS kernel must be rewritten to take full advantage of the hardware trends seen during the last decade.

References
[1] X. Cui, “A Capacity Planning Study of DBMS With OLAP Workloads”, Master’s Thesis, School of Computing, Queen’s University, Ontario, Canada, October 2003.
[2] S. Manegold, P. Boncz and M. Kersten, “Optimizing Join on Modern Hardware”, IEEE TKDE, 14(4), July 2002.
[3] H. Zawawy, “Capacity Planning for DBMS Using Analytical Modeling”, Master’s Thesis, School of Computing, Queen’s University, Ontario, Canada, December 2001.
[4] S. Manegold, P. Boncz and M. Kersten, “What Happens During a Join? Dissecting CPU and Memory Optimization Effects”, In Proceedings of the VLDB Conference, Cairo, Egypt, July 2000.
[5] A. Ailamaki, D.J. DeWitt, M.D. Hill and D.A. Wood, “DBMSs on a Modern Processor: Where Does Time Go?”, In Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999.
[6] E. Whalen, “Teach Yourself Oracle 8 in 21 Days”, Macmillan Computer Publishing, USA, 1998.
[7] J. Vijayan, “Capacity Planning More Vital Than Ever”, Computer World, p. 1, February 1999.
[8] PC Quest Magazine, www.pcquest.com.
[9] Rain Maker Group Whitepapers and Tools Library, available from www.rainmakerworks.com.
[10] K. Keeton, D.A. Patterson, Y.Q. He, R.C. Raphael and W.E. Baker, “Performance Characterization of a Quad Pentium Pro SMP Using OLTP Workloads”, In Proceedings of the 25th International Symposium on Computer Architecture, pp. 15-26, Barcelona, Spain, June 1998.
[11] P. Ranganathan, K. Gharachorloo, S. Adve and L. Barroso, “Performance of Database Workloads on Shared Memory Systems with Out-of-Order Processors”, In Proceedings of the 8th International Conference on Architectural Support for Programming Languages and Operating Systems, San Jose, California, October 1998.
[12] D.A. Menasce and V.A.F. Almeida, “Challenges in Scaling e-Business Sites”, In Proceedings of the Computer Measurement Group Conference, 2000.
[13] L. Lo, L.A. Barroso, S.J. Eggers, K. Gharachorloo, H.M. Levy and S.S. Parekh, “An Analysis of Database Workload Performance on Simulated Multithreaded Processors”, In Proceedings of the 25th International Symposium on Computer Architecture, pp. 39-50, Barcelona, Spain, June 1998.
Authors Profile

Dr. Sami M. Halawani received the M.S. degree in Computer Science from the University of Miami, USA, in 1987. He received the Professional Applied Engineering Certificate from The George Washington University, USA, in 1992. He earned the Ph.D. degree in Information Technology from George Mason University, USA, in 1996. He is a faculty member of the College of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia. He is currently working as the Dean for Graduate Studies and Research. He has authored/co-authored many publications in journals/conference proceedings.

Dr. Ibrahim Albidewi is an Associate Professor at the Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah. He was the director of the Computer Center at King Abdulaziz University. His experience and publications are in the areas of image processing, new vision of programming, expert systems, computer education, and management of information centers.
1986: Bachelor degree from the Department of Islamic Economics, College of Islamic Studies, University of Umm Al-Qura.
1989: Master degree from the Department of Computer Science, College of Sciences, Swansea University.
1993: Doctorate degree from Electrical and Electronics Engineering, College of Engineering, Swansea University.

Jahangir Alam graduated in science from Meerut University, Meerut, in the year 1992, received a Master degree in Computer Science and Applications from Aligarh Muslim University, Aligarh, in the year 1995, and a Master in Technology (M.Tech) degree in Computer Engineering from JRN Rajasthan Vidyapeeth University, Udaipur, Rajasthan, in the year 2008. He is currently working towards his Ph.D. in Computer Engineering at Thapar University, Patiala. He is also serving as an Assistant Professor in University Women’s Polytechnic, Faculty of Engineering and Technology, Aligarh Muslim University, Aligarh. His areas of interest include Interconnection Networks, Scheduling and Load Balancing, Computer Communication Networks and Databases. He has authored/co-authored over 12 publications in journals/conference proceedings.

Zubair Khan received his Bachelor degree in Science and Master of Computer Application degree from MJP Rohilkhand University, Bareilly, India, in 1996 and 2001 respectively, and a Master in Technology (M.Tech) degree in Computer Science and Engineering from Uttar Pradesh Technical University in the year 2008. He is currently pursuing his Ph.D. in Computer Science and Information Technology at MJP Rohilkhand University, Bareilly, UP, India. He has worked as a senior lecturer at Jazan University, Kingdom of Saudi Arabia, and is serving as Reader in the Department of Computer Science and Engineering, Invertis Institute of Technology, Bareilly, India. His areas of interest include data mining and warehousing, parallel systems and computer communication networks. He is an author/co-author of more than 15 international and national publications in journals and conference proceedings.
