How do you link a Webi report to a dashboard based on a prompt value?

Can you tell us what the difference between a dashboard, a scorecard, and a report is?
How many connections can a dashboard support? Is there any limitation?
Have you encountered any errors while creating a dashboard?
Does complexity depend on the number of components? Is it possible for a dashboard to contain 150 components?
How do you get a dashboard to load within seconds?
What is your favorite Xcelsius dynamic connection method, and what are its limitations?
Are pie charts a better representation of data than a bar chart? Why?
Explain full outer joins.
Explain how keys can help in setting up index awareness.

**************************************************************************
Performance Tuning in Business Objects

First, let us look at the question: what are performance issues in a report?
1) Reports are running extremely slowly and getting timed out
2) A BO report has significantly slow response time
3) Performance of a BO report displaying aggregated or summarized data is extremely slow
4) A BO report is taking more processing time and still displaying partial data
5) A list of values request is taking more than fifteen minutes to return

Second, let us look at the options for tuning the performance of the reports. BO reports can be optimized at 4 levels:
a) Universe level
b) Report level
c) Database level
d) Server level

Level 1 optimization - Universe level
-> Modify the Array Fetch parameter
-> Allocate a weight to each table
-> Use shortcut joins
-> Use aggregate functions
-> Use aggregate tables
-> Minimize usage of derived tables

Modify the Array Fetch parameter: The Array Fetch parameter sets the maximum number of rows that are permitted in a FETCH procedure. For example, if the Array Fetch size is 20 and the total number of rows is 100, then five fetches will be executed to retrieve the data, which consumes more time than a single fetch.
Resolution: If the network allows sending large arrays, set the Array Fetch parameter to a larger value. This speeds up the FETCH procedure and reduces query processing time.

Allocating table weights: Table weight is a measure of how many rows there are in a table. Lighter tables have fewer rows than heavier tables. By default, BusinessObjects sorts the tables from the lighter to the heavier ones. The order in which tables are sorted at the database level depends on your database. For example, Sybase uses the same order as BusinessObjects, but Oracle uses the opposite order. The SQL will be optimized for most databases, but not for Oracle, where the smallest table is put first in the sort order. So, if you are using an Oracle database, you can optimize the SQL by reversing the order in which BusinessObjects sorts the tables. To do this you must change a parameter in the relevant PRM file of the database.
Resolution: In the Business Objects settings, the ORACLE PRM file must be modified as follows: browse to the directory Business Objects\BusinessObjects Enterprise 6\dataAccess\RDBMS\connectionServer\oracle, open the ORACLE.PRM file, and change the REVERSE_TABLE_WEIGHT value from Y to N.

Using shortcut joins: The number of tables in the join is high even when few objects are selected. Even when no object from a related table is selected, that table still appears in the join condition. For example, if the A_id object from table A is selected together with an object from table C, with table B in between, the BO-generated SQL shows the intermediate table B in the FROM clause.
Resolution: Shortcut joins allow users to skip intermediate tables and provide alternative paths between tables. Using a shortcut join reduces the number of tables in the query and improves SQL performance, for example cutting query time from 1.5 minutes to 30 seconds (a sketch of the generated SQL appears below).

Use aggregate functions: Data is aggregated on the subject of analysis (the user-selected criteria) at report level. This takes more processing time, as data from the database is loaded into temporary memory and then aggregated or processed for display.
Resolution: Use aggregate functions (e.g., sum, count, min, max) in measure objects at universe level. Aggregate functions aggregate the data at database level rather than at report level, which saves processing time at report level and also reduces the number of rows returned to the report.

Creating and using aggregate tables: Otherwise, aggregate data is obtained by scanning and summarizing all of the records in the fact table in real time, which consumes more time.
Resolution: Aggregate tables contain pre-calculated aggregated data. Using aggregate tables instead of detail tables enhances the performance of SQL transactions and speeds up query execution. The Aggregate_Awareness function can dynamically rewrite the SQL to the level of granularity needed to answer a business question. Aggregate tables allow for faster querying and improve query performance manifold.
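To illustrate the shortcut-join point above, here is a rough sketch of the kind of SQL involved; the table and column names (A, B, C, A_id, C_id, C_name) are hypothetical and only follow the example in the text:

-- Without a shortcut join: the intermediate table B appears in the FROM clause
SELECT A.A_id, C.C_name
FROM   A, B, C
WHERE  A.A_id = B.A_id
AND    B.C_id = C.C_id;

-- With a shortcut join defined between A and C in the universe,
-- the generated SQL can skip table B entirely
SELECT A.A_id, C.C_name
FROM   A, C
WHERE  A.C_id = C.C_id;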

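As a rough sketch of the aggregate-awareness point above (the table and column names agg_yearly_sales, agg_monthly_sales, fact_sales and revenue are assumptions for illustration), a measure object's SELECT definition in the universe might look like this, listing the most aggregated table first and the detail fact table last:

-- Measure object definition using @Aggregate_Aware:
-- BO picks the first table in the list that is compatible
-- with the other objects used in the query
@Aggregate_Aware(
    sum(agg_yearly_sales.revenue),   -- pre-aggregated table, fastest
    sum(agg_monthly_sales.revenue),  -- mid-level aggregate
    sum(fact_sales.revenue)          -- detail fact table, slowest
)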
Minimize usage of derived tables: Since derived tables are evaluated and executed at runtime, SQL tuning is not possible.
Resolution: Minimize the usage of derived tables and replace them with tables or materialized views. SQL tuning techniques such as creating indexes can then be applied to the tables or materialized views, which will improve the performance of BO reports.
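As a minimal sketch of replacing a derived table with a materialized view (the names sales_mv and fact_sales, and the Oracle syntax, are assumptions for illustration):

-- Pre-compute the logic that previously lived in a derived table
CREATE MATERIALIZED VIEW sales_mv AS
SELECT   store_id,
         SUM(revenue) AS total_revenue
FROM     fact_sales
GROUP BY store_id;

-- Standard tuning techniques, such as indexing, can now be applied
CREATE INDEX idx_sales_mv_store ON sales_mv (store_id);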

Level 2 optimization - Report level
-> Opt for Refresh At-Will over Refresh-On-Open
-> List of Values (LOVs)
-> Condition objects
-> Complex calculations in ETL
-> Minimize usage of report variables/formulas

Opt for Refresh At-Will over Refresh-On-Open: Refresh-on-open reports refresh with new data each time they are opened. A connection to the database is established each time the report is refreshed, which slows the report's performance.
Resolution: If a report is based on snapshot data and is static, it is better to publish the report without the refresh-on-open property. Users will then view the same instance of the report without establishing a database connection, which reduces the response time of the BO report.

List of Values (LOVs): When we create an LOV object, distinct values are selected into it. DISTINCT forces an internal sort/compare on the table. Selecting a distinct list on a large table is not optimal; for example, selecting a distinct list of custom_store against the t_curr_tran_daily table is not optimal.
Resolution:
a. Re-map the object's list of values to a smaller lookup table (see the sketch below).
b. If there is no smaller lookup table, create an external file as the source for the LOV. This file needs to be exported along with the universe and be available to all users, which is additional overhead. Using an external file replaces the need for a lookup table, delivers high performance, and offsets the overhead cost.
c. Avoid creating LOVs on dates and measures. Disassociate the LOV from any object that is not displayed as a prompt.

Universe condition objects: Otherwise, the entire data set is fetched from the database (up to the maximum rows setting) and the filters are applied at report level. As the data is not restricted at the database or universe level, the reports take more time to execute.
Resolution: When handling large data volumes, one of the following steps can be taken to limit the data (a sketch of a prompted condition object follows below):
1. Use prompts to restrict data selection at universe level. Preferably use time-period prompts in reports.
2. Replace report filters with universe condition objects where possible. Using condition objects limits the rows returned at database level.

Complex calculations: Data is fetched from the database and the calculations are then applied to that data. As the calculations are performed at universe or report level on a large data set, the reports take more time to execute.
Resolution: When dealing with huge data warehouses, perform complex calculations at the ETL level. Business Objects then saves time on calculations and delivers high performance.
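A minimal sketch of re-mapping an LOV as mentioned above, assuming a small lookup table named d_store exists (the lookup table is hypothetical; t_curr_tran_daily and custom_store come from the example in the text):

-- LOV SQL generated against the large transaction table (slow):
SELECT DISTINCT t_curr_tran_daily.custom_store
FROM   t_curr_tran_daily

-- LOV re-mapped to a small lookup table (fast):
SELECT DISTINCT d_store.custom_store
FROM   d_store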

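And as a rough sketch of a universe condition object that restricts data with a time-period prompt (the table and column names are hypothetical, and the exact @Prompt argument list may differ between BO versions):

-- WHERE clause of a condition object in the universe;
-- the filter is pushed into the generated SQL at database level
fact_sales.tran_date BETWEEN
    @Prompt('Enter start date:','D',,mono,free)
    AND
    @Prompt('Enter end date:','D',,mono,free)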
Minimize usage of report variables/formulas: If a report is pulling tons of data, doing loads of joins, making lots of clever calculations, and using many report variables and formulas, it may run very slowly. Report variables and formulas are loaded and calculated in memory in real time. As the variables are created in real time and the calculations are performed at report level, the reports take more time to execute.
Resolution: When dealing with big reports, minimize the usage of report variables/formulas and try to move them to the universe to deliver high-performance reports.

Level 3 optimization - Database level

Examine the execution plan of the SQL: Determine the execution plan of the BO-generated SQL in the target database. EXPLAIN PLAN is a handy tool for estimating resource requirements in advance. It displays the execution plan chosen by the Oracle optimizer without executing the statement and gives insight into how to make improvements at database level.
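For example, in Oracle the plan for a BO-generated query (the query shown here is just a placeholder) can be inspected roughly like this:

-- Ask Oracle for the plan without running the statement
EXPLAIN PLAN FOR
SELECT store_id, SUM(revenue)
FROM   fact_sales
GROUP BY store_id;

-- Display the plan chosen by the optimizer
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);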

Level 4 optimization - Server level

If the performance of the system deteriorates when reports are accessed by a large number of users over the web, then fix the problem at the fourth level, i.e., server level:
-> Scalable system
-> Event-based scheduling
-> Report Server/Job Server closer to the database server
-> Maximum allowed size of cache
