
1. Joins in SQL

Join conditions are important in any SQL statement that involves more than one table. A join condition is not strictly mandatory in such SQL, but omitting one causes a large amount of I/O: without join conditions, the SQL performs full table scans (in effect, a Cartesian product) to fetch the data, which is dangerous for large-volume tables. Typically, if n tables appear in the FROM clause of a single SQL statement, there should be n-1 join conditions. A missing join condition can force a full table scan and degrade performance in a production environment, since it consumes far more resources; such an execution during peak load hours is dangerous for the business. Check: proper join conditions are present in the SQL statement.
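The n-1 rule can be illustrated with a small sketch. This uses SQLite via Python purely to demonstrate the general principle; the two-table schema and all names are invented for the example:

```python
import sqlite3

# Hypothetical schema: 100 orders spread over 10 customers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER);
    CREATE TABLE customers (customer_id INTEGER, name TEXT);
""")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 10) for i in range(100)])
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, "c%d" % i) for i in range(10)])

# Two tables, one (= 2 - 1) join condition: one row per order.
joined = conn.execute("""
    SELECT COUNT(*) FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
""").fetchone()[0]

# Missing join condition: a Cartesian product, 100 x 10 rows.
cartesian = conn.execute(
    "SELECT COUNT(*) FROM orders o, customers c").fetchone()[0]

print(joined)     # 100
print(cartesian)  # 1000
```

On real production tables the multiplication is the same, just with millions of rows on each side.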

2. Filtering Data with Conditions


This refers to the WHERE clause of the SQL statement. Since full table scans can be expensive, an index scan is often the better way to retrieve data: compared with the I/O performed on a large-volume table, an index scan can fetch the required rows with far fewer I/O operations. To get an index scan in the execution plan, there must be a proper filtering condition on an indexed column; only a condition on an indexed column can make use of an index. The following conditions make index access paths unavailable (i.e. they will NOT cause an index scan):

- column1 > column2, column1 < column2, column1 >= column2, or column1 <= column2, where column1 and column2 are in the same table
- column IS NULL
- column IS NOT NULL
- column NOT IN (...)
- column != expr
- column LIKE '%pattern', regardless of whether column is indexed
- expr = expr2, where expr is an expression that applies an operator or function to a column, regardless of whether the column is indexed
- a NOT EXISTS subquery
- the ROWNUM pseudo-column in a view
- any condition involving a column that is not indexed

Any SQL statement that contains only these constructs, and no others that make index access paths available, must use a full table scan.

There will be times when you must decide whether an index should be created on a particular column. This depends on factors such as how frequently the SQL statement executes and which tables it involves. If the SQL statement is a one-off that does a full table scan, creating an index on the table may not be worthwhile; instead, consider running it in off-peak hours. But if the SQL statement is part of routine functionality and runs frequently, creating an index may help it perform better. Note that creating an index to improve the performance of one SQL statement can adversely affect DML (insert/update/delete) operations on the table, so the decision to create an index should take the table's DML activity into account as well. Check: the SQL makes use of available indexes on the table via proper filtering conditions.
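The difference between a sargable predicate and one wrapped in an expression can be seen directly in an execution plan. The sketch below uses SQLite's EXPLAIN QUERY PLAN (Oracle's equivalent would be EXPLAIN PLAN / DBMS_XPLAN); the table, column, and index names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (emp_id INTEGER, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(i, i * 100) for i in range(1000)])
conn.execute("CREATE INDEX idx_emp_id ON emp (emp_id)")

def plan(sql):
    # The fourth column of EXPLAIN QUERY PLAN output is the plan detail.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# A plain condition on the indexed column can use the index.
indexed = plan("SELECT salary FROM emp WHERE emp_id = 42")

# Wrapping the indexed column in an expression disables the index,
# forcing a scan of the whole table.
full_scan = plan("SELECT salary FROM emp WHERE emp_id + 0 = 42")

print(indexed)    # plan mentions the index
print(full_scan)  # plan shows a table scan
```

The same principle drives the Oracle list above: the optimizer can only seek into an index when the bare indexed column appears on one side of the comparison.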

3. Sub Queries
Subqueries are simply SQL statements nested within a SQL statement, so executing the whole query requires executing the subqueries too. If a subquery performs badly, the entire SQL statement performs badly and eats up resources; there have been many instances of poorly performing SQL in live environments caused by subqueries. Check: tune subqueries with all the options that apply to a standalone SQL statement, and consider rephrasing the SQL to replace subqueries with joins.
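A minimal sketch of the subquery-to-join rewrite, using SQLite via Python with an invented schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (dept_id INTEGER, location TEXT);
    CREATE TABLE emp (emp_id INTEGER, dept_id INTEGER);
""")
conn.executemany("INSERT INTO dept VALUES (?, ?)",
                 [(1, 'NY'), (2, 'LA'), (3, 'NY')])
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(10, 1), (11, 2), (12, 3), (13, 2)])

# Subquery form: employees in New York departments.
sub = conn.execute("""
    SELECT emp_id FROM emp
    WHERE dept_id IN (SELECT dept_id FROM dept WHERE location = 'NY')
    ORDER BY emp_id
""").fetchall()

# Equivalent join form, often easier for the optimizer to work with.
join = conn.execute("""
    SELECT e.emp_id FROM emp e
    JOIN dept d ON e.dept_id = d.dept_id
    WHERE d.location = 'NY'
    ORDER BY e.emp_id
""").fetchall()

print(sub)   # [(10,), (12,)]
print(join)  # same rows
```

One caveat when rephrasing: if the inner table's join key is not unique, the join form can return duplicate rows where the IN form would not, so a DISTINCT may be needed to preserve the original semantics.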

4. SQL on Large Volume Tables


As the data volume grows, more tables join the large-volume list, so consider the current data volumes on the production database to identify the large-volume tables. Many times, sessions that fired SQL statements doing full table scans on large-volume tables had to be terminated because of how long the SQL took to complete. Such scans can also result in an ORA-01555 "snapshot too old" error if simultaneous DML operations occur on the same large-volume table. Check: no full table scans on large-volume tables appear in the execution plan; consider adding filtering conditions or rephrasing the SQL to tune it for better performance.

5. Full Table Scans


One thing to keep in mind is that full table scans (FTS) are not always bad. The table's data volume and the percentage of records to be retrieved decide whether an FTS should be removed. As a rule of thumb, when only 5 to 10% of the records are to be retrieved, consider removing the FTS, if any; when 80 to 90% of the rows are needed, an FTS may well be the better option. SQL statements doing an FTS on large tables eat up so many resources that they sometimes have to be terminated mid-execution. Check: analyze the data volume, then decide whether the FTS should be removed.

6. Data Sorting
This relates to the ORDER BY clause of a SQL statement. Sorting in Oracle is performed on the entire fetched result set, so if the result set is large, the sort takes considerable time and resources to complete. SQL statements with an ORDER BY on result sets from large-volume tables have shown slower response times. Removing such unneeded sort clauses may require a change in application configuration, and also customer acceptance of the change in functionality. Check: analyze the data volume and remove ORDER BY clauses that are not actually required.
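An index on the sort column can also eliminate the sort step entirely, since the rows come back pre-ordered. A sketch using SQLite's plan output (names invented; in Oracle the analogous symptom is a SORT ORDER BY step disappearing from the plan):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_time INTEGER, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i * 7 % 100, "x") for i in range(100)])

def plan(sql):
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT event_time FROM events ORDER BY event_time"

# No index: the engine must materialize and sort the whole result set.
before = plan(q)
conn.execute("CREATE INDEX idx_event_time ON events (event_time)")
# With the index: rows are read in order, no separate sort step.
after = plan(q)

print(before)  # plan shows an explicit sort (TEMP B-TREE)
print(after)   # plan walks the index instead
```

Whether adding such an index is worth it follows the same cost/benefit reasoning as in section 2.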

7. Use of SQL Literals


SQL statements that use literals cannot be shared. As a result, each and every query submitted to the database is hard parsed, which increases elapsed time significantly. PERMANENT ACTION: investigate the application logic for possible use of bind variables instead of literals. TEMPORARY ACTION: alternatively, the "cursor_sharing" parameter may be set to "force".
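At the application layer, the fix is to keep one statement text with placeholders and pass values as parameters. A sketch in Python against SQLite (table names invented; in Oracle, sharing one statement shape is what keeps the shared pool from filling with near-duplicate cursors):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_id INTEGER, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(i, i * 10) for i in range(5)])

# BAD: a distinct statement string per value -> one hard parse each.
literal_stmts = {"SELECT balance FROM accounts WHERE account_id = %d" % i
                 for i in range(5)}

# GOOD: one shared statement text; values are supplied as bind parameters.
bind_stmt = "SELECT balance FROM accounts WHERE account_id = ?"
results = [conn.execute(bind_stmt, (i,)).fetchone()[0] for i in range(5)]

print(len(literal_stmts))  # 5 distinct statement texts to parse
print(results)             # [0, 10, 20, 30, 40]
```

Bind parameters also close off SQL injection, which is a second reason to prefer them over string-built literals.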

8. Reduce Explicit Cursors


Use BULK COLLECT or FORALL instead of row-by-row explicit cursor loops.

9. Missing Indexes
Check for any missing indexes based on the queries the application runs. This becomes critical as the data in the tables grows large.

10. Repeated execution of an insert/update/delete statement within a cursor loop


For example, consider the code block below:

    for data in ( select * from hcs_lipedit )
    loop
      insert into lipedit
        ( edit_field_sequence, file_field_name, file_field_description,
          payor_code, plan_code, category_1, category_2,
          payor_plan_effective_from_date, payor_plan_end_date,
          payor_edit_delete_flag )
      values
        ( data.edit_field_sequence, trim(data.file_field_name),
          trim(data.file_field_description), data.payor_code, data.plan_code,
          trim(data.category_1), trim(data.category_2),
          data.payor_plan_effective_from_date, data.payor_plan_end_date,
          trim(data.payor_edit_delete_flag) );
    end loop;

This could instead be rewritten as a single statement:

    insert into lipedit
      ( edit_field_sequence, file_field_name, file_field_description,
        payor_code, plan_code, category_1, category_2,
        payor_plan_effective_from_date, payor_plan_end_date,
        payor_edit_delete_flag )
    select edit_field_sequence, trim(file_field_name),
           trim(file_field_description), payor_code, plan_code,
           trim(category_1), trim(category_2),
           payor_plan_effective_from_date, payor_plan_end_date,
           trim(payor_edit_delete_flag)
    from hcs_lipedit;
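The same rewrite can be sketched in SQLite via Python: a client-side loop issuing one INSERT per source row versus a single set-based INSERT ... SELECT. The table names mirror the example above, but the schema is cut down to two columns for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hcs_lipedit (edit_field_sequence INTEGER, file_field_name TEXT);
    CREATE TABLE lipedit (edit_field_sequence INTEGER, file_field_name TEXT);
""")
conn.executemany("INSERT INTO hcs_lipedit VALUES (?, ?)",
                 [(i, "  field%d  " % i) for i in range(100)])

# Row-by-row (mirrors the cursor loop): 100 separate statements.
for seq, name in conn.execute("SELECT * FROM hcs_lipedit").fetchall():
    conn.execute("INSERT INTO lipedit VALUES (?, ?)", (seq, name.strip()))
loop_rows = conn.execute("SELECT COUNT(*) FROM lipedit").fetchone()[0]
conn.execute("DELETE FROM lipedit")

# Set-based: one statement does all the work inside the engine.
conn.execute("""
    INSERT INTO lipedit
    SELECT edit_field_sequence, trim(file_field_name) FROM hcs_lipedit
""")
set_rows = conn.execute("SELECT COUNT(*) FROM lipedit").fetchone()[0]

print(loop_rows, set_rows)  # 100 100
```

Both produce identical results; the set-based form avoids the per-row round trips and context switches that make the loop version slow on large volumes.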

11. Unnecessary Redo Log Generation

For intermediate or staging data, it is good practice to use a global temporary table instead of a permanent table. Global temporary tables generate less redo, and they avoid the need to purge the table afterwards and to reclaim the unused space left behind in its indexes.

12. Unnecessary Repetition of Queries


Make sure there are no unnecessary queries in the code. This can be a huge hit based on the data and the number of times the query gets executed.

13. Disable the Action buttons after they are clicked


This is especially severe when the action behind a button executes long-running queries: the user may be under the impression that the click did not register and end up clicking multiple times, thereby submitting the same query multiple times.

14. Using Flat tables


Consider using flat (denormalized) table structures wherever possible to reduce the number of joins in queries. This is specifically recommended for queries over tables with more than a million records where the same fields are always used in the joins.

15. Release of critical resources


Close cursors and release other critical database resources as soon as they are no longer needed.

16. Use of Optimizer hints


Use optimizer hints such as /*+ ALL_ROWS */, /*+ APPEND */ and /*+ INDEX */ wherever applicable.

17. Use ForAll instead of For loop


Instead of:

    for lMonthYearArrCounter in lMonthYearArr.first .. lMonthYearArr.last
    loop
      update temp_linking_per_report_gt
         set visits_linked24     = lVisitsLinked24Arr(lMonthYearArrCounter),
             visits_linked24to48 = lVisitsLinked24to48Arr(lMonthYearArrCounter),
             visits_linked48     = lVisitsLinked48Arr(lMonthYearArrCounter)
       where supervisor_id = lSupervisorIdArr(lMonthYearArrCounter)
         and month_year    = lMonthYearArr(lMonthYearArrCounter)||'/'||iYear;
    end loop;

write:

    forall lMonthYearArrCounter in lMonthYearArr.first .. lMonthYearArr.last
      update temp_linking_per_report_gt
         set visits_linked24     = lVisitsLinked24Arr(lMonthYearArrCounter),
             visits_linked24to48 = lVisitsLinked24to48Arr(lMonthYearArrCounter),
             visits_linked48     = lVisitsLinked48Arr(lMonthYearArrCounter)
       where supervisor_id = lSupervisorIdArr(lMonthYearArrCounter)
         and month_year    = lMonthYearArr(lMonthYearArrCounter)||'/'||iYear;
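FORALL helps because it binds the whole array in one round trip between PL/SQL and the SQL engine instead of one context switch per element. A database-agnostic analogue in Python is executemany(), which batches a parameter list into a single call; the schema here is invented for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_report (supervisor_id INTEGER, visits INTEGER)")
conn.executemany("INSERT INTO temp_report VALUES (?, 0)",
                 [(i,) for i in range(50)])

updates = [(i * 2, i) for i in range(50)]  # (visits, supervisor_id) pairs

# Loop form would be: one execute() call per pair, 50 separate executions.
# Batched form: one call binds every pair, like FORALL binds the whole array.
conn.executemany(
    "UPDATE temp_report SET visits = ? WHERE supervisor_id = ?", updates)

total = conn.execute("SELECT SUM(visits) FROM temp_report").fetchone()[0]
print(total)  # 0 + 2 + ... + 98 = 2450
```

The result is identical to the loop; only the number of statement executions (and, in Oracle's case, PL/SQL-to-SQL context switches) changes.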
