Joins in SQL
Join conditions are important in any SQL statement that involves more than one table. Join conditions are not strictly mandatory in such SQLs, but omitting them causes a great deal of I/O: without join conditions, the SQL performs full table scans (and produces a Cartesian product) to fetch the data, which is dangerous for large-volume tables. Typically there should be n-1 join conditions when n tables appear in the FROM clause of a single SQL statement. A missing join condition may result in a full table scan and degrade performance in a production environment, since it consumes more resources; such an execution during peak load hours is dangerous for the business. Check: Proper join conditions are present in the SQL statement.
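The effect of a missing join condition can be sketched in a few lines. This is a minimal illustration using SQLite through Python's sqlite3 module as a stand-in for Oracle; the orders/customers tables are hypothetical and exist only for the example.

```python
import sqlite3

# Hypothetical two-table schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders    (order_id INTEGER, customer_id INTEGER);
    CREATE TABLE customers (customer_id INTEGER, name TEXT);
    INSERT INTO orders    VALUES (1, 10), (2, 10), (3, 20);
    INSERT INTO customers VALUES (10, 'Acme'), (20, 'Globex');
""")

# No join condition: every order pairs with every customer (Cartesian product).
cartesian = conn.execute(
    "SELECT COUNT(*) FROM orders o, customers c").fetchone()[0]

# Proper join condition: n tables, n-1 join predicates (2 tables, 1 predicate).
joined = conn.execute(
    "SELECT COUNT(*) FROM orders o, customers c "
    "WHERE o.customer_id = c.customer_id").fetchone()[0]

print(cartesian, joined)  # 3 orders x 2 customers = 6 rows, vs 3 matched rows
```

On real tables with millions of rows, the Cartesian product grows multiplicatively, which is why the missing predicate shows up as runaway I/O rather than a wrong-looking result.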
The following constructs make index access paths unavailable:

- column IS NULL
- column IS NOT NULL
- column NOT IN (...)
- column != expr
- column LIKE '%pattern' (leading wildcard), regardless of whether column is indexed
- expr = expr2, where expr is an expression that applies an operator or function to a column, regardless of whether the column is indexed
- NOT EXISTS subquery
- ROWNUM pseudo-column in a view
- any condition on a column that is not indexed

Any SQL statement whose predicates contain only these constructs, and no others that make index access paths available, must use full table scans.
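The "function on an indexed column" case in the list above is the one most often written by accident, and it is easy to see in an execution plan. A small sketch, again using SQLite's EXPLAIN QUERY PLAN in place of Oracle's explain plan; the employees table and index name are made up for the example.

```python
import sqlite3

# Assumed table and index, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (emp_id INTEGER, last_name TEXT);
    CREATE INDEX idx_last_name ON employees (last_name);
""")

def plan(sql):
    # Concatenate the 'detail' column of the EXPLAIN QUERY PLAN rows.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

indexed = plan("SELECT * FROM employees WHERE last_name = 'Smith'")
wrapped = plan("SELECT * FROM employees WHERE upper(last_name) = 'SMITH'")

print(indexed)  # a SEARCH step USING INDEX idx_last_name
print(wrapped)  # a SCAN step: the function on the column disables the index
```

The same predicate, merely wrapped in upper(), flips the access path from an index search to a full scan.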
There will be times when you must decide whether an index should be created on a particular column. This depends on factors such as the frequency of execution of the SQL statement and the tables involved. If the SQL statement is a one-off that performs a full table scan, creating an index on the table involved may not be worthwhile; such a SQL can instead be scheduled in off-peak hours. But if the SQL statement in question is part of routine functionality and executes frequently, creating an index may help it perform better. Please note that creating an index to improve the performance of one SQL statement may adversely affect DML (insert/update/delete) operations on the tables involved, since each index must be maintained on every change. The decision to create an index should therefore also consider the DML activity on the table. Check: SQL makes use of available indexes on the table, via proper filtering conditions.
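The before/after effect of adding an index can be checked directly from the execution plan. A sketch under the same SQLite-for-Oracle assumption; the accounts table and index name are hypothetical.

```python
import sqlite3

# Assumed table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (acct_id INTEGER, region TEXT)")

def plan(sql):
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM accounts WHERE region = 'EMEA'"

before = plan(query)  # a SCAN step: no usable index on the filter column
conn.execute("CREATE INDEX idx_region ON accounts (region)")
after = plan(query)   # a SEARCH step USING INDEX idx_region

print(before)
print(after)
# Trade-off: every INSERT/UPDATE/DELETE on accounts now also maintains idx_region.
```

This is the check worth running before committing to a new index: confirm the plan actually changes, then weigh that gain against the extra index maintenance on every DML.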
3. Sub Queries
Sub queries are simply SQL statements nested within a SQL statement. To execute the entire query, the sub queries must be executed too; if a sub query performs poorly, the entire SQL statement performs poorly and eats up resources. There have been many instances of non-performing SQL statements in live environments caused by the sub queries involved. Check: Sub queries should be examined with all the tuning options that apply to a standalone SQL statement. Where possible, rephrase the SQL statement to remove sub queries and use joins instead.
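The subquery-to-join rewrite can be sketched concretely. A minimal example with made-up depts/emps tables, verifying that the two forms return identical results (the DISTINCT in the join form guards against row duplication when a department has several employees).

```python
import sqlite3

# Hypothetical schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE depts (dept_id INTEGER, dept_name TEXT);
    CREATE TABLE emps  (emp_id INTEGER, dept_id INTEGER);
    INSERT INTO depts VALUES (1, 'Sales'), (2, 'IT'), (3, 'HR');
    INSERT INTO emps  VALUES (100, 1), (101, 1), (102, 3);
""")

# Sub query form: departments that have at least one employee.
sub = conn.execute("""
    SELECT dept_name FROM depts
    WHERE dept_id IN (SELECT dept_id FROM emps)
    ORDER BY dept_name
""").fetchall()

# Equivalent join form.
join = conn.execute("""
    SELECT DISTINCT d.dept_name
    FROM depts d JOIN emps e ON e.dept_id = d.dept_id
    ORDER BY d.dept_name
""").fetchall()

print(sub == join, sub)
```

Whether the join form is actually faster depends on the optimizer and data volumes, which is why the text frames it as an option to try, not a blanket rule.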
6. Data Sorting
This item relates to the ORDER BY clause in a SQL statement. Sorting operations in Oracle are performed on the entire fetched result set, so if the result set is considerably large, the sort takes time and resources to complete. SQLs with an ORDER BY clause on result sets from large-volume tables have resulted in slower response times. In one case, an application configuration change was required to remove sorting clauses that were not actually needed, which also required customer acceptance of a change in functionality. Check: Analyze the data volume and then consider whether the sort should be removed or not.
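The cost of an ORDER BY is visible in the plan before it is visible in response times. A sketch (hypothetical events table) using SQLite, which reports an explicit "USE TEMP B-TREE FOR ORDER BY" step when it must materialize and sort the result set, while an ORDER BY on an indexed column can be satisfied by reading the index in order.

```python
import sqlite3

# Assumed table and index, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (event_id INTEGER, created_at TEXT, payload TEXT);
    CREATE INDEX idx_created ON events (created_at);
""")

def plan(sql):
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

sorted_plan   = plan("SELECT * FROM events ORDER BY payload")     # explicit sort step
indexed_order = plan("SELECT * FROM events ORDER BY created_at")  # index supplies order

print("TEMP B-TREE" in sorted_plan, "TEMP B-TREE" in indexed_order)
```

So besides removing a sort that is not needed, the other lever is an index whose order matches the ORDER BY, which lets the sort step disappear from the plan entirely.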
9. Missing Indexes
Check for any missing indexes based on the queries written. This becomes critical when the data in the tables becomes huge.
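This check can be partly automated: run each candidate query through the explain facility and flag any that fall back to a full table scan. A minimal sketch of that idea, with a hypothetical invoices table and SQLite's EXPLAIN QUERY PLAN standing in for Oracle's.

```python
import sqlite3

# Assumed table and index, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (inv_id INTEGER, status TEXT, amount REAL);
    CREATE INDEX idx_status ON invoices (status);
""")

def full_scans(queries):
    """Return the queries whose plan is a full scan with no index access."""
    flagged = []
    for sql in queries:
        details = " ".join(
            r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))
        if "SCAN" in details and "USING INDEX" not in details:
            flagged.append(sql)
    return flagged

queries = [
    "SELECT * FROM invoices WHERE status = 'OPEN'",  # served by idx_status
    "SELECT * FROM invoices WHERE amount > 100",     # no index on amount: scan
]
flagged_list = full_scans(queries)
print(flagged_list)
```

Each flagged query is a candidate for a new index on its filter column, subject to the DML trade-off discussed earlier.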
      category_1, category_2,
      payor_plan_effective_from_date, payor_plan_end_date,
      payor_edit_delete_flag
    ) values (
      data.edit_field_sequence,
      trim(data.file_field_name),
      trim(data.file_field_description),
      data.payor_code,
      data.plan_code,
      trim(data.category_1),
      trim(data.category_2),
      data.payor_plan_effective_from_date,
      data.payor_plan_end_date,
      trim(data.payor_edit_delete_flag)
    );
  end loop;

This could instead be rewritten with a single statement as follows:

  insert into lipedit (
    edit_field_sequence, file_field_name, file_field_description,
    payor_code, plan_code, category_1, category_2,
    payor_plan_effective_from_date, payor_plan_end_date,
    payor_edit_delete_flag
  )
  select
    edit_field_sequence,
    trim(file_field_name),
    trim(file_field_description),
    payor_code,
    plan_code,
    trim(category_1),
    trim(category_2),
    payor_plan_effective_from_date,
    payor_plan_end_date,
    trim(payor_edit_delete_flag)
  from hcs_lipedit;
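The same loop-versus-set-based contrast can be shown in miniature. This sketch uses made-up src/dst tables in place of hcs_lipedit/lipedit, and checks that a single INSERT ... SELECT produces exactly the rows the row-by-row loop does.

```python
import sqlite3

# Stand-in staging and target tables, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (field_name TEXT);
    CREATE TABLE dst_loop (field_name TEXT);
    CREATE TABLE dst_set  (field_name TEXT);
    INSERT INTO src VALUES ('  alpha '), ('beta  '), (' gamma');
""")

# Row-by-row: one statement per source row, like the PL/SQL loop above.
for (name,) in conn.execute("SELECT field_name FROM src"):
    conn.execute("INSERT INTO dst_loop VALUES (trim(?))", (name,))

# Set-based: a single statement does the same work in one pass.
conn.execute("INSERT INTO dst_set SELECT trim(field_name) FROM src")

loop_rows = conn.execute("SELECT * FROM dst_loop ORDER BY 1").fetchall()
set_rows  = conn.execute("SELECT * FROM dst_set  ORDER BY 1").fetchall()
print(loop_rows == set_rows, set_rows)
```

The set-based form avoids the per-row statement overhead (and, in PL/SQL, the context switches between the procedural engine and the SQL engine), which is where the rewrite earns its speedup.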
It is good practice to use a Global Temporary Table instead of a permanent table for intermediate data. Global Temporary Tables avoid the need to purge the table afterwards and to reclaim the space left unused by its indexes.
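The key property being relied on is that the scratch data is private to the session and disappears with it. As a rough analogue of Oracle's Global Temporary Tables, this sketch uses SQLite TEMP tables over a shared in-memory database (note the analogy is loose: in Oracle the GTT definition is global and only the rows are session-private, whereas a SQLite TEMP table is entirely per-connection; the database name gtt_demo is made up).

```python
import sqlite3

# Two connections ("sessions") to the same shared in-memory database.
db_uri = "file:gtt_demo?mode=memory&cache=shared"
session_a = sqlite3.connect(db_uri, uri=True)
session_b = sqlite3.connect(db_uri, uri=True)

# Session A stages rows in a temporary table.
session_a.execute("CREATE TEMP TABLE stage (n INTEGER)")
session_a.execute("INSERT INTO stage VALUES (1), (2)")
own_rows = session_a.execute("SELECT COUNT(*) FROM stage").fetchone()[0]

# Session B cannot see the temp table at all.
try:
    session_b.execute("SELECT COUNT(*) FROM stage")
    visible_elsewhere = True
except sqlite3.OperationalError:  # "no such table: stage"
    visible_elsewhere = False

print(own_rows, visible_elsewhere)  # 2 False
```

When session A closes, its staged rows vanish with it, so there is nothing to purge and no index space left behind to reclaim.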