
Steps on Oracle Performance Tuning Tips at atoz help blog

Written by M.L on Monday, 23 April 2012



SQL / PL/SQL Tips
Check the operators used. The operators used in a WHERE clause directly affect how fast a query runs. Here are the key operators used in the WHERE clause, ordered by their performance. Those at the top will produce results faster than those listed at the bottom.
· =
· >, >=, <, <=
· LIKE
· <>
The lesson here is to use = as much as possible and <> as little as possible.
Check the operands used. The operands used in a WHERE clause also directly affect how fast a query runs. Here are the key operands used in the WHERE clause, ordered by their performance. Those at the top will produce results faster than those listed at the bottom.
· A single literal used by itself on one side of an operator
· A single column name used by itself on one side of an operator, a single parameter used by itself on one side of an operator
· A multi-operand expression on one side of an operator
· A single exact number on one side of an operator
· Other numeric data (other than exact numbers), date and time
· Character data, NULLs

Don't include code that doesn't do anything, such as:
WHERE 1=0
or
WHERE 1=1
Avoid WHERE clauses that are non-sargable. "Sargable" comes from "Search ARGument" and refers to a WHERE clause that compares a column to a constant value.

· If a WHERE clause is sargable, this means that it can take advantage of an    index (assuming one is available) to speed completion of the query.
· If a WHERE clause is non-sargable, this means that the WHERE clause (or at least part of it) cannot take advantage of an index, instead performing a table/index scan, which may cause the query's performance to suffer.
· Non-sargable search arguments in the WHERE clause, such as "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and "LIKE '%500'", generally (but not always) prevent the query optimizer from using an index to perform a search.
· In addition, expressions that include a function on a column, expressions that have the same column on both sides of the operator, or comparisons against another column (rather than a constant) are not sargable.
· If the WHERE clause includes both sargable and non-sargable clauses, then at least the sargable clauses can use an index (if one exists) to help access the data quickly.
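As a sketch of the difference (assuming a hypothetical orders table with an index on order_date), compare a non-sargable and a sargable version of the same filter:

```sql
-- Non-sargable: TRUNC() is applied to the indexed column,
-- so the optimizer generally cannot use the index on order_date.
SELECT order_id
FROM   orders
WHERE  TRUNC(order_date) = DATE '2012-04-23';

-- Sargable: the column stands alone on one side of each operator,
-- so an index range scan on order_date becomes possible.
SELECT order_id
FROM   orders
WHERE  order_date >= DATE '2012-04-23'
AND    order_date <  DATE '2012-04-24';
```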
Avoid functions in the WHERE clause.
The clause:
WHERE SUBSTRING (firstname, 1,1) = 'm'
Can be rewritten like this:
WHERE firstname like 'm%'
Both of these WHERE clauses produce the same result, but the first one is non-sargable (it uses a function) and will run slowly, while the second one is sargable and will run much faster.
Check whether the WHERE clause can be rewritten to separate the function from the indexed column.

Function acts directly on the column, so the index cannot be used:
SELECT member_number, first_name, last_name
FROM members
WHERE DATEDIFF(yy, dateofbirth, GETDATE()) > 21
Function has been separated from the column, so an index can be used:
SELECT member_number, first_name, last_name
FROM members
WHERE dateofbirth < DATEADD(yy, -21, GETDATE())
Avoid NOT in the WHERE clause. For example:
WHERE NOT column_name > 5
can be rewritten as
WHERE column_name <= 5
Create a new index. Create an index on the column if you cannot avoid wrapping the column in a function.
If you can't avoid using a function on a column in the WHERE clause (and there is no other way to rewrite the clause), consider creating an index on a computed column instead. Because of the additional overhead required for indexes on computed columns, only do this if you need to run the same query over and over in your application, thereby justifying the overhead.
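In Oracle, the equivalent of an index on a computed column is a function-based index. A minimal sketch, assuming a hypothetical members table:

```sql
-- Function-based index: Oracle's counterpart to indexing a computed column.
CREATE INDEX members_upper_lname_idx
    ON members (UPPER(last_name));

-- This query can now use the index even though a function wraps the column.
SELECT member_number
FROM   members
WHERE  UPPER(last_name) = 'SMITH';
```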
Choose the EXISTS clause in place of IN.
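A sketch of the rewrite, assuming hypothetical customer and orders tables joined on customer_number:

```sql
-- IN with a subquery: the subquery result set may be fully built first.
SELECT customer_name
FROM   customer c
WHERE  c.customer_number IN
       (SELECT o.customer_number FROM orders o);

-- EXISTS: the correlated subquery can stop at the first matching row.
SELECT customer_name
FROM   customer c
WHERE  EXISTS
       (SELECT 1 FROM orders o
        WHERE  o.customer_number = c.customer_number);
```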
If you use LIKE in your WHERE clause, try to use one or more leading characters in the clause, if at all possible. If you use a leading character in your LIKE clause, the Query Optimizer can potentially use an index to perform the query, speeding performance and reducing the load.
But if the leading character in a LIKE clause is a wildcard, the Query Optimizer will not be able to use an index, and a table scan must be run, reducing performance and taking more time.
The more leading characters you can use in the LIKE clause, the more likely the Query Optimizer will find and use a suitable index.
LIKE 'm%' Not LIKE '%m'
Choose BETWEEN when the choice is between IN and BETWEEN. This:
SELECT customer_number, customer_name
FROM customer
WHERE customer_number IN (1000, 1001, 1002, 1003, 1004)

is much less efficient than this:

SELECT customer_number, customer_name
FROM customer
WHERE customer_number BETWEEN 1000 and 1004
If possible, avoid the SUBSTRING function and use a LIKE condition instead, for better performance. Instead of doing this:
WHERE SUBSTRING (column_name, 1,1) = 'b'
Try using this instead:
WHERE column_name LIKE 'b%'
If you decide to make this choice, keep in mind that you will want your LIKE condition to be sargable, which means that you cannot place a wildcard in the first position.
Generally, avoid using optimizer hints in your queries. Optimizer hints are special keywords included with a query to force how the Query Optimizer runs it. Using a hint makes the Query Optimizer static, preventing it from dynamically adapting to the current environment for the given query. More often than not, this hurts rather than helps performance.
If you think a hint might be necessary to optimize your query, be sure you first do all of the following:
· Update the statistics on the relevant tables.
· If the problem query is inside a stored procedure, recompile it. 
· Review the search arguments to see if they are sargable, and if not, try to rewrite them so that they are sargable.
· Review the current indexes, and make changes if necessary.
Don't use ORDER BY in your SELECT statements unless you really need to, as it adds a lot of extra overhead.
Reduce sorting overheads. Sorting often occurs when any of the following Transact-SQL statements are executed:
· ORDER BY
· GROUP BY
· SELECT DISTINCT
· UNION
· CREATE INDEX (generally not as critical, as it happens much less often)
In many cases, these commands cannot be avoided. On the other hand, there are a few ways that sorting overhead can be reduced. These include:
· Keep the number of rows to be sorted to a minimum. Do this by only returning those rows that absolutely need to be sorted.
· Keep the number of columns to be sorted to a minimum. In other words, don't sort more columns than required.
· Keep the width (physical size) of the columns to be sorted to a minimum.
· Sort columns with numeric datatypes instead of character datatypes.
If you have to sort by a particular column often, consider making that column a clustered index.
Place the most frequently found values first when using the IN operator.
Order the list of values so that the most frequently found values are placed at the front of the list and the less frequently found values at the end.
Try using a WHERE clause instead of a HAVING clause.
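The point is to filter rows before they are grouped rather than after. A sketch, assuming a hypothetical orders table:

```sql
-- HAVING filters after all rows have already been grouped.
SELECT   customer_number, COUNT(*)
FROM     orders
GROUP BY customer_number
HAVING   customer_number > 1000;

-- WHERE filters rows before grouping, so fewer rows are grouped.
SELECT   customer_number, COUNT(*)
FROM     orders
WHERE    customer_number > 1000
GROUP BY customer_number;
```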

Don't use the GROUP BY clause without an aggregate function.
The GROUP BY clause can be used with or without an aggregate function. But if you want optimum performance, don't use the GROUP BY clause without an aggregate function. You can accomplish the same end result by using the DISTINCT option instead, and it is faster.
For example, you could write your query two different ways:
USE Northwind
SELECT OrderID
FROM [Order Details]
WHERE UnitPrice > 10
GROUP BY OrderID
or
USE Northwind
SELECT DISTINCT OrderID
FROM [Order Details]
WHERE UnitPrice > 10
Both of the above queries produce the same results, but the second one will use fewer resources and perform faster.
Avoid using * in the SELECT clause; it creates extra work for the optimizer.
Reduce the number of trips to the database.
Use DECODE to reduce processing.
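DECODE lets one pass over a table replace several separate queries. A sketch, assuming a hypothetical emp table with a dept_no column:

```sql
-- Instead of running one COUNT query per department, a single pass
-- with DECODE counts two departments at once: DECODE returns 1 for a
-- matching row and NULL otherwise, and COUNT ignores NULLs.
SELECT COUNT(DECODE(dept_no, 10, 1)) AS dept_10_count,
       COUNT(DECODE(dept_no, 20, 1)) AS dept_20_count
FROM   emp;
```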
Use TRUNCATE instead of DELETE when removing all rows. Note: a TRUNCATE cannot be rolled back.
Check commits. Don't COMMIT inside a loop.

To reduce resource usage, COMMIT only when required.
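A minimal PL/SQL sketch (assuming a hypothetical emp table) of committing once after the batch rather than once per row:

```sql
BEGIN
  FOR rec IN (SELECT emp_id FROM emp) LOOP
    UPDATE emp
    SET    salary = salary * 1.05
    WHERE  emp_id = rec.emp_id;
    -- no COMMIT here: committing per row multiplies log writes and waits
  END LOOP;
  COMMIT;  -- one commit when the whole batch is done
END;
/
```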
Use Count(1) instead of  Count (*)
Minimize subqueries
Reduce SQL load by using stored functions
Always use Table Aliases
Use NOT EXISTS in place of NOT IN
Use JOIN in place of EXISTS
Use EXISTS in Place of Distinct
Avoid calculations and NOT on indexed columns
Use >= instead of  >
Use UNION in place of OR (on indexed columns)
Use IN in place of OR
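Two of the one-line rewrites above, sketched against hypothetical emp and dept tables:

```sql
-- NOT EXISTS in place of NOT IN (also safer when the
-- subquery can return NULLs):
SELECT d.dept_no
FROM   dept d
WHERE  NOT EXISTS
       (SELECT 1 FROM emp e WHERE e.dept_no = d.dept_no);

-- UNION of two indexed predicates in place of OR, so each
-- branch can use its own index:
SELECT emp_id FROM emp WHERE dept_no = 10
UNION
SELECT emp_id FROM emp WHERE job_code = 'CLERK';
-- instead of:
-- SELECT emp_id FROM emp WHERE dept_no = 10 OR job_code = 'CLERK';
```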
Avoid IS NULL and IS NOT NULL on indexed columns.
SELECT ….
FROM emp
WHERE emp_id IS NULL

can be written as

SELECT …
FROM emp
WHERE emp_id >= 0;
Use UNION ALL in place of UNION (where possible).
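UNION must eliminate duplicates from the combined result, which typically costs a sort; UNION ALL skips that work. A sketch with hypothetical tables:

```sql
-- UNION performs duplicate elimination on the combined rows.
SELECT customer_number FROM current_customers
UNION
SELECT customer_number FROM archived_customers;

-- UNION ALL returns all rows without the duplicate-elimination step;
-- use it when the two sets cannot overlap (or duplicates are acceptable).
SELECT customer_number FROM current_customers
UNION ALL
SELECT customer_number FROM archived_customers;
```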
SQL*Loader
· Disable indexes and constraints. For conventional data loads only, disabling indexes and constraints before loading can increase performance.

· Use a larger bind array. For conventional data loads, larger bind arrays limit the number of calls to the database and increase performance. The size of the bind array is specified using the BINDSIZE parameter.

When you set the BINDSIZE parameter, set the same value for READSIZE.

· Use ROWS=<n> to commit less frequently. For conventional data loads, the ROWS parameter specifies the number of rows per commit. Issuing fewer commits increases performance.

· Even for conventional path loads, always run SQL*Loader directly on the database server rather than across a network (especially in a production environment).
sqlldr / control=xyz.ctl bindsize=5120000 readsize=5120000 rows=1000000

