As per the standards, the keywords must be typed in uppercase and other information in
lowercase.
Subquery structure:
Outer query =
(
Inner Query
)
Note: a character value holds its spaces and is case-sensitive; you must use the same case in which it was typed in the script.

ARITHMETIC OPERATORS
NOTE: in an arithmetic expression containing all four operations, division and multiplication
have higher precedence than addition and subtraction. To override the order of precedence,
enclose the expression within parentheses ().
Example:
Write a query to display employee id, last name, salary, revised salary with a 100$ raise and the
annual revised salary of all employees.
SELECT employee_id, last_name, salary, salary+100 AS Revised,
(salary+100)*12 AS AnnualSalary
FROM employees;
Note: In an arithmetic expression having a null value, the result of the expression will also be null.
10+20*null = null
For Example:
Write a query to display employee id, last name, salary, commission, and net salary of employees.
CONCATENATION OPERATOR:
Combines the result of two or more columns and displays them as a single column. This
operator is written as ||.
NOTE: Character and date values should be enclosed within single quotes.
SQL> SELECT first_name || ' ' || last_name AS "Full Name"
2 FROM employees;
SQL> SELECT 'The Employee ' || first_name || ' ' || last_name || ' who works in ' || department_id
|| ' was hired on ' || hire_date AS message FROM employees;
The Quote Operator (Q Operator) is used to override the special meaning of a single quote inside a string.
Syntax:
q'[john's]'
A literal such as 'John's salary is' fails because of the embedded quote; with the Q operator it is written as shown below.
SQL> SELECT last_name || q'['s salary is]' || salary AS message
2 FROM employees;
Comparison Operators:
=            Equal to
<>, !=, ^=   Not equal to
The WHERE clause is optional and is used to limit the number of rows returned.
Example:
SQL> SELECT last_name, salary, first_name
2 FROM employees
3 WHERE first_name = 'john';
no rows selected
(Character values are case-sensitive, so 'john' matches nothing.)
SQL> SELECT last_name, salary, first_name
2 FROM employees
3 WHERE first_name = 'John';
LAST_NAME                 SALARY     FIRST_NAME
------------------------- ---------- -------------------
Chen                      8200       John
Seo                       2700       John
Russell                   14000      John
Using a date literal in the wrong format raises an error:
3 WHERE hire_date='2007/03/17';
WHERE hire_date='2007/03/17'
*
ERROR at line 3:
ORA-01861: literal does not match format string
LOGICAL Operators
They are used to join two conditions together.
Types:
AND
OR
NOT
For Examples:
Write a query to show employees whose salary is greater than 8000 and who were hired before
2007.
Write a query to display employees belonging to departments 10, 20, and 50.
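One way to write these two queries (a sketch against the HR sample schema used throughout; using EXTRACT for the year test is one of several options):

```sql
-- Salary above 8000 AND hired before 2007
SELECT employee_id, last_name, salary, hire_date
FROM employees
WHERE salary > 8000
AND EXTRACT(YEAR FROM hire_date) < 2007;

-- Employees in departments 10, 20 and 50
SELECT employee_id, last_name, department_id
FROM employees
WHERE department_id IN (10, 20, 50);
```

The IN list in the second query is shorthand for three conditions joined with OR.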
For Examples:
Display employees whose first name begins with 'A'.
Display employees who earn a commission:
SQL> SELECT last_name, commission_pct
2 FROM employees
3 WHERE commission_pct IS NOT NULL;
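The first-name example needs the LIKE operator with the % wildcard (a sketch; LIKE is case-sensitive, so 'A%' matches names stored with a capital A):

```sql
SELECT employee_id, first_name
FROM employees
WHERE first_name LIKE 'A%';
```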
ORDER BY clause
SELECT <column list> | *
FROM <table>
WHERE <condition>
ORDER BY <colname>;
Order By clause is used to sort the resultset in ascending/descending order based
on the given column.
Example 1:
Write a query to display employee id, last name and salary in the
ascending order of their last name.
SELECT employee_id, last_name, salary
FROM employees
ORDER BY last_name;
Example 2:
Modify the above query to show only those employees whose salary is in between
3000 and 5000
SELECT employee_id, last_name, salary
FROM employees
WHERE salary BETWEEN 3000 AND 5000
ORDER BY last_name;
Note: You can sort the result using multiple columns
Example 3:
Modify the above query to add the department id and sort the information first in the
ascending order of department and then in the descending order of their
salary.
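A possible answer for Example 3, keeping the earlier WHERE clause (a sketch; note that DESC applies only to the column it follows):

```sql
SELECT employee_id, last_name, department_id, salary
FROM employees
WHERE salary BETWEEN 3000 AND 5000
ORDER BY department_id ASC, salary DESC;
```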
The UNDEFINE command releases the memory of a variable declared using the
DEFINE command.
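A short DEFINE/UNDEFINE round trip in SQL*Plus (a sketch; the variable name dept is arbitrary):

```sql
DEFINE dept = 50
SELECT last_name, salary
FROM employees
WHERE department_id = &dept;
UNDEFINE dept
```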
FUNCTIONS
----------------
Oracle has mainly 2 categories of functions:
1. Single Row Function
Applies on each row independently and gives one result per row.
2. Multiple Row Function (Group Function)
Applies on each group of rows and gives one result per group.
Single Row Functions:
Types:
1. Character Function
2. Number Function
3. Date Function
4. Conversion Function
5. General Function
Character Function
----------------------------
A character function takes a character value as a parameter but can return either a
character or a numeric value.
Types:
1. CASE-MANIPULATION
a) UPPER() : converts the character value into upper case
b) LOWER() : converts the character value into lower case
c) INITCAP() : capitalizes the first letter of each word
Example 7:
SQL> SELECT upper(last_name), lower(last_name), initcap(last_name)
2 FROM employees;
Example 8:
SQL> SELECT employee_id, last_name, first_name
2 FROM employees
3 WHERE upper(first_name)='JOHN';
EMPLOYEE_ID LAST_NAME                 FIRST_NAME
----------- ------------------------- -------------------
110         Chen                      John
139         Seo                       John
145         Russell                   John
2. CHARACTER-MANIPULATION
a) CONCAT()
used to join two character values together.
Example 9:
SQL> SELECT concat('John','Smith') FROM dual;
CONCAT('J
---------
JohnSmith
Example 10:
SQL> SELECT concat('John',concat(' ','Smith')) FROM dual;
CONCAT('JO
----------
John Smith
Example 11:
SQL> SELECT concat(first_name, concat(' ',last_name)) AS FullName
2 FROM employees;
A nested function is a function within another function.
b) SUBSTR()
Extracts a string of determined length.
Example 12:
SQL> SELECT substr(first_name,1,3) AS code
2 FROM employees;
The first parameter indicates the column name,
the second parameter indicates the start position, and
the third parameter indicates the total number of characters to be extracted from the given
start position.
Example 13:
SQL> SELECT upper(substr(first_name, 1, 3))
2 FROM employees;
c) LENGTH()
returns the length of the given string.
Example 14:
SQL> SELECT first_name, length(first_name) AS total_char
2 FROM employees;
d) LPAD() and RPAD()
LPAD() returns an expression left-padded to the length of n characters with a character
expression.
RPAD() returns an expression right-padded to the length of n characters with a character
expression.
Example 15:
SQL> SELECT rpad(salary,10,'@')
2 FROM employees;
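LPAD() works the same way on the other side (a sketch; here the salary is padded on the left with * up to 10 characters):

```sql
SELECT lpad(salary, 10, '*') AS padded_salary
FROM employees;
```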
e) REPLACE()
Replaces a part of the text with the given text.
Example 16:
SQL> SELECT replace('Jack and Jue','J','Bl') FROM dual;
REPLACE('JACKA
--------------
Black and Blue
f) TRIM()
To remove the leading and trailing spaces
Example 17:
SQL> SELECT trim('   Welcome to Oracle   ') AS message
2 FROM dual;

MESSAGE
-----------------
Welcome to Oracle
Example 21:
SQL> SELECT mod(4,2) FROM dual;
MOD(4,2)
----------
0
Example 22:
SQL> SELECT mod(3,2) FROM dual;
MOD(3,2)
----------
1
3) Date Functions: these functions take a date value as a parameter and return
a date or numeric value.
SYSDATE is a pseudocolumn that returns the current system date.
Example 23:
SQL> SELECT sysdate FROM dual;
SYSDATE
---------
08-SEP-15
Example 24:
SQL> SELECT systimestamp FROM dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
08-SEP-15 01.09.43.851349 PM +03:00
SQL>
Note: You can perform arithmetic calculation with date values.
Example 25:
SQL> SELECT employee_id, last_name, hire_date, sysdate-hire_date AS total_days
2 FROM employees;
Example 26:
SQL> SELECT employee_id, last_name, hire_date, round(sysdate-hire_date) AS total_days
2 FROM employees;
Example 27:
SQL> SELECT employee_id, last_name, hire_date, round(sysdate-hire_date) AS total_days,
2 ROUND((sysdate-hire_date)/7) AS Total_Weeks
3 FROM employees;
MONTHS_BETWEEN() : used to return the total number of months between the given dates.
SQL> SELECT employee_id, last_name, hire_date, round(sysdate-hire_date) AS total_days,
2 ROUND((sysdate-hire_date)/7) AS Total_Weeks, months_between(sysdate, hire_date) AS
3 Total_Months
4 FROM employees;
NEXT_DAY() : this function returns the date of the next occurrence of the given weekday after the given date.
Example 28:
SQL> SELECT employee_id, last_name, hire_date, next_day(hire_date,'Sunday')
2 FROM employees;
4) Conversion Function: is used to convert the value from one type to another.
The conversion is performed by oracle in two methods:
a) Implicit Conversion
Here oracle will convert the value from one type to another automatically.
Example 30:
SQL> SELECT '10' + '20'
2 FROM dual;
'10'+'20'
----------
30
Example 31:
SQL> SELECT employee_id, last_name
2 FROM employees
3 WHERE hire_date='01-JUL-06';
EMPLOYEE_ID LAST_NAME
----------- -------------------------
194         McCain
b) Explicit Conversion
Here you must use a function to do the conversion.
i) TO_CHAR()
It converts the given value to a character type. It is used to format the output.
Example 32:
SQL> SELECT employee_id, last_name, to_char(hire_date,'YYYY / MM / DD HH:MI:SS AM')
AS hiredate
2 FROM employees;
Example 33:
SQL> SELECT to_char(sysdate,'Month DD YYYY Day HH24:MI:SS') AS today
2 FROM dual;
TODAY
------------------------------------
September 08 2015 Tuesday 13:30:24
ii) TO_NUMBER()
It converts the given character value to a number.
Example 34:
SQL> SELECT to_number('10') * to_number('2') FROM dual;

TO_NUMBER('10')*TO_NUMBER('2')
------------------------------
20
5. GENERAL Function:
They can be applied on any type of columns:
Types:
a) NVL() : it works with null values. It is used to convert a null value to a given value before
performing the calculation.
Example:
Write a query to display the employee id, last name, department id, salary, commission and net
salary of an employee.
SELECT employee_id, last_name, department_id, salary, commission_pct, salary+
(salary*NVL(commission_pct,0)) AS Net
FROM employees;
b) NVL2()
This function also works with null values. It is an extended version of NVL()
Example:
Write a query to display employee id, last name, salary, commission, and income of an employee.
The income should show "Salary Only" if the employee does not earn any commission; otherwise it
should show "Salary and Commission" as income.
SQL> SELECT employee_id, last_name, salary, commission_pct,
NVL2(commission_pct,'Salary and Commission','Salary Only') AS Income
2 FROM employees;
c) NULLIF() : works with null values.
It compares the two parameters passed and returns NULL if they match; otherwise it returns
the first parameter.
SQL> SELECT nullif('Oracle','oracle') from dual;

NULLIF
------
Oracle

SQL> SELECT nullif('Oracle','Oracle') from dual;

NULLIF
------

d) COALESCE() : it operates on null values and returns the first non-null parameter.
You can pass any number of parameters to this function.
SELECT coalesce(null,null,null,null,null,22,34,45,5,null) from dual;
e) IF THEN ELSE construct
i) CASE EXPRESSION
ii) DECODE()
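A sketch of both constructs side by side (the department labels Admin/Marketing/Other are made up for illustration):

```sql
-- CASE expression
SELECT last_name, department_id,
       CASE department_id
            WHEN 10 THEN 'Admin'
            WHEN 20 THEN 'Marketing'
            ELSE 'Other'
       END AS dept_label
FROM employees;

-- DECODE() equivalent: pairs of (search, result), last argument is the default
SELECT last_name, department_id,
       DECODE(department_id, 10, 'Admin',
                             20, 'Marketing',
                                 'Other') AS dept_label
FROM employees;
```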
Group Function: Operates on each group and gives one result per group:
Types:
1. SUM() : get the sum of range of values
2. COUNT() : get the total count of values in the range
3. MIN() : get the lowest value from range
4. MAX() : get the highest value from the range
5. AVG() : get the average value from the range of values.
By default the entire table will be considered as one group.
Group Function ignores null values:
SQL> SELECT sum(salary) AS Tot_Sal, min(salary) AS Min_Sal, max(salary) AS max_sal,
avg(salary) AS avg_sal,
2 count(salary) AS tot_sal
3 FROM employees;
TOT_SAL    MIN_SAL    MAX_SAL    AVG_SAL    TOT_SAL
---------- ---------- ---------- ---------- ----------
691516     2100       24100      6462.76636 107
SQL> SELECT count(commission_pct)
2 FROM employees;
COUNT(COMMISSION_PCT)
---------------------
35
The GROUP BY clause is used to form groups based on a column while using a group function.
Write a query to display the department-wise total salary.
SQL> SELECT department_id, sum(salary)
2 FROM employees
3 GROUP BY department_id;
Using ORDER BY
SQL> SELECT department_id, sum(salary)
2 FROM employees
3 GROUP BY department_id
4 ORDER BY sum(salary) DESC;
Modify the above query to remove the department 100, 101 and null
SQL> SELECT department_id, sum(salary)
2 FROM employees
3 WHERE department_id NOT IN(100,101) AND department_id IS NOT NULL
4 GROUP BY department_id
5 ORDER BY sum(salary) DESC;
Modify the above question to show only those departments whose total salary is
greater than 15000
SQL> SELECT department_id, sum(salary)
2 FROM employees
3 WHERE department_id NOT IN(100,101) AND department_id IS NOT NULL
4 GROUP BY department_id
5 HAVING sum(salary) > 15000
6 ORDER BY sum(salary) DESC;
Note: You can also group the result based on multiple columns:
Display the department-wise total number of employees for each job.
SQL> SELECT department_id, job_id, count(employee_id)
2 FROM employees
3 GROUP BY department_id, job_id
4 ORDER BY department_id, job_id;
SUBQUERIES
A subquery is a query nested inside another query. The inner query executes first and its
result is used by the outer query.
For Example: Display employees of Shipping department
SELECT employee_id, last_name, salary, department_id
FROM employees
WHERE department_id =
(
SELECT department_id
FROM departments
WHERE department_name='Shipping'
)
Example:
Write a query to display employees belonging the city Seattle
SELECT employee_id, last_name, salary, department_id
FROM employees
WHERE department_id IN
(
SELECT department_id
FROM departments
WHERE location_id=
(
SELECT location_id
FROM locations
WHERE city='Seattle'
)
)
Types:
A subquery is of two types:
1. Single row subquery
Here the inner query returns only one row to its parent query.
Here you can use the = operator.
2. Multiple row subquery
Here the inner query returns more than one row to its parent query.
Here you cannot use the = operator; use IN, ANY, or ALL instead.
Joins: help you to combine the columns of two or more tables and display them as a single resultset.
Types:
1. NATURAL JOIN
You can perform joins between two tables only if they have common columns.
In Natural Join, oracle will take the common column to be used for joining.
Syntax:
SELECT <columns of two tables>
FROM <table1> NATURAL JOIN <table2>;
For Example
Write a query to show employee id, last name, department id and department name
for all employees
SQL> SELECT employee_id, last_name, department_id, department_name
2 FROM employees NATURAL JOIN departments;
Restriction: You can perform a natural join between two tables only if the common columns
of both tables match in both data type and name.
SELF JOIN: a table joined to itself using aliases.
Write a query to show the employee name and his manager name.
SQL> SELECT emp.first_name AS employee, mgr.first_name AS manager
2 FROM employees emp JOIN employees mgr
3 ON emp.manager_id=mgr.employee_id
4 ORDER BY emp.first_name;
5. OUTER JOIN
This type of join can show unmatched records also
Types:
a) Left Outer Join
b) Right Outer Join
c) Full Outer Join
SELECT emp.employee_id, emp.last_name, dep.department_name
FROM employees emp LEFT OUTER JOIN departments dep
ON emp.department_id=dep.department_id;
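The RIGHT and FULL variants follow the same pattern (a sketch; a RIGHT OUTER JOIN would keep departments without employees, while FULL OUTER JOIN keeps unmatched rows from both sides):

```sql
SELECT emp.employee_id, emp.last_name, dep.department_name
FROM employees emp FULL OUTER JOIN departments dep
ON emp.department_id = dep.department_id;
```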
Non-Equi join: here the common columns of two tables are joined using non-equality
comparison operator such as <, >, BETWEEN, etc.
Display the employee id, last name and grade of only those employees
whose salary is within the min and max salary of their grade.
SQL> SELECT emp.employee_id, emp.last_name, emp.salary,grd.gradeid
2 FROM employees emp JOIN grades grd
3 ON emp.salary BETWEEN grd.minsal AND grd.maxsal;
SET OPERATORS
used to combine the results of two or more queries and display them as a single
resultset.
Types:
a) UNION ALL
shows all the rows of both queries, including duplicates.
SQL> SELECT x,y FROM testa
2 UNION ALL
3 SELECT a,c FROM testb;
X          Y
---------- ----------------------------------------
1          A
2          B
3          C
1          A
2          B
5          F
6          H

7 rows selected.
b) INTERSECT
shows only the rows common to both queries, removing duplicates.
SQL>
SQL> SELECT x,y FROM testa
2 INTERSECT
3 SELECT a,c FROM testb;
X          Y
---------- ----------------------------------------
1          A
2          B
c) MINUS:
it displays all the rows of the first query that are not present in the second query.
SQL> select * from testa;

X          Y
---------- ----------------------------------------
1          A
2          B
3          C
Database Objects
-----------------------1. Table
2. View
3. Sequence
4. Synonym
5. Index
Table: is used to store data permanently in rows and column format.
Syntax:
CREATE TABLE <tablename>
(
<colname> <datatype>(size),
<colname> <datatype>(size)
);
Rules to follow while naming a database object:
1. Must begin with a letter
2. Be 1 to 30 characters long
3. Can contain A-Z, a-z, 0-9, _, $ and #
4. The object name cannot be duplicated
5. It cannot be a reserved word
Constraint:
a constraint is a rule applied on a column to restrict users from entering invalid data.
Types:
1. NOT NULL
2. CHECK
3. UNIQUE
4. DEFAULT property
5. PRIMARY KEY
6. FOREIGN KEY
NOT NULL: used to make a column required; it cannot be ignored.
CREATE TABLE employee1
(
empCode CHAR(10) NOT NULL,
empName VARCHAR(50) NOT NULL,
joinDate DATE,
salary number(8,2)
);
SQL> desc employee1

Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
EMPCODE                                   NOT NULL CHAR(10)
EMPNAME                                   NOT NULL VARCHAR2(50)
JOINDATE                                           DATE
SALARY                                             NUMBER(8,2)
4. DEFAULT property
By default, when a column is ignored, oracle will place a null value in that column.
The DEFAULT property is used to insert some other value instead of null when the column is
ignored. It can be applied only to nullable columns.
5. PRIMARY KEY
A table can have only 1 primary key. It is used to uniquely identify each row
in a table.
When a column is set as PRIMARY KEY, it also gets NOT NULL and UNIQUE constraints.
CREATE TABLE employee5
(
empCode CHAR(10) PRIMARY KEY,
empName VARCHAR(40) NOT NULL,
gender CHAR(1) default 'M' CHECK(gender IN('M','F'))
);
6. FOREIGN KEY
a foreign key is a column that references the primary key column of some other table.
Dept
------
DeptId DeptName
1      Admin
2      Marketing

Emp
------
EmpId DeptCode EmpName Salary
1001  1        Peter   5000
1002  1        Mary    4500
1003  1        George  2300
1004  1        Peter   3400
SQL> CREATE TABLE tasafdept
2 (
3 deptcode NUMBER PRIMARY KEY,
4 deptname VARCHAR(40) NOT NULL UNIQUE
5 );
Table created.
SQL> CREATE TABLE tasafemp
2 (
3 empcode CHAR(15) PRIMARY KEY,
4 deptid NUMBER REFERENCES tasafdept(deptcode),
5 empname VARCHAR(50)
6 );
Table created.
SQL> insert into tasafdept values(&dcode,'&dname')
2 ;
Enter value for dcode: 10
Enter value for dname: Admin
old 1: insert into tasafdept values(&dcode,'&dname')
new 1: insert into tasafdept values(10,'Admin')
1 row created.
SQL> /
Enter value for dcode: 20
Enter value for dname: Marketing
old 1: insert into tasafdept values(&dcode,'&dname')
new 1: insert into tasafdept values(20,'Marketing')
1 row created.
SQL> insert into tasafemp VALUES('&ecode',&dcode,'&ename');
Enter value for ecode: EMP001
Enter value for dcode: 10
Enter value for ename: Mike
old 1: insert into tasafemp VALUES('&ecode',&dcode,'&ename')
new 1: insert into tasafemp VALUES('EMP001',10,'Mike')
1 row created.
SQL> /
Enter value for ecode: EMP002
Enter value for dcode: 10
Enter value for ename: Peter
old 1: insert into tasafemp VALUES('&ecode',&dcode,'&ename')
new 1: insert into tasafemp VALUES('EMP002',10,'Peter')
1 row created.
SQL> /
Enter value for ecode: EMP003
Enter value for dcode: 20
Enter value for ename: Ally
old 1: insert into tasafemp VALUES('&ecode',&dcode,'&ename')
new 1: insert into tasafemp VALUES('EMP003',20,'Ally')
1 row created.
SQL> SELECT * FROM tasafdept;
DEPTCODE DEPTNAME
-------- ----------------------------------------
10       Admin
20       Marketing
SQL> select * from tasafemp;
EMPCODE         DEPTID EMPNAME
--------------- ------ ----------------
EMP001          10     Mike
EMP002          10     Peter
EMP003          20     Ally
To Drop a table:
DROP TABLE <tablename>;
VIEWS
----------
A view helps to restrict users from viewing sensitive information of a table. A view does not
have data of its own.
Syntax:
CREATE VIEW <viewname>
AS <select statement>;
Types:
1. Simple View
Is one where only 1 table is used as the base table.
SQL> CREATE VIEW myv1
2 AS SELECT employee_id, department_id, last_name, salary
3 FROM employees;
You can perform all DML operations on the table through the view if it is a simple view.
2. Complex View
is one that uses joins, subqueries, or group functions.
SQL> CREATE VIEW myv3
2 AS SELECT employee_id, last_name, salary, department_id
3 FROM employees
4 WHERE department_id IN
5 (
6 SELECT department_id
7 FROM departments
8 WHERE location_id=
9 (
10 SELECT location_id
11 FROM locations
12 WHERE city='Seattle'
13 ));
SQL> CREATE VIEW myv4
2 AS SELECT department_id, count(employee_id) AS total_emp
3 FROM employees
4 GROUP BY department_id
5 ORDER BY department_id;
To Drop a view
DROP VIEW <viewname>;
3. SEQUENCE
used to generate numbers.
Syntax:
CREATE SEQUENCE <seqname>;
SYNONYM
----------------
A synonym is used to give another name to a database object.
SQL> CREATE SYNONYM mike FOR employees;
INDEX
-------
Indexes are used to retrieve data faster.
To create an index:
CREATE INDEX <indname> ON <tablename>(colname);
Apply an index on a column only if:
1. the table is very large, or
2. the column holds a large number of unique values.
External Tables
Creating a Directory for the External Table
Create a DIRECTORY object that corresponds to the
directory on the file system where the external data
source resides.
CREATE OR REPLACE DIRECTORY emp_dir
AS '/.../emp_dir';
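A sketch of the external table itself (the column list, the comma delimiter and the file name emp.csv are assumptions; the table reads the file in emp_dir instead of storing rows itself):

```sql
CREATE TABLE emp_ext
(
empcode  VARCHAR2(10),
empname  VARCHAR2(40)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY emp_dir
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
)
LOCATION ('emp.csv')
);
```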
TRUNCATE: this command deletes all the rows from a table and cannot be rolled back.
SQL> TRUNCATE TABLE testb;
INSERT ALL (unconditional): every selected row is inserted into every listed table.
INSERT ALL
INTO sal_history VALUES(EMPID,HIREDATE,SAL)
INTO mgr_history VALUES(EMPID,MGR,SAL)
SELECT employee_id EMPID, hire_date HIREDATE,
salary SAL, manager_id MGR
FROM employees
WHERE employee_id > 200;
INSERT ALL (conditional): a row is inserted only WHEN its condition is true.
INSERT ALL
WHEN <condition> THEN
INTO mgr_history VALUES(EMPID,MGR,SAL)
SELECT employee_id EMPID, hire_date HIREDATE,
salary SAL, manager_id MGR
FROM employees
WHERE employee_id > 200;
Pivoting INSERT
Suppose you receive a set of sales records from a
nonrelational database table,
SALES_SOURCE_DATA, in the following format:
EMPLOYEE_ID, WEEK_ID, SALES_MON, SALES_TUE,
SALES_WED, SALES_THUR, SALES_FRI
You want to store these records in the
SALES_INFO table in a more typical relational
format:
EMPLOYEE_ID, WEEK, SALES
Using a pivoting INSERT, convert the set of sales
records from the nonrelational database table to
relational format.
Pivoting INSERT
INSERT ALL
INTO sales_info VALUES (employee_id,week_id,sales_MON)
INTO sales_info VALUES (employee_id,week_id,sales_TUE)
INTO sales_info VALUES (employee_id,week_id,sales_WED)
INTO sales_info VALUES (employee_id,week_id,sales_THUR)
INTO sales_info VALUES (employee_id,week_id, sales_FRI)
SELECT EMPLOYEE_ID, week_id, sales_MON, sales_TUE,
sales_WED, sales_THUR,sales_FRI
FROM sales_source_data;
Merging Rows
Insert or update rows in the EMPL3 table to match the
EMPLOYEES table.
MERGE INTO empl3 c
USING employees e
ON (c.employee_id = e.employee_id)
WHEN MATCHED THEN
UPDATE SET
c.first_name = e.first_name,
c.last_name = e.last_name,
...
c.department_id = e.department_id
WHEN NOT MATCHED THEN
INSERT VALUES(e.employee_id, e.first_name, e.last_name,
e.email, e.phone_number, e.hire_date, e.job_id,
e.salary, e.commission_pct, e.manager_id,
e.department_id);
Merging Rows
TRUNCATE TABLE empl3;
SELECT *
FROM empl3;
no rows selected
MERGE INTO empl3 c
USING employees e
ON (c.employee_id = e.employee_id)
WHEN MATCHED THEN
UPDATE SET
...
WHEN NOT MATCHED THEN
INSERT VALUES...;
SELECT *
FROM empl3;
Correlated Subqueries
The subquery references a column from a table in the
parent query.
SELECT column1, column2, ...
FROM table1 outer
WHERE column1 operator
(SELECT column1, column2
FROM table2
WHERE expr1 = outer.expr2);
Correlated UPDATE
Use a correlated subquery to update rows in one table
based on rows from another table.
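A sketch of a correlated UPDATE (assumes the EMPL3 copy of EMPLOYEES from the MERGE example; each EMPL3 row is updated from its matching EMPLOYEES row):

```sql
UPDATE empl3 c
SET c.salary = (SELECT e.salary
                FROM employees e
                WHERE e.employee_id = c.employee_id);
```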
Correlated DELETE
Use a correlated subquery to delete rows in one table
based on rows from another table.
DELETE FROM table1 alias1
WHERE column operator
(SELECT expression
FROM
table2 alias2
WHERE alias1.column = alias2.column);
USER MANAGEMENT
-----------------------------
Database: a database is a collection of logical storage units called Tablespaces.
USERS (default)
EXAMPLE
SYSTEM
SYSAUX
TEMP (default)
UNDO
A Tablespace is of 3 types:
1. PERMANENT: stores data permanently
2. TEMPORARY: stores data temporarily
3. UNDO: stores uncommitted data
To create a new user:
CREATE USER <username> IDENTIFIED BY <password>;
ROLES : a role is a group of privileges that can be granted to/revoked from the user.
To create a role:
CREATE ROLE <rolename>;
To drop a role:
DROP ROLE <rolename>;
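A sketch of the full role workflow (the role name clerk and the user peter are examples):

```sql
-- Create the role, give it privileges, then grant it to a user
CREATE ROLE clerk;
GRANT SELECT, INSERT ON hr.jobs TO clerk;
GRANT clerk TO peter;
```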
Object Privileges
-------------------------
Syntax:
GRANT <privname>
ON <objname>
TO <username>;
SQL> GRANT SELECT, INSERT
2 ON HR.jobs
3 TO peter;
SQL> INSERT INTO hr.jobs VALUES('DBA','Database Administrator',6000,12000);
To revoke an object privilege:
REVOKE <privname> ON <objname> FROM <username>;
SQL*Plus line editing (l lists a line of the buffer; c /old/new/ changes text in it):
SQL> l1
1* create user ally
SQL> c /ally/mohd/
1* create user mohd
SQL> /
SQL> select * from testb;

A          B         C
---------- --------- ----------------------------------------
1          09-SEP-15 A
2          09-SEP-15 B
5          09-SEP-15 F
6          09-SEP-15 H

SQL> SELECT a,c FROM testb
2 MINUS
3 SELECT x,y FROM testa;

A          C
---------- ----------------------------------------
5          F
6          H
CREATE TABLE employee
(
empCode CHAR(5),
empName VARCHAR(40),
joinDate DATE,
salary NUMBER(8,2)
);
To insert a new row:
INSERT INTO employee VALUES('EMP001','PETER',sysdate,4500);
OR
INSERT INTO employee (salary,empName, empCode, joinDate)
VALUES(3000,'Peter','EMP02',sysdate);
To ignore the columns
INSERT INTO employee (empCode, empName) VALUES ('EMP03','John');
When a column is ignored, oracle will place a NULL in the ignored column.
You can also ignore a column by explicitly providing the null keyword.
INSERT INTO employee VALUES(null,null,null,null);
Data Manipulation Language
INSERT
DELETE
UPDATE
SQL> insert into employee1 (empCode, joindate, salary)
2 values('EMP001',sysdate,3000);
insert into employee1 (empCode, joindate, salary)
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("HR"."EMPLOYEE1"."EMPNAME")
Note: Any number of columns in a table can have the NOT NULL constraint.
2. CHECK
is used to allow users to enter only specific values as per the given condition.
CREATE TABLE employee2
(
empCode CHAR(10) NOT NULL,
empName VARCHAR(40) NOT NULL,
gender CHAR(1) NOT NULL CHECK(gender IN('M','F')),
joinDate DATE,
salary NUMBER(8,2)
);
4. DEFAULT property
By default when a column is ignored, oracle will place a null value to that column.
The default property is used to insert some other value other than null when it is
ignored. It can be applied only to nullable columns.
SQL> CREATE TABLE employee4
2 (
3 empCode CHAR(10) NOT NULL UNIQUE,
4 empName VARCHAR(40) NOT NULL,
5 joindate DATE DEFAULT sysdate,
6 gender CHAR(1) DEFAULT 'M' CHECK (gender IN('M','F'))
7 );
SQL> insert into employee4 (empCode, empName) VALUES ('EMP002','Peter');
1 row created.
SQL> select * from employee4;

EMPCODE    EMPNAME                                  JOINDATE  G
---------- ---------------------------------------- --------- -
EMP001     Mike                                     10-JAN-15 M
EMP002     Peter                                    09-SEP-15 M
SQL> SELECT * FROM tasafdept;

DEPTCODE DEPTNAME
-------- ----------------------------------------
10       Admin
20       Marketing

SQL> select * from tasafemp;

EMPCODE         DEPTID EMPNAME
--------------- ------ ----------------
EMP001          10     Mike
EMP002          10     Peter
EMP003          20     Ally
SQL> insert into tasafemp values('EMP004',40,'John');
insert into tasafemp values('EMP004',40,'John')
*
ERROR at line 1:
ORA-02291: integrity constraint (HR.SYS_C0011137) violated - parent key not
found
To enable/disable constraints:
SQL> ALTER TABLE employee
2 ENABLE CONSTRAINT FK_EMPLOYEE_DEPT;
SQL> SELECT constraint_name, constraint_type, status
2 FROM user_constraints
3 WHERE table_name='EMPLOYEE';
CONSTRAINT_NAME                C STATUS
------------------------------ - --------
UNQ_EMPLOYEE_EMAIL             U ENABLED
PK_EMPLOYEE_EMPCODE            P ENABLED
FK_EMPLOYEE_DEPT               R ENABLED
SQL> ALTER TABLE employee DISABLE CONSTRAINT UNQ_EMPLOYEE_EMAIL;
Table altered.
SQL> SELECT constraint_name, constraint_type, status
2 FROM user_constraints
3 WHERE table_name='EMPLOYEE';
CONSTRAINT_NAME                C STATUS
------------------------------ - --------
UNQ_EMPLOYEE_EMAIL             U DISABLED
PK_EMPLOYEE_EMPCODE            P ENABLED
FK_EMPLOYEE_DEPT               R ENABLED
3. SEQUENCE
used to generate numbers.
Syntax:
CREATE SEQUENCE <seqname>;
SQL> CREATE SEQUENCE myseq1;
SQL> SELECT myseq1.NEXTVAL from dual;
SQL> SELECT myseq1.CURRVAL FROM dual;
SQL> CREATE SEQUENCE myseq2
2 START WITH 1000;
SQL> CREATE SEQUENCE myseq3
2 START WITH 1000
3 INCREMENT BY 2;
SQL> CREATE SEQUENCE myseq5
2 MAXVALUE 20
3 CYCLE NOCACHE;
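A sequence is typically used to fill a primary key column (a sketch using the tasafdept table from the constraints section; 'Finance' is an example value):

```sql
-- NEXTVAL fetches the next generated number and uses it as the key
INSERT INTO tasafdept
VALUES (myseq1.NEXTVAL, 'Finance');
```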
[Query output omitted: a long listing of EMPLOYEE_ID, LAST_NAME, MANAGER_ID and SALARY values for employees 102-197, followed by a few HIRE_DATE rows and SA_REP rows from department 80.]
24-MAR-05
20-AUG-05
30-MAR-06
09-DEC-06
10-MAR-05
HIRE_DATE
HIRE_DATE
53 rows selected.
SQL>
employees
GROUP BY department_id;
Pivoting INSERT
Suppose you receive a set of sales records from a
nonrelational database table,
SALES_SOURCE_DATA, in the following format:
EMPLOYEE_ID, WEEK_ID, SALES_MON, SALES_TUE,
SALES_WED, SALES_THUR, SALES_FRI
You want to store these records in the
SALES_INFO table in a more typical relational
format:
EMPLOYEE_ID, WEEK, SALES
Using a pivoting INSERT, convert the set of sales
records from the nonrelational database table to
relational format.
INSERT ALL
INTO sales_info VALUES (employee_id,week_id,sales_MON)
INTO sales_info VALUES (employee_id,week_id,sales_TUE)
INTO sales_info VALUES (employee_id,week_id,sales_WED)
INTO sales_info VALUES (employee_id,week_id,sales_THUR)
INTO sales_info VALUES (employee_id,week_id, sales_FRI)
SELECT EMPLOYEE_ID, week_id, sales_MON, sales_TUE,
sales_WED, sales_THUR,sales_FRI
FROM sales_source_data;
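For the pivoting INSERT above to work, the target table needs one row per employee per week per day. A sketch of the two table shapes (the column types are assumptions):

```sql
-- Source: one row per employee per week, one column per weekday
CREATE TABLE sales_source_data (
  employee_id NUMBER(6),
  week_id     NUMBER(2),
  sales_mon   NUMBER(8,2),
  sales_tue   NUMBER(8,2),
  sales_wed   NUMBER(8,2),
  sales_thur  NUMBER(8,2),
  sales_fri   NUMBER(8,2)
);

-- Target: one row per employee per week per day
CREATE TABLE sales_info (
  employee_id NUMBER(6),
  week        NUMBER(2),
  sales       NUMBER(8,2)
);
```

Each source row thus fans out into five target rows, one per INTO clause.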
Merging Rows
Insert or update rows in the EMPL3 table to match the
EMPLOYEES table.
...
  dup.department_id = emp.department_id
WHEN NOT MATCHED THEN
  INSERT VALUES(emp.employee_id, emp.first_name,
    emp.last_name, emp.email, emp.phone_number,
    emp.hire_date, emp.job_id, emp.salary,
    emp.commission_pct, emp.manager_id,
    emp.department_id);
Merging Rows
TRUNCATE TABLE empl3;
SELECT *
FROM empl3;
no rows selected
MERGE INTO empl3 c
USING employees e
ON (c.employee_id = e.employee_id)
WHEN MATCHED THEN
UPDATE SET
...
WHEN NOT MATCHED THEN
INSERT VALUES...;
SELECT *
FROM empl3;
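Filling in the elided UPDATE and INSERT clauses, a complete MERGE against the HR schema might look like this (a sketch; only a few columns are updated for brevity, and EMPL3 is assumed to have the same column layout as EMPLOYEES):

```sql
MERGE INTO empl3 c
USING employees e
ON (c.employee_id = e.employee_id)
WHEN MATCHED THEN
  -- Row already exists in EMPL3: refresh its values
  UPDATE SET
    c.first_name = e.first_name,
    c.last_name  = e.last_name,
    c.salary     = e.salary
WHEN NOT MATCHED THEN
  -- Row missing from EMPL3: copy it over whole
  INSERT VALUES (e.employee_id, e.first_name, e.last_name,
                 e.email, e.phone_number, e.hire_date, e.job_id,
                 e.salary, e.commission_pct, e.manager_id,
                 e.department_id);
```

Because EMPL3 was just truncated, every source row falls into the WHEN NOT MATCHED branch on the first run; a second run would take the WHEN MATCHED branch instead.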
Example of nonpairwise comparison:
SELECT ...
FROM employees
WHERE manager_id IN
      (SELECT manager_id
       FROM employees
       WHERE UPPER(first_name) = 'JOHN')
  AND department_id IN
      (SELECT department_id
       FROM employees
       WHERE UPPER(first_name) = 'JOHN');
Example of pairwise comparison:
SELECT employee_id, manager_id, department_id
FROM employees
WHERE (manager_id, department_id) IN
      (SELECT manager_id, department_id
       FROM employees
       WHERE employee_id IN (199, 174))
  AND employee_id NOT IN (199, 174);
Correlated Subqueries
The subquery references a column from a table in the
parent query.
SELECT column1, column2, ...
FROM table1 outer
WHERE column1 operator
      (SELECT column1, column2
       FROM table2
       WHERE expr1 = outer.expr2);
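A concrete correlated subquery following that pattern — employees earning more than the average salary of their own department (the alias name matches the syntax sketch above):

```sql
SELECT last_name, salary, department_id
FROM employees outer
WHERE salary >
      -- re-evaluated for each outer row, using that row's department_id
      (SELECT AVG(salary)
       FROM employees
       WHERE department_id = outer.department_id);
```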
Correlated UPDATE
Use a correlated subquery to update rows in one table
based on rows from another table.
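No correlated UPDATE example survives in these notes; a common sketch is denormalizing a department name into an employee copy table (this assumes EMPL6 has been given a DEPARTMENT_NAME column for the purpose):

```sql
UPDATE empl6 e
SET department_name =
    -- subquery runs once per EMPL6 row, matched by department_id
    (SELECT department_name
     FROM departments d
     WHERE d.department_id = e.department_id);
```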
Correlated DELETE
Use a correlated subquery to delete rows in one table
based on rows from another table.
DELETE FROM table1 alias1
WHERE column operator
      (SELECT expression
       FROM table2 alias2
       WHERE alias1.column = alias2.column);
Example: delete rows from the EMPL6 table that also exist in the EMP_HISTORY table.
DELETE FROM empl6 E
WHERE employee_id =
      (SELECT employee_id
       FROM emp_history
       WHERE employee_id = E.employee_id);
USER MANAGEMENT
----------------------------
A database is a collection of logical storage units called tablespaces.
USERS (default)
EXAMPLE
SYSTEM
SYSAUX
TEMP (default)
UNDO
A tablespace is of 3 types:
1. PERMANENT: stores data permanently
2. TEMPORARY: stores data temporarily (for example, sort segments)
3. UNDO: stores undo data used to roll back uncommitted changes
To create a new user:
CREATE USER <username>
IDENTIFIED BY <password>;
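A newly created user cannot even log in until it is granted system privileges. A minimal sketch (the username and the exact privileges granted are assumptions):

```sql
CREATE USER peter IDENTIFIED BY oracle;

-- System privileges: the right to perform an action in the database,
-- as opposed to object privileges on another user's objects
GRANT CREATE SESSION, CREATE TABLE TO peter;
```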
2. OBJECT PRIVILEGE
allows one user to access and manipulate objects belonging to another user.
Grant succeeded.
SQL> conn peter/oracle
Connected.
SQL> create table emp
2 (
3 empcode number
4 );
Table created.
SQL> insert into emp values(1000);
insert into emp values(1000)
*
ERROR at line 1:
ORA-01950: no privileges on tablespace 'USERS'
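The ORA-01950 error above means the user owns no quota on the USERS tablespace, so the insert cannot allocate space there. As a privileged user, the fix is a quota grant (the 10M size shown is an assumption):

```sql
ALTER USER peter QUOTA 10M ON users;
-- or, to remove the limit entirely:
ALTER USER peter QUOTA UNLIMITED ON users;
```

After this, the earlier INSERT INTO emp VALUES(1000); succeeds.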
ROLES : a role is a group of privileges that can be granted to/revoked from the user.
To create a role:
CREATE ROLE <rolename>;
To drop a role:
DROP ROLE <rolename>;
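Putting the role commands together — privileges are bundled into a role and the role is granted to a user as one unit (the role and user names here are assumptions):

```sql
CREATE ROLE clerk;
GRANT CREATE SESSION, CREATE TABLE TO clerk;

-- One grant gives the user the whole bundle
GRANT clerk TO peter;

-- And one revoke takes it all back
REVOKE clerk FROM peter;
DROP ROLE clerk;
```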
Object Privileges
------------------------
Syntax:
GRANT <privname>
ON <objname>
TO <username>;
SQL> GRANT SELECT, INSERT
2 ON HR.jobs
3 TO peter;
SQL> INSERT INTO hr.jobs VALUES('DBA','Database Administrator',6000,12000);
To revoke an object privilege:
SQL> REVOKE select
2 ON hr.jobs
3 FROM peter;
SQL> CREATE USER jerry
2 IDENTIFIED BY oracle
3 DEFAULT TABLESPACE example
4 QUOTA UNLIMITED ON example;
User created.
3 SIZE 100M;
Tablespace created.
SQL> ALTER TABLESPACE PurchaseTbs
2 ADD DATAFILE '/u01/app/oracle/oradata/orcl/purchase02.dbf'
3 SIZE 10M;
Tablespace altered.
SQL> create user ally
2 identified by oracle
3 default tablespace purchasetbs
4 quota 40M ON purchasetbs;
create user ally
*
ERROR at line 1:
ORA-01920: user name 'ALLY' conflicts with another user or role name
SQL> l1
1* create user ally
SQL> c /ally/mohd/
1* create user mohd
SQL> /
To delete only the logical structure from the database (the datafiles remain on disk):
SQL> DROP TABLESPACE purchasetbs INCLUDING CONTENTS;
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> startup
To shutdown the instance:
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@rajiv Desktop]$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://rajiv.oracle:1158/em/console/aboutApplication
tnsnames.ora
listener.ora
/u01/app/oracle/product/11.2.0.4/db_1/network/admin
The SGA (System Global Area) is a shared memory area used by all the background processes.
The PGA (Program Global Area) is private to a single server process; it is not shared.
Mounting the database means associating a database with an instance.
STARTUP
--NOMOUNT
--MOUNT
--OPEN
To start the database in the NOMOUNT state:
SQL> STARTUP NOMOUNT;
1. Oracle will look for a parameter file called spfile<SID>.ora (spfileORCL.ora). If this file is not found, it will then look for another file called spfile.ora. If that is also not found, it will look for init<SID>.ora (initORCL.ora):
/u01/app/oracle/product/11.2.0.4/db_1/dbs
2. After reading the parameter file, then it will allocate the SGA (Shared Memory Area).
3. Start up the background processes.
4. Open the ALERT LOG file & trace files.
WHY?
1. For database creation
2. For re-creation of control files
3. For certain backup and recovery operations
To start the database in the MOUNT state:
If your db is in IDLE state:
STARTUP MOUNT
If the db is in NOMOUNT state:
SQL> ALTER DATABASE MOUNT;
1. Oracle associates the database with a previously started instance.
2. It will read the control files. After reading the control files, it will check the status of the online redo log files and database files from these control files.
WHY:
1. To rename the database file
2. To perform certain backup and recovery operation.
3. Enabling/disabling Archiving (online redo log files)
WHY:
1. To enable the database users to connect to the instance.
SHUTDOWN MODES
------------------------------
SHUTDOWN NORMAL
1. No new connections will be allowed to be made.
2. Oracle will wait for the users to disconnect their sessions.
3. When all the user processes are terminated, the database and redo buffers will be written to disk.
4. The background processes will be terminated and the SGA will be removed from the instance.
5. The database instance will then be dismounted after closing all the control files, redo log files and database files.
6. The next startup will not require any instance recovery.
SHUTDOWN TRANSACTIONAL
1. No new connections will be allowed to be made.
2. The users who are not running any transactions will get immediately disconnected.
3. Oracle will wait for the running transactions to complete. Once a transaction finishes, the user will get disconnected.
4. No new transactions will be allowed to be started.
5. When all the user processes are terminated, Oracle will initiate the shutdown process.
6. The next startup will not require any instance recovery.
SHUTDOWN IMMEDIATE
1. No new connections will be allowed to be made.
2. The users who are not running any transactions will get immediately disconnected.
3. The currently running transactions will be rolled back and the sessions will be disconnected immediately.
4. When all the user processes are terminated, Oracle will initiate the shutdown process.
5. The next startup will not require any instance recovery.
SHUTDOWN ABORT
1. Oracle will not wait for user processes to terminate.
2. The transactions will not get rolled back.
3. The database and redo buffers will not be written to disk.
4. Oracle will terminate the instance without closing or dismounting the database.
5. The next startup will require instance recovery.
To take a backup using OEM:
1. Login to OEM as sys
2. Click on Availability tab
To Perform Recovery
Non-Critical Recovery
If a Temp file is lost or damaged.
Sol:
You do not require a recovery file to recover a temp file.
You need to recreate the temp file.
Method 1:
shutdown and restart the database instance
Method 2: recreate the temp file
SQL> ALTER TABLESPACE TEMP
2 ADD TEMPFILE '/u01/app/oracle/oradata/orcl/temp02.dbf' SIZE 50M;
Tablespace altered.
SQL> ALTER TABLESPACE TEMP
2 DROP TEMPFILE '/u01/app/oracle/oradata/orcl/temp01.dbf';
Tablespace altered.
STEPS:
1. Connect to OEM as sys
2. Click Availability Tab.
3. Under the Manage region, click the Perform Recovery link
4. Select the Recovery Scope as Datafiles
5. Select the operation type Recover to Current Time
6. Enter the host credentials. Click the Recover button
7. In step 1, select the datafile to be recovered. Click Next
8. In step 2, decide whether you want the recovered file restored to a new location or to the default. Click Next
9. In the review step, review the RMAN scripts and click Submit.
MANAGING SCHEMA AND ITS OBJECTS
------------------------------------------------------
To create a database object:
1. Click on the schema tab
To perform export:
1. Click Data Movement Tab
2. Under the Move Row Data region, click Export to Export Files
3. Select the export type as Schema
4. Enter the host credentials
5. Click Continue to start the Data Pump Wizard
6. Click the Add button in step 1 and select the HR schema to be exported and click select
7. Click Next
8. Under Optional File region, change the dir object to DATA_PUMP_DIR and type the log file
name and click next
9. In next step, change the directory object to DATA_PUMP_DIR, change the file name to have a
proper format and click next.
10. In the schedule step, type the job name and description. Click next
11. Click Finish to complete the job. Click on the job link to monitor the scheduled job.
To perform import
1. Click Data Movement Tab
2. Under the Move Row Data region, click Import from Export Files
3. Select the directory object as DATA_PUMP_DIR and enter the DMP file name in the file name field
4. Select the import type as Schema and enter the host credentials.
5. In step 1, add the schema to be imported. Select the schema and click Select, then Next.
6. Under Re-Map Schemas, click Add Another Row. Ensure that the source and destination schemas are the same. Click Next.
7. In this step, select the directory object as DATA_PUMP_DIR and enter the log file name. Click Next.
8. In this step, type the job name and description. Leave the schedule at the default and click Next.
To import:
[oracle@rajiv dpdump]$ impdp testemp/oracle DIRECTORY=myexport
DUMPFILE=hremp1.DMP REMAP_SCHEMA=hr:testemp TABLES=hr.employees
LOGFILE=empimport1.log
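The DMP file imported above would have been produced by a matching expdp run; a sketch of such an export (the directory object and file names mirror the import example, but the exact parameters are assumptions):

```
expdp hr/oracle DIRECTORY=myexport DUMPFILE=hremp1.DMP TABLES=employees LOGFILE=empexport1.log
```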
Database Auditing
---------------------------
To enable auditing:
1. Connect to OEM as sys
2. Click the Server tab
3. Under the Database Configuration region, click the Initialization Parameters link.
Background: You need to create a user account for Jenny Goodman, the new human
resources department manager. There are also two new clerks in the human resources
department, David Hamby and Rachel Pandya. All three must be able to log in to the
orcl database and to select data from, and update records in, the HR.EMPLOYEES
table. The manager also needs to be able to insert and delete new employee records.
Ensure that if the new users forget to log out at the end of the day, they are automatically
logged out after 15 minutes. You also need to create a new user account for the inventory
application that you are installing.
In this practice, you create the INVENTORY user to own the new Inventory application.
You create a profile to limit the idle time of users. If a user is idle or forgets to log out
after 15 minutes, the user session is ended.
sqlplus / as sysdba << EOF
drop user inventory cascade;
create user inventory identified by verysecure;
EOF
Create the role named HRCLERK with SELECT and UPDATE permissions on the
HR.EMPLOYEES table.
a) Click the Server tab and then click Roles in the Security section.
b) Click the Create button.
c) Enter HRCLERK in the Name field. This role is not authenticated.
d) Click Object Privileges tab.
e) Select Table from the Select Object Type drop-down list, and then click Add.
f) Enter HR.EMPLOYEES in the Select Table Objects field.
g) Move the SELECT and UPDATE privileges to the Selected Privileges box. Click
OK.
h) Click the Show SQL button, and review your underlying SQL statement.
i) Click Return, and then click OK to create the role.
Create the role named HRMANAGER with INSERT and DELETE permissions on the
HR.EMPLOYEES table. Grant the HRCLERK role to the HRMANAGER role.
a) Click the Server tab, and then click Roles in the Security section.
b) Click Create.
c) Enter HRMANAGER in the Name field. This role is not authenticated.
d) Click Object Privileges tab.
e) Select Table from the Select Object Type drop-down list, and then click Add.
f) Enter HR.EMPLOYEES in the Select Table Objects field.
g) Move the INSERT and DELETE privileges to the Selected Privileges box. Click
OK.
h) Click the Roles tab, and then click Edit List.
i) Move the HRCLERK role into the Selected Roles box, and then click OK.
j) Click the Show SQL button, and review your underlying SQL statement.
k) Click Return, and then click OK to create the role.
MANAGING SCHEMA
1. run the lab_09_01.sql script. This script first creates the users (smavris and
ngreenberg) that are involved in this practice and the hremployee role that will
give these new users access to the hr.employee table. It then logs in to SQL*Plus
as the ngreenberg user and performs an update on the hr.employee table. The
script does not perform a commit, leaving the update uncommitted in this session.
Leave this session connected in the state that it is currently. Do not exit at this
time.
Notice that this session appears to be hung. Leave this session as is and move on
to the next step.
Using Enterprise Manager, click the Blocking Sessions link on the Performance page
and detect which session is causing the locking conflict.
a) In Enterprise Manager, click the Performance page.
b) Select the NGREENBERG session, and then click View Session.
c) Click the hash value link named Previous SQL.
d) Note the SQL that was most recently run.
e) Click the browser's Back button.
f) Now, on the Session Details: NGREENBERG page, click Kill Session.
g) Leave the Options set to Kill Immediate, and then click Show SQL to see the statement that is going to be executed to kill the session.
h) Click Return, and then click Yes to carry out the KILL SESSION command.
Return to the SQL*Plus command window, and note that SMAVRIS's update has now completed successfully. It may take a few seconds for the success message to appear.
AUDITING
Background: You have just been informed of suspicious activities in the HR.JOBS table
in your orcl database. The highest salaries seem to fluctuate in a strange way. You
decide to enable standard database auditing and monitor data manipulation language
(DML) activities in this table.
Log in as the DBA1 user (with oracle password, connect as SYSDBA) and perform the
necessary tasks either through Enterprise Manager Database Control or through
SQL*Plus. All scripts for this practice are in the /home/oracle/labs directory.
1) Use Enterprise Manager to enable database auditing. Set the AUDIT_TRAIL
parameter to XML.
a) Invoke Enterprise Manager as the DBA1 user in the SYSDBA role for your orcl
database.
b) Click the Server tab, and then click Audit Settings in the Security section.
c) Click the value of Audit Trail, the DB link.
d) On the Initialization Parameters page, click the SPFile tab.
2) Because you changed a static parameter, you must restart the database. Do so by
running the lab_11_02.sh script.
a) In a terminal window, enter:
b) Continue with the next step when you see that the database is restarted and the
script has exited SQL*Plus.
3) Back in Enterprise Manager, select HR.JOBS as the audited object and DELETE,
INSERT, and UPDATE as Selected Statements. Gather audit information by session.
Because the database has been restarted, you have to log in to Enterprise Manager
again as the DBA user.
a) Click logout in the upper-right corner of the Enterprise Manager window.
b) Log in as the DBA1 user in the SYSDBA role for your orcl database.
c) Click the Database home page tab to ensure that Enterprise Manager had time to
update the status of the database and its agent connections.
d) Click the Server tab, and then click Audit Settings in the Security section.
e) Click the Audited Objects tab at the bottom of the page, and then click the Add
button.
f) On the Add Audited Object page, ensure that the Object Type is Table, and enter
HR.JOBS in the Table field (or use the flashlight icon to retrieve this table).
g) Move DELETE, INSERT, and UPDATE into the Selected Statements area by
double-clicking each of them.
h) Click Show SQL.
i) Review the statement, and then click Return.
j) Click OK to activate this audit.
4) Provide input for the audit, by executing the lab_11_04.sh script. This script
creates the AUDIT_USER user, connects to SQL*Plus as this user, and multiplies the
values in the MAX_SALARY column by 10. Then the HR user connects and divides
the column values by 10. Finally, the AUDIT_USER user is dropped again.
a) In a terminal window, enter:
5) In Enterprise Manager, review the audited objects.
a) Click the Server tab, and then click Audit Settings in the Security section.
b) Click Audited Objects in the Audit Trails area, which is on the right side of the
page.
c) On the Audited Objects page, review the collected information.
Question: Can you tell which user increased and which user decreased the
salaries?
Answer: No, the standard audit records only show which user accessed the table.
d) Click Return.
6) Undo your audit settings for HR.JOBS, disable database auditing, and then restart the
database by using the lab_11_06.sh script.
a) On the Audit Settings page, click the Audited Objects tab at the bottom of the
page.
b) Enter HR as Schema, and then click Search.
c) Select all three rows, and then click Remove.
d) On the Confirmation page, click Show SQL.
e) Review the statements, and then click Yes to confirm your removal.
f) On the Audit Settings page, click XML in the Configuration region.
g) On the Initialization Parameters page, click the SPFile tab.
h) On the SPFile page, enter audit in the Name field, and then click Go.
i) For the audit_trail parameter, select the DB value.
j) Click Show SQL.
k) Review the statement, and then click Return.
l) On the Initialization Parameters page, click Apply.
m) Because you changed a static parameter, you must restart the database. Do so by
running the lab_11_06.sh script. In a terminal window, enter:
7) Maintain your audit trail: Because you are completely finished with this task, back up and delete all audit files from the /u01/app/oracle/admin/orcl/adump directory.
a) In a terminal window, enter:
b) Create a backup of the audit trail files, and then remove the files
c) Close the terminal window.
This answer does not apply to an OMF database because the control files in
that case would have to all be re-created.
Alternatively, if you did not want to use Enterprise Manager to perform the
steps, you could perform the steps outlined in the Multiplexing Control Files
slide in the Backup and Recover Concepts lesson.
c) Click Backup to Trace.
d) When you receive the success message, note the trace directory location, and then
click OK.
e) Optionally, use a terminal window, logged in as the oracle user to view the
trace file name at the end of the alert log by executing the following command:
cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace
tail alert_orcl.log
The following output shows only the last few lines:
f) Optionally, to view size and usage of the different sections within the control file,
click the Record Section tabbed page.
Your numbers could look different. For additional information, click Help in the
upper-right corner of the page.
2) Review the flash recovery area configuration and change the size to 8 GB.
a) In Enterprise Manager, select Availability > Recovery Settings in the Setup
section.
b) Scroll to the bottom of the page.
c) Question: Is the flash recovery area enabled?
Answer: Yes, by default.
d) Note the location of the flash recovery area.
For example: /u01/app/oracle/flash_recovery_area
e) Question: Which essential DBA tasks can you perform in this section?
Answer: You can change the location, size or retention time for the flash recovery
area, as well as enable the Flashback Database functionality.
f) Question: Does changing the size of the flash recovery area require the database
to be restarted?
Answer: No, a restart is not required for this change.
g) Change the size of the Flash Recovery Area to 8 GB, by entering 8 into the
Flash Recovery Area Size field.
h) Optionally, click Show SQL, review the statement and click Return.
i) Click Apply.
3) Check how many members each redo log group has. Ensure that there are at least two
redo log members in each group. One set of members should be stored in the flash
recovery area.
a) Click Server > Redo Log Groups, and note how many members are in the # of
Members column.
Answer: There is only one member in each group.
b) To add a member to each group, perform the following steps for each group:
i) Select the group (for example, 1) and click the Edit button.
ii) On the Edit Redo Log Group page, note the File Name, for example
redo01.log and click the Add button.
iii) On the Edit Redo Log Group: Add Redo Log Member page, enter a file name
by adding the letter b to the end of the name (before the dot). For example,
enter redo01b.log as File Name and enter your flash recovery area, for
example, /u01/app/oracle/flash_recovery_area/ as File
Directory.
Now that your database is in ARCHIVELOG mode, it will continually archive a copy
of each online redo log file before reusing it for additional redo data.
Note: Remember that this consumes space on the disk and that you must regularly
back up older archive logs to some other storage.
This backup should be the base for an incremental backup strategy.
a) Question: What prerequisite must be met to create a valid backup of a database
without shutting it down?
Answer: The database must be in ARCHIVELOG mode. Backups made with the
database open, but not in ARCHIVELOG mode, cannot be used for recovery.
b) Select Availability > Schedule Backup (in the Manage section).
If you find that the Oracle-Suggested Backup strategy fits your needs exactly, you would choose this option. For practice purposes, you will schedule a customized backup.
c) Select Whole Database as the object to be backed up.
d) Confirm or enter oracle and oracle for Host Credentials Username and
Password for your server.
e) Click Schedule Customized Backup.
f) On the Schedule Customized Backup: Options page, select Full Backup for your
Backup Type, and select the Use as the base of an incremental backup strategy
check box.
g) Select Online Backup as Backup Mode.
h) In the Advanced section, select Also back up all archived logs on disk and
Delete all archived logs from disk after they are successfully backed up,
and then click Next to continue.
i) On the Schedule Customized Backup: Settings page, select Disk for your backup
location. (Notice that your Disk Backup Location is retained and that you could
override the current settings for a one-off backup. But do not click it this time.)
j) Click Next.
k) Accept all the defaults on the Schedule Customized Backup: Schedule page and
then click Next to continue.
Note: Schedule Type should be One Time (Immediately).
l) On the Schedule Customized Backup: Review page, review the RMAN script,
and then click Submit Job.
m) Click View Job to monitor the status of the backup job. The time for this backup
depends on your hardware and system resources.
n) Click your browser's Refresh or Requery button until the job is completed.
6) Schedule nightly disk-based incremental online backups for your whole database,
without archived logs. Schedule it for execution at 11:00 PM. The schedule should be
in effect indefinitely.
a) In Enterprise Manager, select Availability > Schedule Backup (in the Manage
section).
b) Select Whole Database as the object to be backed up.
c) Confirm or enter oracle and oracle for Host Credentials Username and
Password for your server, and then click Schedule Customized Backup.
d) On the Schedule Customized Backup: Options page, select Incremental Backup
Many failures of the Oracle database can be traced to some sort of media
failure, such as disk or controller failure. In this practice, you encounter a number of
problems from which you need to recover the database.
Recover from the loss of a control file
Recover from the loss of a data file
Recover from the loss of a redo log member
Recover from the loss of a file in the SYSTEM tablespace
SQL script files are provided for you in the /home/oracle/labs directory. If
needed, use the appendixes for Linux and for SQL syntax. After you set up a failure with
a SQL script, you must complete the recovery before continuing with any other practice.
Note: Your system may have different OS file names than shown here. Your output
might look different. (To conserve space, blank lines have been removed.)
Before beginning one of the recovery scenarios, you need to run a script that will prepare
the environment for the remaining recovery practices.
1) Before setting up an individual problem, you need to navigate to your labs directory
and (in SQL*Plus) execute the lab_16_01.sql script as the SYS user. This script
prepares some procedures to be called by the rest of this practice.
In this practice, your system experiences the loss of a control file. You then go through
the steps to recover from this loss.
1) Continue in your SQL*Plus session as the SYS user. Execute the lab_16_02.sql
script. This script deletes one of your control files.
2) The Help desk begins receiving calls saying that the database appears to be down.
Troubleshoot and recover as necessary. Use Enterprise Manager to try to start up the
database, and use SQL*Plus if needed.
a) In Enterprise Manager, navigate to the Database home page. It reports that the
database is down and offers you the chance to start it up again.
Note: You may see a message stating that an internal error has occurred. If so, keep trying to connect using the Enterprise Manager URL. Eventually it will display the database home page.
b) Click Startup. If you see a Connection Refused message, ignore it; the connection
will eventually be established.
c) Enter oracle as Username and Password for Host Credentials and click OK.
d) Click Yes to confirm your attempted startup.
e) The startup of the instance fails with Enterprise Manager. Click View Details for
more information.
f) Note the following, and then click OK:
ORA-00205: error in identifying control file, check alert
log for more info
g) Alternatively, in a new SQL*Plus session, check the current status of the instance
as the SYS user and attempt to mount it with the following commands:
select status from v$instance;
alter database mount;
3) The instance cannot move to the mount stage because it cannot find one of the control
files. To find the locations of the alert log and of diagnostic information, enter the
following SELECT statement:
SELECT NAME, VALUE FROM V$DIAG_INFO;
4) Look at the last 25 lines in the log.xml file to see if you can find out what the
problem is. Still inside your SQL*Plus session, enter the following command (on one
line):
host tail -25
/u01/app/oracle/diag/rdbms/orcl/orcl/alert/log.xml
5) Note that in the preceding example, the control02.ctl file is missing. This
might be different in your environment. Restore the control file that is missing for
your database by copying an existing control file. Enter the following command with
your correct file names (on one line):
host cp /u01/app/oracle/oradata/orcl/control01.ctl
/u01/app/oracle/oradata/orcl/control02.ctl
6) (Optional) To view the content of the directory, enter:
host ls /u01/app/oracle/oradata/orcl
7) Now mount and open the database with the following commands:
connect / as sysdba
alter database mount;
alter database open;
a) Why did you have to use two commands to move the instance state from
NOMOUNT to OPEN?
Answer: Because the ALTER DATABASE command enables you to change only
one state level for each command
b) Why did you use operating system commands to restore the control file instead of
using Oracle Recovery Manager?
Answer: Because all control files are identical. As long as any one control file is
intact, it can be used to restore the others.
8) Exit all sessions and close all windows
h) A Processing window appears, followed by the Job Activity page. You should see
a message that the job was successfully created. (Your link name is probably
different.)
i) Click the job name link.
j) On the Job Run page, check the Status in the Summary section. If it is Running, use your browser's Refresh or Requery button until the job is completed.
k) In your SQL*Plus session, verify that the HR.COUNTRIES table is now
accessible.
EXPORTING/IMPORTING
In the recent past, you received a number of questions about the HR
schema. To analyze them without interfering in daily activities, you decide to use the
Data Pump Wizard to export the HR schema to file. When you perform the export, you
are not sure into which database you will be importing this schema.
In the end, you learn that the only database for which management approves an import is
the orcl database. So you perform the import with the Data Pump Wizard, remapping
the HR schema to DBA1 schema.
Then you receive two data load requests for which you decide to use SQL*Loader.
In this practice, you first grant the DBA1 user the privileges necessary to provide access
to the DATA_PUMP_DIR directory. You then export the HR schema so that you can then
import the tables you want into the DBA1 schema. In the practice, you import only the
EMPLOYEES table at this time.
1) First, you need to grant the DBA1 user the appropriate privileges on the
DATA_PUMP_DIR directory and create the users and roles required for this practice.
A script exists that performs all the steps required to configure your environment for
this practice.
a) Review the lab_17_01.sql script, which grants the DBA1 user privileges on
the DATA_PUMP_DIR directory and performs other configurations to your
environment, by executing the following in your labs directory:
$ cat lab_17_01.sql
b) The lab_17_01.sh script calls the lab_17_01.sql script. Execute the
lab_17_01.sh script now:
2) Log in to Enterprise Manager as the DBA1 user in the Normal role and export the
HR schema.
a) Invoke Enterprise Manager as the DBA1 user as the Normal role for your orcl database.
In this practice, you load data into the PRODUCT_MASTER table by using SQL*Loader
via Enterprise Manager Database Control. Data and control files are provided.
1) As the DBA1 user, use Enterprise Manager to load the lab_17_02_01.dat data
file. This data file contains rows of data for the PRODUCT_MASTER table. The
lab_17_02_01.ctl file is the control file for this load.
Optionally, view the lab_17_02_01.dat and lab_17_02_01.ctl files to
learn more about their structure before going further.
a) Invoke Enterprise Manager as the DBA1 user as the Normal role for your orcl
database.
b) Select Data Movement > Move Row Data > Load Data from User Files.
c) Click Use Existing Control File. If not already entered, enter oracle as
Username and as Password, click Save as Preferred Credential, and then click
Continue.
d) On the Load Data: Control File page, enter
/home/oracle/labs/lab_17_02_01.ctl as the control file name and
path, or use the flashlight icon to select this control file. Click Next.
e) On the Load Data: Data File page, click Provide the full path and name on
the database server machine and enter
/home/oracle/labs/lab_17_02_01.dat as the data file name and path,
or use the flashlight icon to select this data file. Click Next.
f) On the Load Data: Load Method page, select Conventional Path, and then
click Next.
g) On the Load Data: Options page, accept all defaults, but enter
/home/oracle/labs/lab_17_02_01.log as the log file name and path.
Review the advanced options if you want, but do not change any, and then click
Next.
h) On the Load Data: Schedule page, enter lab_17_02_01 as Job Name and
Load data into the PRODUCT_MASTER table as Description. Let the
job start immediately, and then click Next.
i) On the Load Data: Review page, review the loading information and
parameters, and then click Submit Job.
j) Click the link to the LAB_17_02_01 job to monitor the progress. After the job
shows as successfully completed, move on to the next step.
k) Confirm your results by viewing your lab_17_02_01.log file in your
/home/oracle/labs directory.
2) As the INVENTORY user, load data into the PRODUCT_ON_HAND table by using
SQL*Loader command line. The lab_17_02_02.dat data file contains rows of
data for the PRODUCT_ON_HAND table. The lab_17_02_02.ctl file is the
control file for this load.
Optionally, view the lab_17_02_02.dat and lab_17_02_02.ctl files to
learn more about their structure before going further.
To perform export:
To perform import:
1. Click the Data Movement tab.
2. Under the Move Row Data region, click Import from Export Files.
3. Select DATA_PUMP_DIR as the directory object and enter the DMP file name in the file name field.
4. Select Schemas as the import type and enter the host credentials.
5. In step 1, add the schema to be imported: select the schema, click Select, and then click Next.
6. Under Re-Map Schemas, click Add Another Row. Ensure that the source and destination schemas are the same. Click Next.
7. In this step, select DATA_PUMP_DIR as the directory object and enter the log file name. Click Next.
8. In this step, type the job name and description. Leave the schedule at the default and click Next.
9. Click Submit to finish the job.
To import:
[oracle@rajiv dpdump]$ impdp testemp/oracle DIRECTORY=myexport
DUMPFILE=hremp1.DMP REMAP_SCHEMA=hr:testemp TABLES=hr.employees
LOGFILE=empimport1.log
Database Auditing
To enable auditing:
1. Connect to OEM as SYS.
2. Click the Server tab.
3. Under the Database Configuration region, click the Initialization Parameters link.
4. Click the SPFile tab.
5. Type audit in the Name field and click Go to view AUDIT_TRAIL.
6. Change the audit_trail parameter from db to xml.
7. Click Apply.
8. Restart the database for the change to take effect.
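The same change can be made directly from SQL*Plus; a sketch of the equivalent commands (AUDIT_TRAIL is a static parameter, so SCOPE=SPFILE and a restart are required, exactly as in the steps above):

```sql
CONNECT / AS SYSDBA

-- Static parameter: takes effect only at the next startup.
ALTER SYSTEM SET audit_trail = 'XML' SCOPE = SPFILE;

SHUTDOWN IMMEDIATE
STARTUP
```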
Steps to set the object for auditing:
1. Click the Server tab.
2. Under the Security region, click the Audit Settings link.
3. Click the Audited Objects tab.
4. Click the Add button.
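Behind these EM steps, an AUDIT statement is issued. For example, to audit DML on the HR.JOBS table (the object used in the auditing scenario later in this practice), the generated statement might look roughly like this:

```sql
-- Record one audit entry for each DML statement on HR.JOBS.
AUDIT INSERT, UPDATE, DELETE ON hr.jobs BY ACCESS;
```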
Background: You need to create a user account for Jenny Goodman, the new human
resources department manager. There are also two new clerks in the human resources
department, David Hamby and Rachel Pandya. All three must be able to log in to the
orcl database and to select data from, and update records in, the HR.EMPLOYEES
table. The manager also needs to be able to insert and delete new employee records.
Ensure that if the new users forget to log out at the end of the day, they are automatically
logged out after 15 minutes. You also need to create a new user account for the inventory
application that you are installing.
In this practice, you create the INVENTORY user to own the new Inventory application.
You create a profile to limit the idle time of users. If a user is idle or forgets to log out
after 15 minutes, the user session is ended.
sqlplus / as sysdba << EOF
drop user inventory cascade;
create user inventory identified by verysecure
default tablespace inventory;
grant connect, resource to inventory;
EOF
Create a profile named HRPROFILE that allows only 15 minutes idle time.
a) Invoke Enterprise Manager as the SYS user in the SYSDBA role for your orcl
database.
b) Click the Server tab, and then click Profiles in the Security section.
c) Click the Create button.
d) Enter HRPROFILE in the Name field.
e) Enter 15 in the Idle Time (Minutes) field.
f) Leave all the other fields set to DEFAULT.
g) Click the Password tab, and review the Password options, which are currently all
set to DEFAULT.
h) Optionally, click the Show SQL button, review your underlying SQL statement,
and then click Return.
i) Finally, click OK to create your profile
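The Show SQL button in step h) reveals a statement along these lines (a sketch; all other limits remain at DEFAULT):

```sql
-- IDLE_TIME is in minutes: sessions idle for 15 minutes are ended.
CREATE PROFILE hrprofile LIMIT
  idle_time 15;
```

Note that resource limits such as IDLE_TIME are enforced only when the RESOURCE_LIMIT initialization parameter is set to TRUE.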
Create the role named HRCLERK with SELECT and UPDATE permissions on the
HR.EMPLOYEES table.
a) Click the Server tab and then click Roles in the Security section.
b) Click the Create button.
c) Enter HRCLERK in the Name field. This role is not authenticated.
d) Click Object Privileges tab.
e) Select Table from the Select Object Type drop-down list, and then click Add.
f) Enter HR.EMPLOYEES in the Select Table Objects field.
g) Move the SELECT and UPDATE privileges to the Selected Privileges box. Click
OK.
h) Click the Show SQL button, and review your underlying SQL statement.
i) Click Return, and then click OK to create the role.
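The underlying SQL for this role, as Show SQL in step h) would display it, is approximately:

```sql
-- The role is not authenticated (no password required to enable it).
CREATE ROLE hrclerk NOT IDENTIFIED;
GRANT SELECT, UPDATE ON hr.employees TO hrclerk;
```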
Create the role named HRMANAGER with INSERT and DELETE permissions on the
HR.EMPLOYEES table. Grant the HRCLERK role to the HRMANAGER role.
a) Click the Server tab, and then click Roles in the Security section.
b) Click Create.
c) Enter HRMANAGER in the Name field. This role is not authenticated.
d) Click Object Privileges tab.
e) Select Table from the Select Object Type drop-down list, and then click Add.
f) Enter HR.EMPLOYEES in the Select Table Objects field.
g) Move the INSERT and DELETE privileges to the Selected Privileges box. Click
OK.
h) Click the Roles tab, and then click Edit List.
i) Move the HRCLERK role into the Selected Roles box, and then click OK.
j) Click the Show SQL button, and review your underlying SQL statement.
k) Click Return, and then click OK to create the role.
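Similarly, the underlying SQL for the manager role is approximately:

```sql
CREATE ROLE hrmanager NOT IDENTIFIED;
GRANT INSERT, DELETE ON hr.employees TO hrmanager;

-- HRMANAGER inherits SELECT and UPDATE through the HRCLERK role.
GRANT hrclerk TO hrmanager;
```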
MANAGING SCHEMA
Leave this session connected in the state that it is currently. Do not exit at this
time.
Notice that this session appears to be hung. Leave this session as is and move on
to the next step.
Using Enterprise Manager, click the Blocking Sessions link on the Performance page
and detect which session is causing the locking conflict.
a) In Enterprise Manager, click the Performance page.
a) Select the NGREENBERG session, and then click View Session.
b) Click the hash value link named Previous SQL.
c) Note the SQL that was most recently run.
a) Click the browser's Back button.
b) Now, on the Session Details: NGREENBERG page, click Kill Session.
c) Leave the Options set to Kill Immediate, and then click Show SQL to see the
statement that is going to be executed to kill the session.
d) Click Return, and then click Yes to carry out the KILL SESSION command.
Return to the SQL*Plus command window, and note that SMAVRIS's update has
now completed successfully. It may take a few seconds for the success message to
appear.
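The same conflict can be diagnosed and resolved from SQL*Plus; a sketch (the SID and SERIAL# values below are examples and must be taken from your own query result):

```sql
-- Find sessions that are blocked and identify who is blocking them.
SELECT sid, serial#, username, blocking_session
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- Kill the blocking session, using values from the query above.
ALTER SYSTEM KILL SESSION '144,5' IMMEDIATE;
```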
Note: Your information will look different on all analysis screenshots, based on your
analysis period and the system activity during this period.
c) Question: Looking at the preceding screenshot, how many errors did this system
encounter?
Answer: None
d) Question: Looking at the preceding screenshot, what is the duration of the longest
running query?
Answer: 28 minutes (Your answer may be different.)
e) Click the Plus icon to show related graphs.
f) Question: How many graphs are displayed?
Answer: Three. (Undo Tablespace Usage, Undo Retention Auto-Tuning, and
Undo Generation Rate)
g) Question: Looking at the preceding Undo Retention Auto-Tuning graph, could
this system support flashback above and beyond the current longest running
query?
Answer: Yes (but most likely not enough to support the required 48 hours).
2) Modify the undo retention time and calculate the undo tablespace size to support the
requested 48-hour retention.
a) Click the General tab to go back to the General Automatic Undo Management
page.
b) Under the Undo Advisor section, select Specified manually to allow for longer
duration queries or flashback.
c) Enter 48 hours as Duration and click the Run Analysis button.
d) When the Undo Advisor is finished, take a look at the results.
It looks like the undo tablespace is very close to the recommended undo
tablespace size. This is okay for most workloads, but the recommendation is to set
your undo tablespace size to be three times the minimum size. This means that
you should change your undo tablespace size to be 846 MB.
Note: Your recommended size might be different from what is shown here, so
adjust the size accordingly.
e) Click the Show SQL button in the upper-right corner of the General Automatic
Undo Management page.
f) This command will change the undo retention to support the 48-hour requirement.
Review the SQL statement and click Return.
g) Click Apply to make the change to undo retention.
h) Now adjust the undo tablespace size by clicking the Edit Undo Tablespace button.
i) Scroll down to Datafiles and click Edit to make a change to the
undotbs01.dbf file size.
j) Change the file size to 846 MB and click Continue.
k) Verify the SQL commands that will be executed by clicking Show SQL.
Click Return.
l) Click Apply to change the tablespace size.
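The EM changes in this step amount to two statements of roughly this form (the data file path is an example for this environment; adjust both values to match your own analysis results):

```sql
-- 48 hours expressed in seconds (48 * 60 * 60 = 172800).
ALTER SYSTEM SET undo_retention = 172800 SCOPE = BOTH;

-- Resize the undo data file to the recommended size.
ALTER DATABASE DATAFILE
  '/u01/app/oracle/oradata/orcl/undotbs01.dbf' RESIZE 846M;
```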
3) Go back to the Automatic Undo Management to see the results of the changes you
just made. You see that the undo retention time has increased to support the 48 hours
requirement. Your undo tablespace size has also increased based on the changes you
made to the size of the datafile for the undo tablespace.
a) Question: Which Flashback operations are potentially affected by this change?
Answer: Flashback query, Flashback transaction, and Flashback table.
b) Question: Does undo data survive the shutdown of a database?
Answer: Yes, undo data is persistent.
AUDITING
Background: You have just been informed of suspicious activities in the HR.JOBS table
in your orcl database. The highest salaries seem to fluctuate in a strange way. You
decide to enable standard database auditing and monitor data manipulation language
(DML) activities in this table.
Log in as the DBA1 user (with oracle password, connect as SYSDBA) and perform the
necessary tasks either through Enterprise Manager Database Control or through
SQL*Plus. All scripts for this practice are in the /home/oracle/labs directory.
1) Use Enterprise Manager to enable database auditing. Set the AUDIT_TRAIL
parameter to XML.
a) Invoke Enterprise Manager as the DBA1 user in the SYSDBA role for your orcl
database.
b) Click the Server tab, and then click Audit Settings in the Security section.
c) Click the value of Audit Trail (the DB link).
d) On the Initialization Parameters page, click the SPFile tab.
e) Enter audit in the Name field and then click Go.
f) For the audit_trail parameter, select the XML value.
g) Click Show SQL.
h) Review the statement and then click Return.
i) On the Initialization Parameters page, click Apply.
2) Because you changed a static parameter, you must restart the database. Do so by
running the lab_11_02.sh script.
a) In a terminal window, enter:
b) Continue with the next step when you see that the database is restarted and the
7) Maintain your audit trail: Because you are completely finished with this task, back
up and delete all audit files from the /u01/app/oracle/admin/orcl/adump
directory.
a) In a terminal window, enter:
b) Create a backup of the audit trail files, and then remove the files
c) Close the terminal window.
database.
b) Click Server > Control Files (in the Storage section).
Question 1: On the Control Files: General page, how many control files do you
have?
Answer: Three (in the preceding example).
Question 2: How would you add another control file if you needed to?
Answer: Adding a control file is a manual operation. To perform this, you must:
Shut down the database
Use the operating system to copy an existing control file to the location where
you want your new file to be.
Start the database by using Enterprise Manager. Unlike a normal startup, you
would use Advanced Options to select a different startup mode. Select Start
the instance to leave the instance in the NOMOUNT state.
Edit the CONTROL_FILES initialization parameter to point to the new
control file.
Continue the STARTUP database operation until the database is in an open
state.
Note
This answer does not apply to an OMF database because the control files in
that case would all have to be re-created.
Alternatively, if you did not want to use Enterprise Manager to perform the
steps, you could perform the steps outlined in the Multiplexing Control Files
slide in the Backup and Recover Concepts lesson.
c) Click Backup to Trace.
d) When you receive the success message, note the trace directory location, and then
click OK.
e) Optionally, use a terminal window, logged in as the oracle user to view the
trace file name at the end of the alert log by executing the following command:
cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace
tail alert_orcl.log
The following output shows only the last few lines:
f) Optionally, to view size and usage of the different sections within the control file,
click the Record Section tabbed page.
Your numbers could look different. For additional information, click Help in the
upper-right corner of the page.
2) Review the flash recovery area configuration and change the size to 8 GB.
a) In Enterprise Manager, select Availability > Recovery Settings in the Setup
section.
b) Scroll to the bottom of the page.
different hard drives, preferably with different disk controllers, to minimize the
risk of any single hardware failure destroying an entire log group.
4) You notice that, for each log group, the Archived column has a value of No. This
means that your database is not retaining copies of redo logs to use for database
recovery, and in the event of a failure, you will lose all data since your last backup.
Place your database in ARCHIVELOG mode, so that redo logs are archived.
Note: You must continue with step 5, so that your changes are applied.
a) In Enterprise Manager, select Availability > Recovery Settings in the Setup
section.
b) In the Media Recovery region, select the ARCHIVELOG Mode check box.
c) Verify that Log Archive Filename Format contains %t, %s, and %r.
d) Notice the current configuration of redundant archive log destinations: one to the
flash recovery area and the other to
/u01/app/oracle/product/11.1.0/db_1/dbs/arch. The database
is preconfigured to save archived logs to the flash recovery area (Archive Log
Destination 10), as well as to a redundant location (Archive Log Destination 1).
Note: If you add archive log destinations, you must create the directory, if it does
not already exist.
e) Click Apply.
f) When prompted whether you want to restart the database now, click Yes.
g) Enter the credentials to restart the database (oracle as the Host Credentials, and
sys/oracle as SYSDBA as Database Credentials), and then click OK.
h) When asked to confirm, click Yes again.
i) Should you receive an error during the shutdown and startup activity, click OK to
acknowledge the error, and then click Refresh again. (You might have been
simply faster than the database.)
5) Optionally, use SQL*Plus to check whether your database is in ARCHIVELOG mode.
In a terminal window, log in to SQL*Plus as SYSDBA and run the archive log
list command.
Now that your database is in ARCHIVELOG mode, it will continually archive a copy
of each online redo log file before reusing it for additional redo data.
Note: Remember that this consumes space on the disk and that you must regularly
back up older archive logs to some other storage.
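For the optional check in step 5, the key lines of the archive log list output in SQL*Plus look like this once the change has taken effect (sequence numbers and destinations will vary):

```sql
SQL> ARCHIVE LOG LIST
Database log mode              Archive Mode
Automatic archival             Enabled
```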
loss of data. Establish the backup policy to automatically back up the SPFILE and
control file. Perform an immediate backup to disk and schedule nightly backup jobs that
repeat indefinitely.
In this practice, you perform an immediate backup to disk and schedule a nightly backup
job.
1) What is the difference between a backup set and an image copy?
Answer: A backup set contains data and archive log files packed in an Oracle
proprietary format. Files must be extracted before use. Image copies are the
equivalent of operating system file copies and can be used for restore operations
immediately.
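In RMAN terms, the difference between the two looks like this (a sketch):

```sql
-- Packs data into Oracle-proprietary backup pieces (backup sets);
-- files must be restored through RMAN before use.
RMAN> BACKUP DATABASE;

-- Produces byte-for-byte image copies, usable directly for restore.
RMAN> BACKUP AS COPY DATABASE;
```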
2) What is the destination of any disk backups that are done?
a) Log in to Enterprise Manager as the DBA1 user in the SYSDBA role and select
Availability > Backup Settings.
b) Note the message under the Disk Backup Location that says the flash recovery
area is the current disk backup location.
3) Establish the backup policy to automatically back up the SPFILE and control file.
a) Click the Policy tab under the Backup Settings pages.
b) Click Automatically backup the control file and server parameter file
(SPFILE) with every backup and database structural change.
c) Scroll to the bottom and enter oracle and oracle for Host Credentials
Username and Password for your server, and click Save as Preferred
Credential.
4) Test making a backup to disk, as a backup set, with oracle for Host Credentials.
a) Click the Device tab under the Backup Settings pages.
b) Select Backup Set as your Disk Backup Type.
c) Scroll to the bottom and ensure the Host Credentials are set to oracle.
d) Scroll to the top of the page and click Test Disk Backup.
e) A processing message appears. When the test finishes, and you see the Disk
Backup Test Successful! message, click OK.
5) Back up your entire database, with archive logs, while the database is open for user
activity. This backup should be the base for an incremental backup strategy.
a) Question: What prerequisite must be met to create a valid backup of a database
without shutting it down?
Answer: The database must be in ARCHIVELOG mode. Backups made with the
database open, but not in ARCHIVELOG mode, cannot be used for recovery.
b) Select Availability > Schedule Backup (in the Manage section).
If you find that the Oracle-Suggested Backup strategy fits your needs exactly, you
would choose this option. For practice purposes, you will schedule a customized
backup.
c) Select Whole Database as the object to be backed up.
d) Confirm or enter oracle and oracle for Host Credentials Username and
j) Select By Days from the Frequency Type drop-down list, enter 1 in the Repeat
Every field, confirm that Indefinite is selected as the Repeat Until value, and enter
11:00 PM as Time.
k) Click Next to continue.
l) On the Schedule Customized Backup: Review page, review your Settings and
RMAN script.
m) Click Submit Job, and then click OK.
n) Click Jobs on the Availability page in the Related Links section to see the
scheduled job in the Job Activity list.
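The RMAN script generated for this customized whole-database backup (visible on the Review page in step l) is essentially of this form; the exact options depend on your settings:

```sql
-- Back up all data files plus all archived redo logs.
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```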
Many failures of the Oracle database can be traced to some sort of media
failure, such as disk or controller failure. In this practice, you encounter a number of
problems from which you need to recover the database.
Recover from the loss of a control file
Recover from the loss of a data file
Recover from the loss of a redo log member
Recover from the loss of a file in the SYSTEM tablespace
SQL script files are provided for you in the /home/oracle/labs directory. If
needed, use the appendixes for Linux and for SQL syntax. After you set up a failure with
a SQL script, you must complete the recovery before continuing with any other practice.
Note: Your system may have different OS file names than shown here. Your output
might look different. (To conserve space, blank lines have been removed.)
Before beginning one of the recovery scenarios, you need to run a script that will prepare
the environment for the remaining recovery practices.
1) Before setting up an individual problem, you need to navigate to your labs directory
and (in SQL*Plus) execute the lab_16_01.sql script as the SYS user. This script
prepares some procedures to be called by the rest of this practice.
In this practice, your system experiences the loss of a control file. You then go through
the steps to recover from this loss.
1) Continue in your SQL*Plus session as the SYS user. Execute the lab_16_02.sql
script. This script deletes one of your control files.
2) The Help desk begins receiving calls saying that the database appears to be down.
Troubleshoot and recover as necessary. Use Enterprise Manager to try to start up the
database, and use SQL*Plus if needed.
a) In Enterprise Manager, navigate to the Database home page. It reports that the
In this practice, your system experiences the loss of a redo log member. You then go
through the steps to recover from this loss.
1) Make sure that you are in your labs directory. Using SQL*Plus, execute the
lab_16_04.sql script as the SYS user. The lab_16_04.sql script deletes one
of your redo log files. See the error in the alert log and recover from it.
2) The database continues to function normally, and no users are complaining. Log in to
Enterprise Manager with the DBA1 username as SYSDBA. On the Database home
page, view alerts similar to the following ones:
If you do not see similar alerts, you may need to wait a few minutes and refresh the
page. One of the failures listed may be left over from the data file recovery you
performed in the previous practice.
3) Click Availability > Perform Recovery (in the Manage section).
4) On the Perform Recovery page, you see the Failure Description and could directly
begin correcting the failure. But for practice purposes, you follow the steps in the
Data Recovery Advisor. Scroll down and ensure that your host credentials are set
(oracle for both username and password). Then click the Advise and Recover
button (which is one of the ways to invoke the Data Recovery Advisor).
5) On the View and Manage Failures page, ensure that the failure is selected, and click
Advise.
6) The Manual Actions page suggests that you manually restore the lost file. In the preceding example,
redo03.log is deleted. Do not click any button at this point in time.
7) In a new terminal window, as the oracle user, copy an existing redo log from the
same redo log group to the missing file.
Note: The actual redo log member that was lost on your machine may be different
than the one shown here. Make sure that you are replacing the file names as
appropriate for your failure.
cd /u01/app/oracle/oradata/orcl
ls
cp /u01/app/oracle/flash_recovery_area/redo02b.log redo02.log
ls
exit
8) Now return to your Manual Actions page in Enterprise Manager and click the Reassess Failures button.
a) Note that there are now no failures found.
b) Question: Why did the database not crash?
Answer: Because a single missing member is noncritical and does not affect the
operation of the database. As long as there is at least one good member for each
log group, the database operation continues.
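Before copying the member back, you can also confirm from SQL*Plus which member was lost; a lost member is typically flagged INVALID in V$LOGFILE once the failure is detected (a sketch):

```sql
-- List all redo log members with their status per group.
SELECT group#, status, member
FROM   v$logfile
ORDER  BY group#;
```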
In this practice, your system experiences the loss of a file in the SYSTEM tablespace. You
then go through the steps to recover from this loss.
1) Why is recovery from the loss of a system data file or a data file belonging to an undo
tablespace different from recovering an application data file?
Answer: Because recovery of system or undo data files must be done with the
database closed, whereas recovery of an application data file can be done with the
database open and available to users
2) As the SYS user, execute the lab_16_05.sql script in your labs directory. This
script deletes the system data file.
3) In Enterprise Manager, review the Database home page. If you see a message that
says the connection was refused, dismiss it and reenter the EM home page URL in the
browser. You may need to try several times before you see the Database home page.
4) The database is shut down. Attempt to start your database.
a) Click Startup to try to open it.
b) On the Startup/Shutdown:Specify Host and Target Database Credentials page,
enter oracle and oracle as Host Credentials. Click OK.
c) On the Startup/Shutdown:Confirmation page, click Yes.
d) A progress page appears, followed by an error message.
5) Note that the database is in a mounted state. Click Perform Recovery.
a) Enter oracle and oracle as Host Credentials, and click Continue.
b) On the Database Login page, enter DBA1, oracle, and SYSDBA and click
Login.
6) On the Perform Recovery page, you could select the Oracle Advised Recovery, but
for practice purposes, continue with a User Directed Recovery.
a) In the User Directed Recovery section, select Datafiles from the Recovery Scope
drop-down list and Recover to current time as Operation Type.
b) Scroll down and enter oracle and oracle as Host Credentials
c) Click Recover.
d) On the Perform Object Level Recovery: Datafiles page, you should see the
missing data file. Click Next.
e) Because the problem is simply a deleted file rather than a bad hard drive, there is
no need to restore to a different location. Select No. Restore the files to the
default location and then click Next.
f) On the Perform Object Level Recovery: Review page, view your current options
and the data file. Click Edit RMAN Script to review the RMAN commands.
g) Review the RMAN commands and click Submit.
h) A processing page appears, followed by the Perform Recovery: Result page. The
duration of this operation depends on your system resources. The recovery
operation should be successful.
i) On the Perform Recovery: Result page, click Open Database.
j) After you see the success message, click OK.
k) Verify that the database is open and operating normally by logging in to EM as
your DBA1 user as SYSDBA, and reviewing the Database home page.
EXPORTING/IMPORTING
In the recent past, you received a number of questions about the HR
schema. To analyze them without interfering with daily activities, you decide to use the
Data Pump Wizard to export the HR schema to a file. When you perform the export, you
are not sure into which database you will be importing this schema.
In the end, you learn that the only database for which management approves an import is
the orcl database. So you perform the import with the Data Pump Wizard, remapping
the HR schema to the DBA1 schema.
Then you receive two data load requests for which you decide to use SQL*Loader.
In this practice, you first grant the DBA1 user the privileges necessary to provide access
to the DATA_PUMP_DIR directory. You then export the HR schema so that you can then
import the tables you want into the DBA1 schema. In the practice, you import only the
EMPLOYEES table at this time.
1) First, you need to grant the DBA1 user the appropriate privileges on the
DATA_PUMP_DIR directory and create the users and roles required for this practice.
A script exists that performs all the steps required to configure your environment for
this practice.
a) Review the lab_17_01.sql script, which grants the DBA1 user privileges on
the DATA_PUMP_DIR directory and performs other configurations to your
environment, by executing the following in your labs directory:
$ cat lab_17_01.sql
b) The lab_17_01.sh script calls the lab_17_01.sql script. Execute the
lab_17_01.sh script now:
$ ./lab_17_01.sh
2) Log in to Enterprise Manager as the DBA1 user in the Normal role and export the
HR schema.
a) Invoke Enterprise Manager as the DBA1 user with the Normal role for your orcl
database. The Connect As setting should be Normal.
b) Select Data Movement > Move Row Data > Export to Export Files.
c) Select Schemas, enter oracle as Username and Password, select Save as
Preferred Credential, and then click Continue.
d) On the Export: Schemas page, click Add, select the HR schema, and then click the
Select button.
e) You see that HR is now in the list of schemas. Click Next.
f) On the Export: Options page, select DATA_PUMP_DIR from the Directory
Objects drop-down list, and enter hrexp.log as Log File.
g) Review Advanced Options (but do not change), and then click Next.
h) On the Export: Files page, select DATA_PUMP_DIR from the Directory
Object drop-down list, enter HREXP%U.DMP as File Name, and then click Next.
i) On the Export: Schedule page, enter hrexp as Job Name and Export HR
schema as Description, accept the immediate job start time, and then click
Next.
j) On the Export: Review page, click Show PL/SQL and review the PL/SQL that
the Export Wizard helped you to create.
k) Click Submit Job to submit the job.
l) Click the link to the HREXP job to monitor the progress. When the job shows as
successfully completed, move on to the next step.
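For comparison, the same export can be launched from the command line with the expdp utility. This is a sketch, not the exact job the wizard submits; the directory, dump file, log file, and job names simply mirror the values entered in the wizard above:

```sql
expdp dba1/oracle SCHEMAS=hr DIRECTORY=data_pump_dir
    DUMPFILE=HREXP%U.DMP LOGFILE=hrexp.log JOB_NAME=hrexp
```

The %U substitution variable in the dump file name generates a unique two-digit suffix per file, which is why the import step later references HREXP01.DMP.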
3) Now, import the EMPLOYEES table from the exported HR schema into the DBA1
schema. To get a feeling for the command-line interface, you can use the impdp
utility from the command line to import the EMPLOYEES table into the DBA1 user
schema.
a) Enter the following entire command string. Do not press [Enter] before reaching
the end of the command:
impdp dba1/oracle DIRECTORY=data_pump_dir DUMPFILE=HREXP01.DMP
REMAP_SCHEMA=hr:dba1 TABLES=employees LOGFILE=empimport.log
Note: You may see errors on constraints and triggers not being created because only
the EMPLOYEES table is imported and not the other objects in the schema. These
errors are expected.
b) You can also verify that the import succeeded by viewing the log file.
$ cat /u01/app/oracle/admin/orcl/dpdump/empimport.log
4) Confirm that the EMPLOYEES table has been loaded into the DBA1 schema by
logging in to SQL*Plus as the DBA1 user and selecting data from the EMPLOYEES
table.
a) Log in to SQL*Plus as the DBA1 user.
b) Select a count of the rows from the EMPLOYEES table in the DBA1 schema, for
verification of the import.
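The verification in steps 4a and 4b amounts to two short commands (assuming the dba1/oracle credentials used throughout this practice):

```sql
-- Log in as DBA1 and count the imported rows;
-- the standard HR sample schema contains 107 employees.
CONNECT dba1/oracle
SELECT COUNT(*) FROM employees;
```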
In this practice, you load data into the PRODUCT_MASTER table by using SQL*Loader
via Enterprise Manager Database Control. Data and control files are provided.
1) As the DBA1 user, use Enterprise Manager to load the lab_17_02_01.dat data
file. This data file contains rows of data for the PRODUCT_MASTER table. The
lab_17_02_01.ctl file is the control file for this load.
Optionally, view the lab_17_02_01.dat and lab_17_02_01.ctl files to
learn more about their structure before going further.
a) Invoke Enterprise Manager as the DBA1 user with the Normal role for your orcl
database.
b) Select Data Movement > Move Row Data > Load Data from User Files.
c) Click Use Existing Control File. If not already entered, enter oracle as
Username and Password, click Save as Preferred Credential, and then click
Continue.
d) On the Load Data: Control File page, enter
/home/oracle/labs/lab_17_02_01.ctl as the control file name and
path, or use the flashlight icon to select this control file. Click Next.
e) On the Load Data: Data File page, click Provide the full path and name on
the database server machine and enter
/home/oracle/labs/lab_17_02_01.dat as the data file name and path,
or use the flashlight icon to select this data file. Click Next.
f) On the Load Data: Load Method page, select Conventional Path, and then
click Next.
g) On the Load Data: Options page, accept all defaults, but enter
/home/oracle/labs/lab_17_02_01.log as the log file name and path.
Review the advanced options if you want, but do not change any, and then click
Next.
h) On the Load Data: Schedule page, enter lab_17_02_01 as Job Name and
Load data into the PRODUCT_MASTER table as Description. Let the
job start immediately, and then click Next.
i) On the Load Data: Review page, review the loading information and
parameters, and then click Submit Job.
j) Click the link to the LAB_17_02_01 job to monitor the progress. After the job
shows as successfully completed, move on to the next step.
k) Confirm your results by viewing your lab_17_02_01.log file in your
/home/oracle/labs directory.
2) As the INVENTORY user, load data into the PRODUCT_ON_HAND table by using
SQL*Loader command line. The lab_17_02_02.dat data file contains rows of
data for the PRODUCT_ON_HAND table. The lab_17_02_02.ctl file is the
control file for this load.
Optionally, view the lab_17_02_02.dat and lab_17_02_02.ctl files to
learn more about their structure before going further.
a) Open a terminal window and navigate to the /home/oracle/labs directory.
b) Enter the following SQL*Loader command as one continuous line (do not press
[Enter] before reaching the end of the command):
sqlldr userid=inventory/verysecure control=lab_17_02_02.ctl
log=lab_17_02_02.log data=lab_17_02_02.dat
c) Confirm your results by viewing your lab_17_02_02.log file in your
/home/oracle/labs directory.
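For readers who skip viewing the provided files, a SQL*Loader control file for a comma-delimited load like these typically looks as follows. This is a hypothetical sketch; the column names are illustrative and the real lab_17_02_02.ctl may differ:

```sql
LOAD DATA
INFILE 'lab_17_02_02.dat'
APPEND
INTO TABLE product_on_hand
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
(product_id, quantity, warehouse_id)   -- illustrative column list
```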
Adding a Column
ALTER TABLE dept80
ADD (job_id VARCHAR2(9));
Table altered.
Modifying a Column
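The MODIFY example itself is missing from these notes; a representative statement (widening a column on the dept80 table used in the adjacent examples) would be:

```sql
ALTER TABLE dept80
MODIFY (last_name VARCHAR2(30));
```

A column's size can be increased freely; decreasing it, or changing its data type, is allowed only when the existing data still fits the new definition.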
Dropping a Column
Use the DROP COLUMN clause to drop columns you no
longer need from the table.
ALTER TABLE dept80
DROP COLUMN job_id;
Table altered.
ON DELETE CASCADE
Delete child rows when a parent key is deleted.
ALTER TABLE Emp2 ADD CONSTRAINT emp_dt_fk
FOREIGN KEY (Department_id)
REFERENCES departments ON DELETE CASCADE;
Table altered.
Deferring Constraints
Constraints can have the following attributes:
DEFERRABLE or NOT DEFERRABLE
INITIALLY DEFERRED or INITIALLY IMMEDIATE
ALTER TABLE dept2
ADD CONSTRAINT dept2_id_pk
PRIMARY KEY (department_id)
DEFERRABLE INITIALLY DEFERRED;
Deferring a constraint on creation.
Switch a named constraint to immediate checking:
SET CONSTRAINTS dept2_id_pk IMMEDIATE;
Switch all deferrable constraints for the session:
ALTER SESSION
SET CONSTRAINTS = IMMEDIATE;
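A short worked example of the deferred behavior: with the dept2_id_pk constraint created DEFERRABLE INITIALLY DEFERRED as above, a violation is not reported at INSERT time but at COMMIT (the values here are illustrative):

```sql
INSERT INTO dept2 (department_id) VALUES (10);  -- accepted
INSERT INTO dept2 (department_id) VALUES (10);  -- duplicate, still accepted

COMMIT;
-- ORA-02091: transaction rolled back
-- ORA-00001: unique constraint (DEPT2_ID_PK) violated
```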
Dropping a Constraint
Remove the manager constraint from the EMP2
table.
ALTER TABLE emp2
DROP CONSTRAINT emp_mgr_fk;
Table altered.
Remove the PRIMARY KEY constraint on the
DEPT2 table and drop the associated FOREIGN
KEY constraint on the EMP2.DEPARTMENT_ID
column.
ALTER TABLE dept2
DROP PRIMARY KEY CASCADE;
Table altered.
Disabling Constraints
Execute the DISABLE clause of the ALTER TABLE
statement to deactivate an integrity constraint.
Apply the CASCADE option to disable dependent
integrity constraints.
Enabling Constraints
Activate an integrity constraint currently disabled
in the table definition by using the ENABLE clause.
ALTER TABLE emp2
ENABLE CONSTRAINT emp_dt_fk;
Table altered.
A UNIQUE index is automatically created if you
enable a UNIQUE key or PRIMARY KEY constraint.
Cascading Constraints
The CASCADE CONSTRAINTS clause is used along
with the DROP COLUMN clause.
The CASCADE CONSTRAINTS clause drops all
referential integrity constraints that refer to the
primary and unique keys defined on the dropped
columns.
The CASCADE CONSTRAINTS clause also drops all
multicolumn constraints defined on the dropped
columns.
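For example, assuming another table holds a foreign key referencing dept2's primary key, the column can still be dropped in one statement:

```sql
ALTER TABLE dept2
DROP COLUMN department_id CASCADE CONSTRAINTS;
```

Without CASCADE CONSTRAINTS, the DROP COLUMN statement would fail because of the referencing constraint.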
Overview of Indexes
Indexes are created:
Automatically
PRIMARY KEY creation
UNIQUE KEY creation
Manually
CREATE INDEX statement
CREATE TABLE statement
CREATE INDEX with CREATE TABLE
Statement
CREATE TABLE NEW_EMP
(employee_id NUMBER(6)
PRIMARY KEY USING INDEX
(CREATE INDEX emp_id_idx ON
NEW_EMP(employee_id)),
first_name VARCHAR2(20),
last_name VARCHAR2(25));
Table created.
SELECT INDEX_NAME, TABLE_NAME
FROM USER_INDEXES
WHERE TABLE_NAME = 'NEW_EMP';
Function-Based Indexes
A function-based index is based on expressions.
The index expression is built from table columns,
constants, SQL functions, and user-defined
functions.
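The UPPER_DEPT_NAME_IDX index that is dropped in the next section could have been created as a function-based index like this (a standard example on the DEPARTMENTS table):

```sql
CREATE INDEX upper_dept_name_idx
ON departments (UPPER(department_name));

-- The optimizer can then use the index for case-insensitive searches:
SELECT *
FROM departments
WHERE UPPER(department_name) = 'SALES';
```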
Removing an Index
Remove an index from the data dictionary by
using the DROP INDEX command.
DROP INDEX index;
Remove the UPPER_DEPT_NAME_IDX index from the data dictionary:
DROP INDEX upper_dept_name_idx;
External Tables
Creating a Directory for the External Table
Create a DIRECTORY object that corresponds to the
directory on the file system where the external data
source resides.
CREATE OR REPLACE DIRECTORY emp_dir
AS '/.../emp_dir';
GRANT READ ON DIRECTORY emp_dir TO hr;
CREATE TABLE oldemp (
fname char(25), lname CHAR(25))
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
DEFAULT DIRECTORY emp_dir
ACCESS PARAMETERS
(RECORDS DELIMITED BY NEWLINE
NOBADFILE
NOLOGFILE
FIELDS TERMINATED BY ','
(fname POSITION ( 1:20) CHAR,
lname POSITION (22:41) CHAR))
LOCATION ('emp.dat'))
PARALLEL 5;
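Once created, the external table is queried like any ordinary table; the rows stay in emp.dat and are parsed at query time (no INSERT, UPDATE, or DELETE is possible on it):

```sql
SELECT fname, lname
FROM oldemp;
```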
Unconditional INSERT ALL (every INTO clause is applied to each row returned by the subquery):
INSERT ALL
INTO sal_history VALUES(EMPID,HIREDATE,SAL)
INTO mgr_history VALUES(EMPID,MGR,SAL)
SELECT employee_id EMPID, hire_date HIREDATE,
salary SAL, manager_id MGR
FROM employees
WHERE employee_id > 200;
Conditional INSERT FIRST (the first matching WHEN clause wins; rows matching no clause fall into ELSE. Only the ELSE branch survives in these notes; the WHEN branch shown is representative):
INSERT FIRST
WHEN SAL > 25000 THEN
INTO special_sal VALUES(DEPTID, SAL)
ELSE
INTO hiredate_history VALUES(DEPTID, HIREDATE)
SELECT department_id DEPTID, SUM(salary) SAL,
MAX(hire_date) HIREDATE
FROM employees
GROUP BY department_id;
Pivoting INSERT
Suppose you receive a set of sales records from a
nonrelational database table,
SALES_SOURCE_DATA, in the following format:
EMPLOYEE_ID, WEEK_ID, SALES_MON, SALES_TUE,
SALES_WED, SALES_THUR, SALES_FRI
You want to store these records in the
SALES_INFO table in a more typical relational
format:
EMPLOYEE_ID, WEEK, SALES
Using a pivoting INSERT, convert the set of sales
records from the nonrelational database table to
relational format.
Pivoting INSERT
INSERT ALL
INTO sales_info VALUES (employee_id,week_id,sales_MON)
INTO sales_info VALUES (employee_id,week_id,sales_TUE)
INTO sales_info VALUES (employee_id,week_id,sales_WED)
INTO sales_info VALUES (employee_id,week_id,sales_THUR)
INTO sales_info VALUES (employee_id,week_id, sales_FRI)
SELECT EMPLOYEE_ID, week_id, sales_MON, sales_TUE,
sales_WED, sales_THUR,sales_FRI
FROM sales_source_data;
Merging Rows
Insert or update rows in the EMPL3 table to match the
EMPLOYEES table.
MERGE INTO empl3 c
USING employees e
ON (c.employee_id = e.employee_id)
WHEN MATCHED THEN
UPDATE SET
c.first_name = e.first_name,
c.last_name = e.last_name,
...
c.department_id = e.department_id
WHEN NOT MATCHED THEN
INSERT VALUES(e.employee_id, e.first_name, e.last_name,
e.email, e.phone_number, e.hire_date, e.job_id,
e.salary, e.commission_pct, e.manager_id,
e.department_id);
Merging Rows
TRUNCATE TABLE empl3;
SELECT *
FROM empl3;
no rows selected
MERGE INTO empl3 c
USING employees e
ON (c.employee_id = e.employee_id)
WHEN MATCHED THEN
UPDATE SET
...
WHEN NOT MATCHED THEN
INSERT VALUES...;
SELECT *
FROM empl3;
RECOVERY MANAGER (RMAN): a tool that allows the user to perform backup and
recovery operations.
To start RMAN:
TERMINAL 1:
export ORACLE_SID=raj
lsnrctl start
emctl start dbconsole
sqlplus sys/manager as sysdba
SQL> startup
SQL> ARCHIVE LOG LIST
TERMINAL 2:
export ORACLE_SID=raj
rman target / nocatalog
Change the configuration setting to include the excluded tablespace for full backups:
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE TBSALERT CLEAR;
SQL> ALTER SYSTEM SET db_recovery_file_dest_size=5G SCOPE=both;
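With the fast recovery area sized, a whole-database backup, the prerequisite for the recovery scenarios that follow, can be taken at the RMAN prompt. A minimal sketch:

```sql
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```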
Database Recovery
-------------------------
Recovering Database with the FLASHBACK command:
Steps to enable Flashback Database:
SQL> SELECT SUM(salary) FROM employees;
SUM(SALARY)
-----------
691416
SQL> conn sys/manager as sysdba
Connected.
SQL> SELECT current_scn FROM v$database;
CURRENT_SCN
-----------
1050321
SQL> conn hr/hr
Connected.
SQL> UPDATE employees
2 SET salary=salary+100;
107 rows updated.
SQL> commit;
Commit complete.
SQL> conn sys/manager as sysdba
Connected.
SQL> SELECT current_scn FROM v$database;
CURRENT_SCN
-----------
1050366
Solution:
1. Shut down and restart the instance in the MOUNT state.
SQL> SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP MOUNT
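The remaining flashback steps are not preserved in these notes; given the SCN captured before the update (1050321 above), the recovery would typically be completed as follows:

```sql
SQL> FLASHBACK DATABASE TO SCN 1050321;
SQL> ALTER DATABASE OPEN RESETLOGS;

-- Verify that the salary update was undone:
SQL> SELECT SUM(salary) FROM hr.employees;
```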
OBJECT-LEVEL RECOVERY
-------------------------------------------
When a datafile is lost or damaged, you can recover it using two methods:
NOTE: YOU MUST HAVE A VALID BACKUP OF THE DATABASE TO RECOVER A
DATAFILE.
Scenario: Delete the datafiles example01.dbf and users01.dbf from oradata directory:
Database location: /u01/app/oracle/oradata/raj
Method 1: Without shutting down the instance:
----------------------------------------------------------------------
Step 1: Make the datafiles offline as the SYS user:
SQL> ALTER DATABASE DATAFILE 5 OFFLINE;
SQL> ALTER DATABASE DATAFILE 4 OFFLINE;
Step 2: Connect to RMAN
[oracle@rajiv Desktop]$ export ORACLE_SID=raj
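The notes break off after connecting; restoring and recovering the offlined datafiles in RMAN would typically continue along these lines (file numbers 4 and 5 as in Step 1):

```sql
$ rman target / nocatalog

RMAN> RESTORE DATAFILE 4, 5;
RMAN> RECOVER DATAFILE 4, 5;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';
RMAN> SQL 'ALTER DATABASE DATAFILE 5 ONLINE';
```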
DATABLOCK RECOVERY
--------------------------------------
STEPS:
1. Startup the database instance:
export ORACLE_SID=raj
lsnrctl start
emctl start dbconsole
sqlplus sys/manager as sysdba
SQL> startup
SQL> SELECT file_id, block_id
2 FROM dba_extents
3 WHERE segment_name='DEPARTMENTS' and owner='HR';
FILE_ID   BLOCK_ID
-------   --------
      5        168
SQL> exit
2. Ensure that your current working directory is the Desktop, where you have copied the
mycorrupt.sh file.
Run the script from the command prompt:
[oracle@rajiv Desktop]$ ./mycorrupt.sh /u01/app/oracle/oradata/raj/example01.dbf 168 8192
Connect to SQL*Plus as the HR user and query the DEPARTMENTS table.
You must have a backup file.
[oracle@rajiv Desktop]$ dbv file=/u01/app/oracle/oradata/raj/example01.dbf
Open another terminal window:
export ORACLE_SID=raj
rman target / nocatalog
RMAN> BLOCKRECOVER DATAFILE 5 BLOCK 169,170,171;
DISASTER RECOVERY
----------------------------------------
DBID=2837611864
instance name: raj
STEPS:
Ensure that:
Managing Resources
---------------------------------
Steps:
1) Connect to OEM as SYS
TASK A
Create a consumer group called app_user.
- In OEM, click the Server tab, and then click the Consumer Groups link.
- Click Create, enter app_user as the consumer group name, and optionally enter a description.
- Ensure that the scheduling policy is set to Round Robin.
TASK B
Add the app_user and low_group consumer groups to the default_plan resource plan.
Change the level 3 CPU resource allocation percentages as below:
1) 60% for app_user
2) 40% for low_group
Steps:
- In OEM, click the Server tab, and then click the Plans link.
- Select the plan called default_plan, and then click Edit.
- Click the Modify button, move the app_user and low_group consumer groups into the plan, and click OK.
- Under level 3, enter 60 for app_user and 40 for low_group, and click Apply.
TASK C
Configure the consumer group mappings such that the HR user belongs to the app_user group
and the SCOTT user belongs to the low_group consumer group. Also, for the SCOTT user,
confirm that his oracle_user attribute has a higher priority than the client_os_user attribute.
Steps:
1) In OEM, click the Server tab, and then click Consumer Group Mappings.
2) Select oracle_user and click the "Add rule for selected type" button.
3) Select app_user as the consumer group and move HR from available users to selected users.
4) Select low_group as the consumer group, move SCOTT from available users to selected
users, and click OK.
5) Click the Priorities tab and ensure that oracle_user comes before client_os_user in the list.
TASK D
Configure consumer group mappings such that the oracle OS user belongs to the sys_group
consumer group.
Steps:
1) In OEM, click the Server tab, and then click the Consumer Group Mappings link.
2) Select client_os_user and click the "Add rule for selected type" button.
3) Select sys_group as the consumer group, move the oracle user to the selected users, and
then click OK and Apply.
TASK E
Assign the pm user to the following consumer groups
1) app_user
2) low_group
3) sys_group
Steps:
1) In OEM, click the Server tab, and then click the Users link.
2) Select the PM user, click Edit, and then click the Consumer Group Privileges tab.
3) Click Edit List, move app_user, low_group, and sys_group to the selected list, and click
OK, then Apply.
ACTIVATE THE PLAN : DEFAULT_PLAN
Steps:
1) In OEM, click the Server tab; in the Resource Manager section, click Plans.
2) Select default_plan, select Activate from the Actions drop-down list, and click Go.
3) When asked for confirmation, click Yes.
Managing Storage
=================
Create a new tablespace called TBSALERT with a 120 MB file called alert1.dbf. Make
sure that this tablespace is locally managed and uses Automatic Segment Space
Management. Do not make it autoextensible, and do not specify any thresholds for this
tablespace. Use Enterprise Manager Database Control to create it. If this tablespace already
exists in your database, drop it first, including its files.
a. In Enterprise Manager, select Server Tab > Tablespaces.
b. Click the Create button.
c. Enter TBSALERT as Name, and click the Add button in the Datafiles region.
d. Enter alert1.dbf as File Name and 120 MB as File Size, and select Reuse Existing File.
e. Click Continue, and then click OK.
4. In Enterprise Manager, change the Tablespace Space Usage thresholds of the TBSALERT
tablespace.
Set its warning level to 55 percent and its critical level to 70 percent.
a. On the Tablespaces page, select TBSALERT, click Edit, and then click Thresholds.
b. Select Specify Thresholds, and enter 55 as Warning (%) and 70 as Critical (%).
c. Optionally, click Show SQL, review the statement, and click Return.
d. Click Apply to modify the threshold values.
5. Using SQL*Plus, check the new threshold values for the TBSALERT tablespace.
a. In your SQL*Plus session, enter:
select warning_value,critical_value
from dba_thresholds
where metrics_name='Tablespace Space Usage' and
object_name='TBSALERT';
6. Select the reason and resolution columns from DBA_ALERT_HISTORY for the
TBSALERT tablespace
a. In your SQL*Plus session, enter:
select reason,resolution
from dba_alert_history
where object_name='TBSALERT';
7. Execute the lab_11_07.sh script, which creates and populates new tables in the TBSALERT
tablespace.
8. Check the fullness level of the TBSALERT tablespace by using either Database Control or
SQL*Plus. The current level should be around 60%. Wait for approximately 10 minutes,
and check that the warning level is reached for the TBSALERT tablespace.
a. In Enterprise Manager on the Tablespaces page, see Used (%).
b. Navigate to the Database home page. You should see the new alert in the Space summary
section.
c. In SQL*Plus, enter:
select sum(bytes) * 100 / 125829120 -- 125829120 bytes = 120 MB
from dba_extents
where tablespace_name='TBSALERT';
d. Enter the following command:
select reason
from dba_outstanding_alerts
where object_name='TBSALERT';
9. Execute the lab_11_09_a.sh script to add data to TBSALERT. Wait for 10 minutes and
view the critical level in both the database and Database Control. Verify that TBSALERT
fullness is around 75%.
The lab_11_09_a.sh script essentially doubles the row counts by running:
insert into employees4 select * from employees4;
commit;
insert into employees5 select * from employees5;
commit;
a. Enter the following command in a terminal window:
$ ./lab_11_09_a.sh
b. Wait for 10 minutes and view the critical level in both the database and Database
Control. Verify that TBSALERT fullness is around 75%. In SQL*Plus, enter:
select sum(bytes) *100/125829120
from dba_extents
where tablespace_name='TBSALERT';
c. In SQL*Plus, enter:
select reason, message_level
from dba_outstanding_alerts
where object_name='TBSALERT';
d. In Enterprise Manager, on the Tablespaces page, see Used (%).
e. Navigate to the Database home page. You should see the new alert in the Space
Summary region. Notice the red flag instead of the yellow one.
10. Execute lab_11_10.sh to delete a few rows:
./lab_11_10.sh
11. Now, run the Segment Advisor for the TBSALERT tablespace by using Database Control.
Make sure that you run the Advisor in Comprehensive mode without time limitation.
Accept and implement its recommendations. After the recommendations have been
implemented, check whether the fullness level of TBSALERT is below 55%.
a. In Enterprise Manager, select Administration > Tablespaces.
b. Select TBSALERT, and then select Run Segment Advisor from the Actions drop-down
list.
c. Click Go, review the objects, and click Next.
d. On the Segment Advisor: Schedule page, make sure that Schedule Type is Standard and
Start is Immediately. Click Next.
e. On the Segment Advisor: Review page, click the Submit button.
f. On Advisor Central page, click Refresh.
g. Select your segment Advisor Task and click View Result button.
h. On the Segment Advisor Task page, click the Recommendation Details button.
i. Click the Select All link, and then click the Implement button.
j. On the Shrink Segment: Options page, make sure that the Compact Segments and
Release Space option is selected.
k. Optionally, click Show SQL, review the statements, and click Return.
l. Click Implement.
m. On the Shrink Segment: Schedule page, click the Submit button.
n. On the Scheduler Jobs page, click Refresh until you see your job in the Running table.
Continue to click Refresh until you no longer see your job in the Running table. It
should take approximately two minutes to complete.
o. Navigate to the Tablespaces page and verify that the TBSALERT tablespace fullness is
now below 55%.
12. Wait for 10 minutes and check that there are no more outstanding alerts for the
TBSALERT tablespace.
a. Navigate to the Database home page. You should see Problem Tablespaces 0.
13. Retrieve the history of the TBSALERT Tablespace Space Usage metric for the last 24 hours.
a. On the Database home page, select All Metrics in the Related Links region.
b. Expand the Tablespaces Full category, and click the Tablespace Space Used (%)
link.
c. Make sure that you select Real Time: Manual Refresh from the View Data drop-down
list. Then, click the TBSALERT link.
d. This takes you to the Tablespace Space Used (%): Tablespace Name TBSALERT
page. Select Last 24 hours from the View Data drop-down list.