
Q1.

Manual plan loading can be used in conjunction with, or as an alternative to


automatic plan capture. The load operations are performed using the DBMS_SPM
package, which allows SQL plan baselines to be loaded from SQL tuning sets or
from specific SQL statements in the cursor cache. Manually loaded statements are
flagged as accepted by default.
If a SQL plan baseline is present for a SQL statement, the plan is added to the
baseline, otherwise a new baseline is created.
Hence, for our case, "SQL tuning Sets" and "Cursor Cache" are the direct sources
as indicated in the answer options.
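As a sketch of what these manual load operations look like (the SQL_ID and the SQL tuning set name below are placeholders, not values taken from the question):

DECLARE
  l_plans_loaded PLS_INTEGER;
BEGIN
  -- Load a plan for one statement directly from the cursor cache.
  l_plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
                      sql_id => 'abc123def456g');
  -- Load all plans captured in a SQL tuning set.
  l_plans_loaded := l_plans_loaded +
                    DBMS_SPM.load_plans_from_sqlset(
                      sqlset_name => 'MY_STS');
END;
/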
Q2.

The inclusion of a shutdown and startup of the database is not necessary, but Oracle suggests it as a
good way to make sure any outstanding processes are complete before starting the capture process
for "Database Capture and Replay" testing. Also, before the workload replay begins, the application
state of the replay system should be identical to the application state of the capture system.
Q3.
The default value of OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is FALSE. This parameter
determines whether the system automatically captures SQL plan baselines.
When set to TRUE, the system records a plan history for repeatable SQL statements. The first
plan captured for a specific statement is automatically flagged as accepted.
In our case, as Exhibit 2 shows, the default value FALSE is changed to TRUE. When the first two SQL
statements from Exhibit 1 are executed, the second line of Exhibit 2 is created and
the plan is automatically accepted (ACCEPTED = YES).
Alternative plans generated after this point are not used until it is verified that they
do not cause performance degradation. Plans with acceptable performance are
added to the SQL plan baseline during the evolution phase.
That is why, when the ALTER SESSION statement is executed and the third SELECT statement is
then run, its plan is added to the SQL plan baseline but as a non-accepted plan (ACCEPTED = NO).
Because this new plan is still in the non-accepted state, if the SQL query from Exhibit 1 is executed
again, it will simply use the accepted plan only, i.e. the second row shown in Exhibit 2.
Q4.
There is no need to recompile the attached triggers for table redefinition. All triggers
become invalid during the table redefinition process but are automatically revalidated
at the next DML execution.
Q5.
Encrypted tablespaces are created by specifying the ENCRYPTION clause, with an optional USING
clause to specify the encryption algorithm; the default algorithm is 'AES128'.
CREATE TABLESPACE USER_DATA
  DATAFILE '/u01/app/oracle/oradata/11g/user_data.dbf' SIZE 128K
  AUTOEXTEND ON NEXT 64K
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE(ENCRYPT);
ALTER USER test QUOTA UNLIMITED ON USER_DATA;
The ENCRYPTED column of the DBA_TABLESPACES and USER_TABLESPACES views
indicates if the tablespace is encrypted or not.
SELECT tablespace_name, encrypted FROM dba_tablespaces;
For our case, we have several options like - Data Pump, Alter Table Move, CTAS
etc.
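For example, one of those options, an ALTER TABLE ... MOVE into the encrypted tablespace, might look like this (the table and index names are illustrative only, not from the question):

ALTER TABLE scott.emp MOVE TABLESPACE user_data;
-- Moving a table leaves its indexes unusable, so rebuild them afterwards:
ALTER INDEX scott.pk_emp REBUILD;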
Q6.
Access control lists can be manipulated by using the DBMS_NETWORK_ACL_ADMIN
package.
The CREATE_ACL procedure of the DBMS_NETWORK_ACL_ADMIN package has the following parameters:
#acl - The name of the access control list XML file, in the XML DB Repository.
#description - The description of the ACL.
#principal - The first user account (case sensitive) or role being granted or denied
permissions.
#is_grant - TRUE to grant, FALSE to deny the privilege.
#privilege - If the value is 'connect', it covers access for UTL_TCP, UTL_SMTP,
UTL_MAIL and UTL_HTTP.
If the value is 'resolve', it covers UTL_INADDR name/IP resolution.
#start_date - Defaults to NULL. When specified, the ACL will only be active on
or after the specified date.
#end_date - An optional end date for the ACL.
Q7.
For better NFS performance, Oracle recommends using the Direct NFS Client that
is shipped with Oracle 11g.
The direct NFS client looks for NFS details in the following locations.
1. $ORACLE_HOME/dbs/oranfstab
2. /etc/oranfstab
3. /etc/mtab
For the client to work, you need to replace the standard libodm11.so library with the libnfsodm11.so
library, as shown below.
cd $ORACLE_HOME/lib
mv libodm11.so libodm11.so_stub
ln -s libnfsodm11.so libodm11.so

To check direct NFS client usage, use the following views.
v$dnfs_servers
v$dnfs_files
v$dnfs_channels

v$dnfs_stats
For our case, as Oracle can check the three locations mentioned above, creating oranfstab is NOT
mandatory; you only need to ensure that all required file systems are mounted and that the ODM library
is switched as shown above.
Q8.
To facilitate diagnosis and resolution of critical errors, the fault diagnosability infrastructure introduces
two concepts for Oracle Database: problems and incidents.
A problem is a critical error in the database. Critical errors manifest as internal errors, such as
ORA-00600, or other severe errors, such as ORA-07445 (operating system exception) or ORA-04031
(out of memory in the shared pool). Problems are tracked in the ADR. Each problem has a problem
key, which is a text string that describes the problem. It includes an error code (such as ORA-600)
and, in some cases, one or more error parameters.
An incident is a single occurrence of a problem. When a problem (critical error) occurs multiple times,
an incident is created for each occurrence.
Incidents are timestamped and tracked in the Automatic Diagnostic Repository (ADR).
Each incident is identified by a numeric incident ID, which is unique within the ADR. When an
incident occurs, the database:
Makes an entry in the alert log.
Sends an incident alert to Oracle Enterprise Manager (Enterprise Manager).
Gathers first-failure diagnostic data about the incident in the form of dump files (incident
dumps).
Tags the incident dumps with the incident ID.
Stores the incident dumps in an ADR subdirectory created for that incident.
Diagnosis and resolution of a critical error usually starts with an incident alert. The incident alert is
displayed on the Enterprise Manager Database Home page. You can then view the problem and its
associated incidents with Enterprise Manager or with the ADRCI command-line utility.
Q9.
A file section is defined as a contiguous range of blocks from a single file. The SECTION SIZE
parameter in the BACKUP command instructs RMAN to create a backup set where each backup piece
contains the blocks from one file section, allowing the backup of large files to be parallelized across
multiple channels.
The following example of a multisection backup sets the parallelism to 3, allowing a tablespace with a
single 900M datafile to be backed up in 3x300M sections.
# One-off configuration of device type and parallelism.
CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
# Backup large table space in 3 sections.
RUN {
BACKUP SECTION SIZE 300M TABLESPACE My_ts;
}

Some points to remember about multisection backups include:


If the section size is larger than the file size, RMAN does not use a multisection backup for the
file.
If the section size is so small that more than 256 sections would be produced, RMAN increases
the section size such that 256 sections will be created.
SECTION SIZE and MAXPIECESIZE cannot be used together.
A backup set never contains a partial datafile, regardless of whether or not it is a multisection
backup.
Q10.

Executing a SQL workload runs each of the SQL statements contained in the workload to completion.
Each SQL statement in the SQL tuning set is executed once, one at a time, separately from other
SQL statements, without preserving their initial order of execution or concurrency. To avoid a potential
impact to the database, DDL statements are not supported, and only the query portion of DML
statements is executed. During execution, SQL Performance Analyzer generates execution plans and
computes execution statistics for each SQL statement in the workload.
Depending on its size, executing a SQL workload can be time and resource intensive. When executing a
SQL workload, you can choose to generate execution plans only, without collecting execution statistics.
This technique shortens the time to run the execution and lessens the effect on system resources, but
a comprehensive performance analysis is not possible because only the execution plans will be
available during the analysis.
Q11.
Oracle 11g ASM introduced two new compatibility attributes that determine the version of the ASM
and database software that can use specific disk groups:
COMPATIBLE.ASM - The minimum version of the ASM software that can access the disk
group.
COMPATIBLE.RDBMS - The minimum COMPATIBLE database initialization parameter
setting for any database instance that uses the disk group.
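As an illustration (the disk paths and attribute values are assumptions, not from the question), the attributes can be set when the disk group is created, or advanced later; they can only be raised, never lowered.

CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/sdb1', '/dev/sdc1'
  ATTRIBUTE 'compatible.asm' = '11.1', 'compatible.rdbms' = '11.1';

ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';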
Q12.
As parallelism is not configured, three channels will not be used; the single default channel will be
used, and each backup piece (section) will be 300 MB in size.
Q13.

Multi-Column Statistics
Individual column statistics work well for estimating the selectivity of a single column in a WHERE
clause, but if the WHERE clause references multiple columns from the same table, individual column
statistics give no indication of the relationship between the columns.
The DBMS_STATS package's CREATE_EXTENDED_STATS function is used to explicitly create
multi-column statistics, and SHOW_EXTENDED_STATS_NAME returns the name of an existing column group.

SELECT DBMS_STATS.create_extended_stats(ownname   => 'SCOTT',
                                        tabname   => 'EMP',
                                        extension => '(JOB,DEPTNO)')
FROM dual;

SELECT DBMS_STATS.show_extended_stats_name(ownname   => 'SCOTT',
                                           tabname   => 'EMP',
                                           extension => '(JOB,DEPTNO)') AS col_group_name
FROM dual;

Q14.
For automatic memory management, the SGA_TARGET and PGA_AGGREGATE_TARGET act as
minimum size settings for their respective memory areas. To fully automate the memory management,
these parameters should be set to zero.
If you are using UNIX/Linux OS, you should check the current size of your shared memory file
system.
# df -k /dev/shm

The shared memory file system should be big enough to accommodate the MEMORY_TARGET and
MEMORY_MAX_TARGET values, or Oracle will throw the following error.
ORA-00845: MEMORY_TARGET not supported on this system

To adjust the size, issue the following commands:


# umount tmpfs
# mount -t tmpfs shmfs -o size=1024m /dev/shm

If you want to use a similar amount of memory to your current settings you will need to use the
following calculation.
MEMORY_TARGET = SGA_TARGET + GREATEST(PGA_AGGREGATE_TARGET, "maximum PGA
allocated")

Q15.
Direct upgrades to 11g are possible only from existing databases with versions 9.2.0.4+, 10.1.0.2+ or
10.2.0.1+. Other versions are supported only via intermediate upgrades to a supported upgrade version.
The Database Upgrade Assistant (DBUA), a GUI tool that performs all necessary prerequisite checks
and operations before upgrading the specified instances, can be started directly from the Oracle
Universal Installer (OUI) or separately after the software installation is complete.
Alternatively, you may wish to perform a manual upgrade, which involves the following steps:
Backup the database.
In UNIX/Linux environments, set the $ORACLE_HOME and $PATH variables to point to the
new 11g Oracle home.
Analyze the existing instance using the "$ORACLE_HOME/rdbms/admin/utlu111i.sql"
script.
Start the original database using the STARTUP UPGRADE command and proceed with the
upgrade by running the "$ORACLE_HOME/rdbms/admin/catupgrd.sql" script.
Recompile invalid objects.
Restart the database.
Run the "$ORACLE_HOME/rdbms/admin/utlu111s.sql" script and check the result of the
upgrade.
Troubleshoot any issues or abort the upgrade.
The "$ORACLE_HOME/rdbms/admin/utlu111i.sql" script performs pre-update validation checks
on an existing instance. The script checks a number of areas to make sure the instance is suitable for
upgrade including:

Database version.
Tablespace sizes.
Updated, renamed and deprecated initialization parameters.
Database components.
SYSAUX tablespace (it must be created if missing).
Miscellaneous information.

Q16.
SQL Performance Analyzer can compare the performance of SQL statements before and after the
change and produces a report identifying any changes in execution plans or performance of the
SQL statements.
It measures the impact of system changes both on the overall execution time of the SQL workload and
on the response time of every individual SQL statement in the workload.
Once the comparison is complete, the resulting data is generated into a SQL Performance Analyzer
report that compares the pre-change and post-change SQL performance. The SQL Performance
Analyzer report can be viewed as an HTML, text, or active report. Active reports provide in-depth
reporting using an interactive user interface that enables you to perform detailed analysis even when
disconnected from the database or Oracle Enterprise Manager.
If the performance analysis performed by SQL Performance Analyzer reveals regressed SQL
statements, then you can make changes to remedy the problem. For example, you can fix regressed
SQL by running SQL Tuning Advisor or using SQL plan baselines.
Q17.
You can use Enterprise Manager Support Workbench (Support Workbench) to investigate and report a
problem (critical error), and in some cases, resolve the problem also.

1 Check out the Critical Error Alerts in Enterprise Manager


Check the Database Home page in Enterprise Manager, and review critical error alerts.

Select an alert for which to view details, and then go to the Problem Details page.
2 See the Problem Details
Examine closely the problem details and also view a list of all incidents that were recorded for
the problem. Display findings from any health checks that were automatically run.
3 Gather Additional Diagnostic Information(Optional)
Run additional health checks or other diagnostics. For SQL-related errors, optionally invoke
the SQL Test Case Builder, which gathers all required data related to a SQL problem and
packages the information in a way that enables the problem to be reproduced at Oracle Support.
4 Create a Service Request(Optional)
Optionally create a service request with OracleMetaLink and record the service request number
with the problem information. If you skip this step, you can create a service request later, or the
Support Workbench can create one for you.
5 Package and Upload Diagnostic Data to Oracle Support
Invoke a guided workflow (a wizard) that automatically packages the gathered diagnostic data
for a problem and uploads the data to Oracle Support.
6 Track the Service Request and Implement Any Repairs
Track the service request via Support Workbench. Run Oracle advisors to help repair
SQL failures or corrupted data.
7 Close Incidents
Modify the status for one, some, or all incidents for the problem/incident to Closed.
Q18.
From the RMAN prompt, you can run the VALIDATE command to manually check for physical and
logical corruptions in database files. It can check a larger selection of objects. For example - individual
blocks can be validated by the VALIDATE DATAFILE ... BLOCK command.
When validating whole files, RMAN checks every block of the input files. If the backup validation
discovers any corrupt blocks, then RMAN updates the V$DATABASE_BLOCK_CORRUPTION view
with rows describing the corruptions.
# Checks for physical corruption of all database files.
# For example, to validate all datafiles and control files (and the
# server parameter file if one is in use), execute the following
VALIDATE DATABASE;
# Checks for physical and logical corruption of a datafile.
VALIDATE CHECK LOGICAL DATAFILE 4;
# Checks for physical corruption of all archived redo logs files.
VALIDATE ARCHIVELOG ALL;
# Checks for physical and logical corruption of a tablespace.
VALIDATE CHECK LOGICAL TABLESPACE USERS;
# Checks for physical and logical corruption of a specific backupset.
VALIDATE CHECK LOGICAL BACKUPSET 3;
# Checks for physical and logical corruption of the controlfile.

VALIDATE CHECK LOGICAL CURRENT CONTROLFILE;

The RESTORE VALIDATE and BACKUP VALIDATE commands perform the same checks as the
VALIDATE command for the files targeted by the backup or restore command, but they don't actually
perform the specified backup or restore operation; rather, they allow you to check the integrity of a
backup or restore operation before actually performing it.
# It Checks for physical corruption of files to be restored.
RESTORE VALIDATE DATABASE;
# It Checks for physical and logical corruption of files to be restored.
RESTORE VALIDATE CHECK LOGICAL DATABASE;
# It Checks for physical corruption of files to be backed up.
BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
# It Checks for physical and logical corruption of files to be backed up.
BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;

Q19.
A Flashback Data Archive (Oracle Total Recall) provides the ability to track and store all transactional
changes to a table over its lifetime. An individual flashback archive consists of one or more
tablespaces, or parts of tablespaces. Each flashback archive has a name, retention period and a quota on
each associated tablespace.
The database can have multiple flashback data archives, but only a single default archive.
When a DML transaction commits an operation on a flashback archive enabled table, the Flashback
Data Archiver (FBDA) process stores the pre-image of the rows into a flashback archive, along with
metadata of the current rows. The FBDA process is also responsible for managing the data within the
flashback archives, such as purging data beyond the retention period.
The following script creates a new tablespace, then creates two flashback data archives using the
CREATE FLASHBACK ARCHIVE command.
CREATE TABLESPACE TEST_ts
DATAFILE '/u01/app/oracle/oradata/LIVEDB/test_01.dbf'
SIZE 100M AUTOEXTEND ON NEXT 100M;
CREATE FLASHBACK ARCHIVE DEFAULT TEST_1year TABLESPACE TEST_ts
QUOTA 100G RETENTION 1 YEAR;
CREATE FLASHBACK ARCHIVE TEST_4year TABLESPACE TEST_ts
RETENTION 4 YEAR;

There are some restrictions for flashback archives:


The tablespaces used for a flashback archive must use local extent management
and automatic segment space management.
The database must use automatic undo management.

Q20.
To use RMAN DUPLICATE with the FROM ACTIVE DATABASE clause, there is no need to copy backup
sets, because the ACTIVE DATABASE keyword makes RMAN copy the live database over the network.
You only need to ensure that the target (source) database is open, the auxiliary (duplicate) instance is
started in NOMOUNT, you are connected to both the target and auxiliary instances as SYSDBA, the target
database is in ARCHIVELOG mode, and the password file of the auxiliary instance has the same SYS
password as the target.
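A minimal sketch of such a duplication (the connect strings and the duplicate database name are hypothetical):

RMAN> CONNECT TARGET sys@prod
RMAN> CONNECT AUXILIARY sys@dupdb
RMAN> DUPLICATE TARGET DATABASE TO dupdb
        FROM ACTIVE DATABASE
        SPFILE
        NOFILENAMECHECK;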
Q21.
A failure group is a subset of the disks in a disk group, which could fail at the same time because they
share the hardware. There are always failure groups even if they are not explicitly created. If you do not
specify a failure group for a disk, then Oracle automatically creates a new failure group containing just
that disk. A normal redundancy disk group must contain at least two failure groups. A high redundancy
disk group must contain at least three failure groups.
If a disk goes offline during a rolling upgrade, the timer is not started until after the rolling upgrade is
complete.
Depending on the redundancy level of a disk group and how you define failure groups, the failure of
one or more disks could result in either of the following:
The disks are first taken offline and then automatically dropped. In this case, the disk group
remains mounted and serviceable. In addition, because of mirroring, all of the disk group data
remains accessible. After the disk drop operation, ASM performs a rebalance to restore full
redundancy for the data on the failed disks.
The entire disk group is automatically dismounted, which means loss of data accessibility.
During transient disk failures within a failure group, ASM keeps track of the changed extents that
need to be applied to the offline disk. Once the disk is available, only the changed extents are written
to resynchronize the disk, rather than overwriting the contents of the entire disk. This can speed up the
resynchronization process considerably. This is called fast mirror resync.
Fast mirror resync is only available when the disk groups compatibility attributes are set to 11.1 or
higher.
Q22.
The ADR retention policy tells Oracle how long to keep ADR incident data:
The incident metadata retention policy (default is one year).
The incident files and dumps retention policy (default is one month).
The ADR retention can be controlled with ADRCI.
By default the location of DIAGNOSTIC_DEST is $ORACLE_HOME/log, but if ORACLE_BASE is set
in the environment then DIAGNOSTIC_DEST is set to $ORACLE_BASE.

adrci> show control                        # shows the current settings for the retention policy
adrci> set control (SHORTP_POLICY = 360)   # sets the short-term retention policy (in hours)
adrci> set control (LONGP_POLICY = 4380)   # sets the long-term retention policy (in hours)
Note: The MMON background process purges expired ADR content automatically, based on a control
date established for each database instance. There are two time attributes (expressed in hours) used to
manage the retention of information in the ADR:
LONGP_POLICY (long term) defaults to 365 days (8760 hours) and relates to things like incidents and
Health Monitor warnings.
SHORTP_POLICY (short term) defaults to 30 days (720 hours) and relates to things like trace and core
dump files.
Q.23
Evolving a SQL plan baseline is the process by which the optimizer verifies whether non-accepted plans
in the baseline should be accepted. Manually loaded plans are automatically marked as accepted, so they
do not need to be evolved, whereas plans captured automatically are evolved using the
EVOLVE_SQL_PLAN_BASELINE function, which returns a CLOB reporting its results.

SET LONG 10000
SELECT DBMS_SPM.evolve_sql_plan_baseline(sql_handle => 'SYS_SQL_7b76323ad80640b9')
FROM   dual;

Q.24
As per metalink notes : ( Doc ID 761111.1 )
How does online patching differ from traditional patching?
1. Online patches are applied to and removed from a running instance, whereas traditional patches
require the instance to be shut down.
2. Online patches utilize the oradebug interface to install and enable the patches, whereas traditional
diagnostic patches are linked into the "oracle" binary.
3. Online patches do not require the "oracle" binary to be relinked, whereas traditional diagnostic patches
do.
4. There is additional memory consumption and process start time penalty for online patches.
A regular RDBMS patch is comprised of one or more object (.o) files and/or libraries (.a files).
Installing a regular patch requires shutting down the RDBMS instance, re-linking the oracle binary, and
restarting the instance; uninstalling a regular patch requires the same steps.
On the other hand, an online patch is a special kind of patch that can be applied to a live, running
RDBMS instance. An online patch contains a single shared library; installing an online patch does not
require shutting down the instance or relinking the oracle binary. An online patch can be installed/uninstalled using Opatch (which uses oradebug commands to install/uninstall the patch).
Q.25
From Oracle 11g onwards, in addition to saving storage space, compression can result in increased I/O

performance and reduced memory use in the buffer cache but the compression also incurs a CPU
overhead, so it won't be of benefit to everyone.
The compression clause can be specified at the tablespace, table or partition level with the following
options:
NOCOMPRESS - The default action when no compression clause is specified.
COMPRESS - Suitable for data warehouse systems. Compression is enabled on the
table or partition during direct-path inserts only.
COMPRESS FOR DIRECT_LOAD OPERATIONS - Has the same effect as the
simple COMPRESS keyword.
COMPRESS FOR ALL OPERATIONS (suitable for OLTP) - Enables compression for all operations,
including regular DML statements, but requires the COMPATIBLE initialization parameter to be set to
11.1.0 or higher.
COMPRESS FOR OLTP - Introduced in 11gR2; COMPRESS FOR ALL OPERATIONS is deprecated
in its favour.
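A minimal sketch, using a hypothetical table, of enabling OLTP-style compression at create time and on an existing table:

CREATE TABLE orders_hist (
  order_id   NUMBER,
  order_date DATE
) COMPRESS FOR ALL OPERATIONS;

-- Compress an existing (hypothetical) table by rebuilding it:
ALTER TABLE orders_hist MOVE COMPRESS FOR ALL OPERATIONS;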
Q.27
Flashback Transaction allows the changes made by a transaction to be undone, optionally including
changes made by dependent transactions.
To configure the database for the Flashback Transaction feature,
1. Enable ARCHIVELOG:
ALTER DATABASE ARCHIVELOG;
2. Open at least one archive log:
ALTER SYSTEM ARCHIVE LOG CURRENT;
3. If not already done, enable minimal and primary key supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
If you want to track foreign key dependencies, enable foreign key supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
If you have very many foreign key constraints, enabling foreign key supplemental logging might not be
worth the performance penalty.
Q28.
Multi-section Backup
A feature of RMAN in Oracle 11g, multi-section backup supports intrafile parallel backup, which
allows one data file to be divided into user-specified sections so that each section can be backed up in
parallel on separate channels.
This method is best for databases composed of a few data files that are significantly larger than the
rest, versus a database composed of a large number of small data files. It is also optimal for databases
where there are fewer large data files than available tape drives. An initial recommendation for the
section size is (average data file size / number of channels). Channels can be incrementally added to
scale up backup performance, depending on hardware and tape drive limits.
Q29.
In Oracle 11g, the results of SQL queries can be cached in the SGA . Systems with large amounts of
memory can take advantage of it to improve response times of repetitive queries. The result cache
stores the results of SQL queries and PL/SQL functions in an area called Result Cache Memory
in the shared pool. When these queries and functions are executed repeatedly, the results are retrieved
directly from the cache memory.
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name LIKE 'result_cache_mode%';

NAME                 VALUE    ISDEFAULT
-------------------- -------- ---------
result_cache_mode    MANUAL   TRUE
Here, the result cache can be enabled in three ways: via the RESULT_CACHE hint, ALTER SESSION, or
ALTER SYSTEM. MANUAL is the default value, which means that you need to explicitly request caching
via the RESULT_CACHE hint.
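For example, with RESULT_CACHE_MODE left at MANUAL, a query opts in with the hint (the table below is the standard HR demo schema, used purely for illustration):

SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM   hr.employees
GROUP  BY department_id;

-- The cache contents and memory usage can then be inspected with:
SET SERVEROUTPUT ON
EXEC DBMS_RESULT_CACHE.memory_report;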
Q30.
During the execution of SQL(s) in the desired SQL Tuning Set, each of the SQL statements runs
only once and the execution plan along with the execution statistics are computed for each of them.
Q31.
The Database Replay feature is mainly used for comparing performance and throughput before and
after an OS or database upgrade, or a database storage change such as moving from non-ASM to ASM
file systems.
Q32.
The optimizer will be able to use pending statistics if the
OPTIMIZER_USE_PENDING_STATISTICS (default is FALSE) initialization parameter
is set to TRUE. Setting this parameter to TRUE at session level allows you to test the impact of
pending statistics before publishing them.
ALTER SESSION SET OPTIMIZER_USE_PENDING_STATISTICS=TRUE;

Usage of DBMS_STATS:

-- Publish all pending statistics.
EXEC DBMS_STATS.publish_pending_stats(NULL, NULL);

-- Publish pending statistics for a specific object.
EXEC DBMS_STATS.publish_pending_stats('SCOTT','DEPT');

-- Delete pending statistics for a specific object.
EXEC DBMS_STATS.delete_pending_stats('SCOTT','DEPT');

Q33.
Database Resource Manager is automatically enabled during the maintenance windows because it keeps
an eye on how many system resources are being used and prevents the automated maintenance tasks
from consuming excessive system resources.
Q34.
Automatic SQL Tuning, which runs as part of the AUTOTASK framework, creates its SQL tuning set
from the top SQL statements identified in AWR.
Q35.
A flashback data archive is essentially an extended store of undo-style history information that allows
some logical flashback operations to extend far back into the past.
An individual flashback archive consists of one or more tablespaces, or parts of tablespaces.
Each flashback archive has a name, retention period and a quota on each associated
tablespace. The database can have multiple flashback data archives, but only a single default
archive.
Q36.
The LIST FAILURE command (not available in RAC) displays failures, if any, with a status of
OPEN and a priority of CRITICAL or HIGH in order of importance; if no such failures exist,
it lists LOW priority failures.
The ADVISE FAILURE command provides repair advice for failures listed by the LIST
FAILURE command, as well as closing all open failures that are already repaired.
If manual repair actions are produced by the above command, we should attempt them first, as
they are likely to be less disruptive. If manual repair actions aren't present, or they do not fix the
problem, you can use the automated repair option.
The REPAIR FAILURE command applies the repair scripts produced by the ADVISE
FAILURE command but it would be a better idea, if you check by using the PREVIEW option
that lists the contents of the repair script without applying it.
Q37.
As a part of enhancement in Temporary Tablespace in Oracle 11g, introduced a new view called
DBA_TEMP_FREE_SPACE that displays information about temporary tablespace usage.
SQL> SELECT * FROM dba_temp_free_space;

TABLESPACE_NAME   TABLESPACE_SIZE ALLOCATED_SPACE FREE_SPACE
----------------- --------------- --------------- ----------
TEMP                     78623222        78623222   77623123

1 row selected.

Keeping an eye on this information, it would be easy to perform an online shrink of a temporary
tablespace using the ALTER TABLESPACE command.
SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 60M;

For all of the above, the only requirement is that the tablespace must be locally managed; there are no
other restrictions.
The shrink can also be performed on a specific tempfile using the TEMPFILE clause.The KEEP clause
specifies the minimum size of the tablespace or tempfile. If this is omitted, the database will shrink the
tablespace or tempfile to the smallest possible size.
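For instance, a shrink of a single tempfile (the file path here is illustrative) could look like:

ALTER TABLESPACE temp SHRINK TEMPFILE '/u01/app/oracle/oradata/11g/temp01.dbf' KEEP 40M;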
Q38.
A Flashback Data Archive (Oracle Total Recall) provides the ability to track and store all
transactional changes to a table over its lifetime. An individual flashback archive consists of one or
more tablespaces, or parts of tablespaces. Each flashback archive has a name, retention period and a
quota on each associated tablespace.
Q39.
To recompile all invalid objects in the Database, utlrp.sql and utlprp.sql scripts are provided by
Oracle, mainly run after major database changes such as upgrades or patches. As usual, they are
located in the $ORACLE_HOME/rdbms/admin directory.
Q40.
An ASM instance does not store any user data but keeps storage metadata, such as the disk group
names and directories, in the disk headers. So if a disk crashes and the disk headers get corrupted,
it would be a big issue.
In Oracle 11g, Oracle introduced an extended feature of the ASMCMD utility to provide ASM
metadata backup and restore functionality via the md_backup and md_restore commands; this
functionality is called ASM Metadata Backup and Restore (AMBR).
You need to use md_backup for taking ASM metadata backup for a disk group and md_restore to
restore the ASM metadata for a disk group. This way, AMBR functionality is used to re-create an
ASM disk group with an identical template and directory structure using the backup of ASM metadata.
Q41.
To customize the ADDM analysis, for example by filtering out certain conditions, directives were
introduced with Oracle 11g.
Creation of Directives
Insert finding directive - DBMS_ADDM.INSERT_FINDING_DIRECTIVE
Creates an ADDM directive that limits the ADDM report to specific finding classification types.
Insert parameter directive - DBMS_ADDM.INSERT_PARAMETER_DIRECTIVE
Creates an ADDM directive that prevents ADDM from suggesting actions to alter the value of a specific
system parameter.
Insert segment directive - DBMS_ADDM.INSERT_SEGMENT_DIRECTIVE
Creates an ADDM directive that causes ADDM to exclude actions related to a specific owner, segment,
sub-segment, or a specific object number.
Insert SQL directive - DBMS_ADDM.INSERT_SQL_DIRECTIVE
Creates an ADDM directive that causes ADDM to exclude SQL statements based on specific criteria.
Criteria may include the SQL IDs, a minimum number of active sessions, or a minimum response time
in microseconds.
Removal of Directives
Delete finding directive - DBMS_ADDM.DELETE_FINDING_DIRECTIVE
Delete parameter directive - DBMS_ADDM.DELETE_PARAMETER_DIRECTIVE
Delete segment directive - DBMS_ADDM.DELETE_SEGMENT_DIRECTIVE
Delete SQL directive - DBMS_ADDM.DELETE_SQL_DIRECTIVE

For our case, the ADDM report will not include CPU usage findings when CPU usage is less than 90%.
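As a sketch of how a finding directive is created and then picked up by an ADDM task (the directive name, finding name and snapshot numbers below are illustrative, not taken from the exhibit):

VAR tname VARCHAR2(60)
BEGIN
  -- A NULL task name makes the directive apply to all subsequently created ADDM tasks.
  DBMS_ADDM.insert_finding_directive(
    task_name           => NULL,
    dir_name            => 'My undersized SGA directive',
    finding_name        => 'Undersized SGA',
    min_active_sessions => 2,
    min_perc_impact     => 10);
  :tname := 'my_instance_analysis_task';
  DBMS_ADDM.analyze_inst(:tname, 101, 102);   -- begin and end snapshot IDs
END;
/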
Q42.
Review the Question 6.

After creating the new ACL using the DBMS_NETWORK_ACL_ADMIN.CREATE_ACL procedure, access
control lists are assigned to networks via the ASSIGN_ACL procedure, as shown in the code below.
acl - The name of the access control list XML file.
host - The hostname, domain, IP address or subnet to be assigned. Hostnames are case sensitive,
and wildcards are allowed for IP addresses and domains.
lower_port - Defaults to NULL. Specifies the lower port range for the 'connect' privilege.
upper_port - Defaults to NULL. If the lower_port is specified and the upper_port is NULL, it is
assumed the upper_port matches the lower_port.
# Creating ACL
BEGIN
  DBMS_NETWORK_ACL_ADMIN.create_acl (
    acl         => 'NewACL_File.xml',
    description => 'New ACL Creation',
    principal   => 'SCOTT',
    is_grant    => TRUE,
    privilege   => 'connect',
    start_date  => SYSTIMESTAMP,
    end_date    => NULL);
  COMMIT;
END;
/

# Assigning the ACL
BEGIN
  DBMS_NETWORK_ACL_ADMIN.assign_acl (
    acl        => 'NewACL_File.xml',
    host       => '192.168.21.1',
    lower_port => 80,
    upper_port => NULL);
  DBMS_NETWORK_ACL_ADMIN.assign_acl (
    acl        => 'NewACL_File.xml',
    host       => '10.1.50.*',
    lower_port => NULL,
    upper_port => NULL);
  COMMIT;
END;
/

Q43.
A SQL tuning task should be associated with a workload, e.g.:
dbms_sqltune.create_sqlset(<sql tuning set name>, 'generating workload from cursor cache');
dbms_sqltune.load_sqlset(<sql tuning set name>, <cursor name>);
dbms_advisor.add_sqlwkld_ref(<task_name>, <sql tuning set name>, 1);
... etc.
Q44.
lsdsk Command
It lists the disks that are visible to ASM (using the V$ASM_DISK_STAT and V$ASM_DISK views).
Syntax: lsdsk [-ksptagcHI] [-d disk_group_name] [pattern]
The k, s, p, and t flags modify how much information is displayed for each disk. If any combination of the
flags is specified, the output shows the union of the attributes associated with each flag. This command can
run in both connected and non-connected mode. The -I option forces non-connected mode.
In connected mode, ASMCMD uses dynamic views to retrieve disk information, whereas in non-connected
mode, ASMCMD scans disk headers to retrieve disk information, using an ASM disk string to restrict the
discovery set.
If lsdsk is executed without any flag, it displays the PATH column of the V$ASM_DISK view.
-k , displays the TOTAL_MB, FREE_MB, OS_MB,NAME, FAILGROUP, LIBRARY, LABEL, UDID, PRODUCT,
REDUNDANCY, and PATH columns of the V$ASM_DISK view.
-s, displays the READS, WRITES, READ_ERRS, WRITE_ERRS, READ_TIME, WRITE_TIME, BYTES_READ,
BYTES_WRITTEN, and the PATH columns of the V$ASM_DISK view.
-p, displays the GROUP_NUMBER, DISK_NUMBER, INCARNATION, MOUNT_STATUS, HEADER_STATUS,
MODE_STATUS, STATE, and the PATH columns of the V$ASM_DISK view.
-t , displays the CREATE_DATE, MOUNT_DATE, REPAIR_TIMER, and the PATH columns of the V$ASM_DISK
view.
-g, selects from GV$ASM_DISK_STAT, or from GV$ASM_DISK if the -c flag is also specified.
GV$ASM_DISK.INST_ID is included in the output.

-c, selects from V$ASM_DISK, or from GV$ASM_DISK if the -g flag is also specified. This option is ignored if
the ASM instance is version 10.1 or earlier.
-H, suppresses column headings.
-I,scans disk headers for information rather than extracting the information from an ASM instance. This option
forces the non-connected mode.
-d, restricts results to only those disks that belong to the group specified by disk_group_name.

In our case, it is running in non-connected mode, so lsdsk -I -d DATA
will display the information by scanning the disk headers of the DATA disk group.

Q45.
An AWR baseline is a set of AWR snapshots collected within a certain time frame, which can then be
referenced to compare performance during another time period of interest. The snapshots in an AWR
baseline are grouped to provide a set of baseline values that change over time.
Arbitrary alert thresholds that remain identical throughout the day are not optimal, because they will
likely miss the natural peaks and troughs in the workload of a real production database.
On the other hand, baselines are ideal for setting time-dependent alert thresholds because they let the
database compare present performance with baseline data from a similar time period.
In Oracle Database 11g, we can collect two types of baselines: static baselines and moving window
baselines.
Baselines help us to set alert thresholds, monitor performance, and compare advisor reports.
A static baseline can be either a single baseline that has been collected over a single fixed time period
(e.g- from Feb 2, 2014 at 11:00 A.M. to Feb 2, 2014 at 1:00 P.M.) or a repeating baseline collected over
a repeating time period (e.g - every first Wednesday in a month from 11:00 A.M. to 1:00 P.M. for the
year 2014).
The moving window baseline captures the data over a window that keeps moving over time. You can
create a moving window AWR baseline instead of a mere fixed baseline corresponding to a fixed,
contiguous past period in time. A moving window baseline encompasses AWR data during the AWR
retention period, which is, by default, eight days. To set up the baseline metric thresholds, you first
need to collect the baseline statistics.
Q46.
NUMTOYMINTERVAL is a SQL function used to convert a NUMBER to an INTERVAL YEAR TO
MONTH literal. As per the CREATE TABLE syntax in the question, it will create two range partitions of
varying ranges.
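A sketch of the kind of CREATE TABLE statement this applies to, using a hypothetical table that is interval-partitioned by month (this is illustrative only, not the DDL from the question):

CREATE TABLE sales_hist (
  sale_id   NUMBER,
  sale_date DATE NOT NULL
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p_2013 VALUES LESS THAN (TO_DATE('01-01-2014', 'DD-MM-YYYY')),
  PARTITION p_2014 VALUES LESS THAN (TO_DATE('01-01-2015', 'DD-MM-YYYY'))
);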
Q47.
To customize the ADDM analysis, for example by filtering out certain conditions, directives were
introduced with Oracle 11g.
Insert segment directive - DBMS_ADDM.INSERT_SEGMENT_DIRECTIVE
Creates an ADDM directive that causes ADDM to exclude actions related to a specific owner, segment,
sub-segment, or a specific object number.
To use an ADDM directive, you need to initialize the task. In our case, the task has not been reset to its
initial state (see the exhibit).
Q48.

Manual plan loading can be used in conjunction with, or as an alternative to


automatic plan capture. The load operations are performed using the DBMS_SPM

package, which allows SQL plan baselines to be loaded from SQL tuning sets or
from specific SQL statements in the cursor cache. Manually loaded statements
are flagged as accepted by default.
If a SQL plan baseline is present for a SQL statement, the plan is added to the
baseline, otherwise a new baseline is created.
Q49.
If you look at Exhibit 2 and add up the MEAN_JOB_DURATION values, the total is clearly more than the
maintenance window duration (20 minutes) shown in Exhibit 1, so you need to increase the window
duration and the resource percentage.
See the Oracle documentation:
http://docs.oracle.com/cd/B28359_01/server.111/b28320/statviews_3076.htm#REFRN23591
MEAN_JOB_DURATION --->Average elapsed time for a job for this client
RESOURCE_PERCENTAGE--->Percentage of maintenance resources for high priority maintenance tasks for
this client
Q50. In Oracle 11g, reference partitioning is a new feature which allows tables related by foreign keys to
be logically equi-partitioned. The child table is partitioned using the same partitioning key as the parent
table without duplicating the key columns.
To create a reference-partitioned table, you specify a PARTITION BY REFERENCE clause in the
CREATE TABLE statement. If no explicit tablespace is specified for the reference-partitioned table's
partitions, they are collocated with the corresponding partitions of the parent table.
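A minimal sketch with hypothetical parent and child tables (note the foreign key column must be NOT NULL):

CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE NOT NULL
)
PARTITION BY RANGE (order_date)
(
  PARTITION p_2013 VALUES LESS THAN (TO_DATE('01-01-2014', 'DD-MM-YYYY')),
  PARTITION p_2014 VALUES LESS THAN (TO_DATE('01-01-2015', 'DD-MM-YYYY'))
);

CREATE TABLE order_items (
  item_id  NUMBER,
  order_id NUMBER NOT NULL,
  CONSTRAINT fk_order_items FOREIGN KEY (order_id) REFERENCES orders (order_id)
)
PARTITION BY REFERENCE (fk_order_items);   -- child inherits the parent's partitioning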
Q51.

Oracle SecureFiles use variable chunk sizes that can be as large as 64 MB and by storing these
chunks next to one another, Oracle also minimizes fragmentation.
Advanced SecureFile Compression not only results in significant savings in storage but also improved
performance by reducing IO, buffer cache requirements, redo generation and encryption overhead. If
the data is already compressed or if significant benefit could not be achieved, then SecureFiles will
automatically turn off compression for such columns. Compression is performed on the server-side
and allows for random reads and writes to SecureFile data.
Q52.
Executing DDL commands requires exclusive locks on internal structures. If these locks are not
available, the command returns an "ORA-00054: resource busy" error. When trying to modify objects
that are accessed frequently, this can be a challenging issue. To overcome this, Oracle 11g introduced
the DDL_LOCK_TIMEOUT parameter, which can be set at instance level (initialization parameters )
or session level using the ALTER SYSTEM and ALTER SESSION commands respectively.
The DDL_LOCK_TIMEOUT parameter signifies the number of seconds a DDL command should
wait for the locks to become available before throwing the resource busy error message.
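For example (the table name is hypothetical), making a DDL statement wait up to 30 seconds for the lock instead of failing immediately:

ALTER SESSION SET ddl_lock_timeout = 30;
ALTER TABLE app_owner.busy_table ADD (notes VARCHAR2(100));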
Q53.
Existing passwords remain case-insensitive until they are changed (for example, after an upgrade to 11g,
pre-existing passwords are not case sensitive until they are reset).
Q54.
The SQL Access Advisor generates recommendations about indexes (bitmap, function-based, and
B-tree indexes) and also recommends how to optimize materialized views so that they can be fast
refreshed and take advantage of general query rewrite.
In Oracle 11g,
1. The SQL Access advisor now includes advice on partitioning of tables and indexes that may
improve performance.

2. The original workload manipulation has been deprecated and replaced by SQL tuning sets.
Q55.
The SQL management base (SMB) is a part of the data dictionary that is located in the SYSAUX
tablespace. It stores statement log, plan histories, SQL plan baselines, and SQL profiles.
We can also add plans manually to the SMB for a set of SQL statements. This feature is very useful
when upgrading Oracle Database from a pre-11g version, since it helps to minimize plan regressions
resulting from the use of a new optimizer version.
As of 11g, Automatic SQL Tuning Tasks run every night via an automatic task job if you choose to
allow the server to create SQL Profiles and implement them automatically as well.
Q56.
Alter Diskgroup SQL statement is valid only if you issue this statement from within the Oracle ASM
instance, not from a normal database instance.
The disk_offline_clause is used to take one or more disks offline.
By default, Automatic Storage Management drops a disk shortly after it is taken offline and it can be
delayed by specifying the timeout_clause, which gives you the opportunity to repair the disk and
bring it back online.
You can specify the timeout value in units of minute or hour. If you omit the unit, then the default is
hour.
To learn how much time remains before Automatic Storage Management will drop an offline disk,
query the repair_timer column of V$ASM_DISK.
This clause overrides any previous setting of the disk_repair_time attribute.
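An illustrative sequence (the disk group and disk names are assumptions, not from the question):

ALTER DISKGROUP data OFFLINE DISK data_0001 DROP AFTER 30m;

-- Check how long remains before the disk is dropped:
SELECT name, repair_timer FROM v$asm_disk WHERE name = 'DATA_0001';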
Q57.
As per Oracle Documentation ORA-04031: unable to allocate nn bytes of shared memory
Cause: More shared memory is needed than was allocated in the shared pool.
Action: If the shared pool is out of memory,
either
use the dbms_shared_pool package to pin large packages, reduce your use of shared memory,
or
increase the amount of available shared memory by increasing the value of the INIT.ORA parameters
"shared_pool_reserved_size" and "shared_pool_size".
If the large pool is out of memory, increase the INIT.ORA parameter "large_pool_size".
In Oracle 11g, Automatic memory management is introduced and it can be configured by using two
new initialization parameters:
MEMORY_TARGET: The total amount of shared memory available(SGA + PGA ) for Oracle
to use. This parameter is dynamic, so the total amount of memory available to Oracle can be
increased or decreased, provided it does not exceed the MEMORY_MAX_TARGET size.
MEMORY_MAX_TARGET: The maximum size the MEMORY_TARGET can be increased to
without an instance restart. If the MEMORY_MAX_TARGET is not specified, it defaults to
MEMORY_TARGET setting.
In automatic memory management (AMM), the SGA_TARGET and PGA_AGGREGATE_TARGET
parameters indicate the minimum size settings for their respective memory areas.
To allow Oracle to take full control of the memory management, you need to set SGA_TARGET and
PGA_AGGREGATE_TARGET to zero.
CONN / AS SYSDBA
ALTER SYSTEM SET MEMORY_MAX_TARGET=10G SCOPE=SPFILE;--static
-- Set the dynamic parameters now.
ALTER SYSTEM SET MEMORY_TARGET=7G SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=0 SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET=0 SCOPE=SPFILE;
-- Restart instance.
SHUTDOWN IMMEDIATE;
STARTUP;

Q.58
In Oracle 11g, the new ABP (Autotask Background Process):
- converts automatic tasks into Scheduler jobs
- determines the jobs that need to be created for each maintenance task
- stores task execution history in the SYSAUX tablespace
- does NOT execute maintenance tasks
By default, AutoTask schedules optimizer statistics collection, the SQL Tuning Advisor,
and the Automatic Segment Advisor.
Memory Monitor (MMON) monitors SGA and performs various manageability related background
tasks.
Q.59
In an RMAN session, the recommended workflow should be
# LIST FAILURE to display failures,
then
# ADVISE FAILURE to display repair options,
and
# REPAIR FAILURE to fix the failures.

ADVISE FAILURE Displays the repair options for the specified failures.
This command prints a summary of the failures identified by the Data Recovery Advisor and
implicitly closes all open failures that are already fixed.
Syntax
ADVISE FAILURE [{ALL | CRITICAL | HIGH | LOW | failure_no [,failure_no]}]
[EXCLUDE FAILURE failure_no [,failure_no] ]
ADVISE FAILURE can be used with no options only when a LIST FAILURE command
was previously executed in the current session.
It will warn you about any new failures recorded between the execution of LIST FAILURE
and ADVISE FAILURE.
Q60.

V$DIAG_INFO describes the state of Automatic Diagnostic Repository (ADR) functionality using
NAME=VALUE pairs.
Name : Identifies a piece of data that reflects the state of ADR, such as whether it is enabled, where
the directories and files are located, and how many ongoing issues (incidents and problems) there are.
As per the output in the question, all statements are true except the one about the text alert log location:
it is located in the directory indicated by the Diag Trace entry, not the Diag Alert location (which holds
the XML version of the alert log).
Q61.
DB_SECUREFILE specifies whether or not to treat LOB files as SecureFiles.
It's modifiable via both ALTER SESSION & ALTER SYSTEM.
Syntax: DB_SECUREFILE = { NEVER | PERMITTED | ALWAYS | IGNORE }
NEVER - Any LOBs that are specified as SecureFiles are created as BasicFile LOBs.
All SecureFile-specific storage options and features (like- compress, encrypt,
deduplicate) will cause an exception.
The BasicFile LOB defaults will be used for storage options not specified.
PERMITTED-(default)-LOBs are allowed to be created as SecureFiles.
ALWAYS- All LOBs created in the system are created as SecureFile LOBs. If the LOB
is not created in an Automatic Segment Space Managed tablespace, then an error
will occur. Any BasicFile LOB storage options are ignored. The SecureFile defaults
will be used for all storage options not specified.
IGNORE - The SECUREFILE keyword and all SecureFile options are ignored.
Q62.
By default, ASM drops a disk in 3.6 hours after it is taken offline and we can use
DISK_REPAIR_TIME attribute to delay the drop operation by specifying a time
interval to repair the disk and bring it back online.
The time can be specified in units of minutes (m or M) or hours(default) (h or H).
If a disk is offlined by ASM because of an I/O (write) error or is explicitly
offlined using the ALTER DISKGROUP... OFFLINE statement without the DROP AFTER
clause, then the value that is specified for the DISK_REPAIR_TIME attribute for
the disk group is used.
If this attribute value is changed with the ALTER DISKGROUP... SET ATTRIBUTE
'disk_repair_time' statement before this offlined disk is dropped, then the new
(current) value of the attribute is used by the ASM disk offline functionality but
the elapsed time is not reset.
Q63.
The syntax used in the question makes a long-term (tagged) backup with a restore point, i.e. it
takes a backup of the archived log files along with the datafiles and also creates a normal restore
point, provided the same restore point does not already exist.
The accompanying log backup contains just those archived logs that are needed to restore this
backup to a consistent state. The database performs an online redo log switch to archive the redo
that is in the current online logs and is necessary to make this new backup consistent.
A recovery catalog is required for KEEP FOREVER, but is not required for any
other KEEP option like KEEP UNTIL TIME 'SYSDATE+365' that keeps the backup for
365 days and after a year has passed, the backup becomes obsolete regardless of
the backup retention policy settings.
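A sketch of the kind of syntax being described (the tag, keep time and restore point name are illustrative, not the question's exact command):

BACKUP DATABASE
  TAG 'yearly_bkp'
  KEEP UNTIL TIME 'SYSDATE+365'
  RESTORE POINT yearly_rp;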
Q64.
If you run the VALIDATE DATABASE command, it will validate all datafiles and

control files (and the server parameter file if one is in use) and after doing
the data integrity checks, it logs information about physical block corruptions
( and optionally logical block corruptions) of database files and backups in the
V$DATABASE_BLOCK_CORRUPTION view and the Automatic Diagnostic Repository as one or
more failures.
Q65.
The ADVISE FAILURE command is capable of presenting both manual and automatic
repair options. Data Recovery Advisor (DRA) can categorize manual actions as
either mandatory or optional.
In some cases, the situation demands manual actions only. For example, suppose there are no
backups available for a lost control file. The only option here is the manual action of re-creating
the control file with the CREATE CONTROLFILE statement. Data Recovery Advisor will present
this manual action as mandatory because no automatic repair is available.
We can consider another scenario where the manual action would be optional only. For instance,
say RMAN backups exist for a missing datafile. In this
case, the REPAIR FAILURE command can perform the repair automatically by restoring
and recovering the datafile. An optional manual action would be to restore the
datafile if it was unintentionally renamed or moved.
Data Recovery Advisor suggests optional manual actions if they might prevent a
more extreme form of repair such as datafile restore and recovery.
DRA performs feasibility checks before it recommends an automated repair. For
example, Data Recovery Advisor checks whether all backups and archived redo
logs needed for media recovery are present and consistent.
Data Recovery Advisor may need specific backups and archived redo logs.
If the files needed for recovery are not available, then recovery will not be
possible.
Q66.
STARTUP UPGRADE starts the database in OPEN UPGRADE mode and sets system
initialization parameters to specific values required to enable database upgrade
scripts to be run. UPGRADE should only be used when a database is first started
with a new version of the Oracle Database Server.
And of course, you must be connected to the database as SYSOPER or SYSDBA. You
cannot be connected to a shared server via a dispatcher.
Q67.
System partitioning provides the well-known benefits of partitioning (scalability,
availability, and manageability), but the partitioning and actual data placement
are controlled by the application.
For insert operations into a system partitioned table, the partition must be
specified to avoid receiving an error. The PARTITION Extended Syntax is optional
for update and delete statements, but omitting this will force all partitions to
be scanned as there is no way to perform automatic partition pruning when the
database has no control over row placement. When the PARTITION clause is used, you
must be sure to perform the operation against the correct partition.
In order for a local index to be unique, the partitioning key of the table must be
part of the index's key columns and this is NOT possible for system partitioned
tables, so Unique local indexes cannot be created on a system-partitioned table.
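A minimal sketch with a hypothetical system-partitioned table; note that the INSERT must name a partition, while DML such as DELETE may omit it, at the cost of scanning every partition:

CREATE TABLE syspart_tab (
  c1 NUMBER,
  c2 VARCHAR2(30)
)
PARTITION BY SYSTEM
(
  PARTITION p1,
  PARTITION p2
);

INSERT INTO syspart_tab PARTITION (p1) VALUES (1, 'first');   -- partition clause is mandatory for inserts
DELETE FROM syspart_tab PARTITION (p1) WHERE c1 = 1;          -- partition clause is optional here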
Q68.
Oracle follows conservative plan selection strategy i.e even if a new plan looks
like it might perform better, it prefers to use the existing and tested plan. Only
when the newer plan is proved to perform well will it be accepted for use. And
this is how SQL Plan Management Works.
A SQL Plan Baseline is defined as a set of one or more "accepted" plans.

Oracle also maintains a SQL Plan History which is nothing but a list of all
execution plans generated for a statement, including those that have and have not
been moved into the SQL Plan Baseline.
Acceptable execution plans are moved from the SQL Plan History into the SQL
Plan Baseline, which is referred to as evolving the plan.
The SQL Plan base line(accepted plans) and SQL Plan History are maintained in
the SQL Management Base (SMB) as a Super set of all the plans, which is kept in
tables in the SYSAUX tablespace. Any related SQL profiles for the statements are
also kept in SMB.
Before a plan in the SQL Plan Baseline can be used or selected by the oracle
optimizer, the SQL Plan Baseline must be initialized with at least one accepted
plan for the repeatable statements being run.
The two activities that would populate the SQL Plan Baselines are (1) capturing
and (2) evolving.
Capturing is the initial load of plans into the baseline whereas evolving is
the evaluation of new plans in the SQL History to ensure they will not cause the
statement to perform worse and then moving them to the SQL Baseline.
Oracle actually maintains a log of the SQL ID(s) for statements that are being
executed against the database.
# If a statement is parsed or executed after it was initially logged, it is
considered a repeatable statement and oracle maintains the SQL History for each of
them.
There are two ways of capturing: automatic capturing, by setting the
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter to TRUE, and manual capturing, by
using the DBMS_SPM package.
If the optimizer generates a new plan for a repeated SQL statement, it will be
automatically added to the SQL Plan History but it is not automatically added to
the SQL Plan Baseline.
For a new plan to be added to the SQL Plan Baseline, it must be "evolved" or
verified first.
There are several methods for moving(evolving) a plan from the SQL Plan
History (set of all plans) into the SQL Plan Baseline(set of accepted plans)
like - Automatic SQL Tuning job
- Manually running the SQL Tuning Advisor may result in plans being added to the
SQL Plan Baseline(If the SQL Tuning Advisor generates a recommendation to create
and use a SQL Profile,then if that profile is accepted, the corresponding plan is
automatically added to the baseline).
- Using ALTER_SQL_PLAN_BASELINE function of DBMS_SPM to change the status of plans
in the SQL History to Accepted, which in turn moves them into the SQL Baseline.
Once SQL plan baselines are in place, the following strategy decides whether an execution plan from the
SQL Plan Baseline is used for a repeatable statement run in the database:
Check if OPTIMIZER_USE_SQL_PLAN_BASELINES = TRUE.
If YES (OPTIMIZER_USE_SQL_PLAN_BASELINES = TRUE):
    Check if the plan is part of the SQL Plan History.
    If YES (it is part of the SQL Plan History):
        Check if the plan is part of the SQL Plan Baseline.
        If YES (it is also part of the SQL Plan Baseline):
            Use the new plan generated.
        If NO (it is part of the SQL Plan History but not of the Baseline):
            Replace it with the best SQL Plan Baseline plan.
    If NO (it is NOT part of the SQL Plan History):
        Add the new plan to the SQL Plan History and pick the best SQL Plan Baseline plan.
If NO (OPTIMIZER_USE_SQL_PLAN_BASELINES = FALSE):
    Use the new plan generated.

69.
Network-Attached Storage (NAS) systems are used very commonly in enterprise

data centers.
NAS appliances and their client systems typically communicate via the Network File
System (NFS) protocol.
Client systems use the operating system's NFS driver to facilitate the communication between
the client and the NFS server. While this approach was somewhat successful, drawbacks such
as performance degradation and complex configuration requirements have limited the
benefits of using NFS and NAS for database storage.
Oracle Database 11g Direct NFS Client integrates the NFS client functionality directly
in the Oracle software. With Oracle Database 11g, we can configure Oracle Database to access
NFS V3 NAS devices directly using Oracle Direct NFS Client, rather than using the operating
system kernel NFS client. Oracle Database will access files stored on the NFS server
directly through the integrated Direct NFS Client eliminating the overhead imposed by the
operating system kernel NFS.
Direct NFS Client includes two fundamental I/O optimizations to increase throughput and
overall performance.
(1) Direct NFS Client performs concurrent direct I/O, which bypasses any operating system
level caches and eliminates any operating system write-ordering locks. In turn, this
decreases memory consumption by eliminating scenarios where Oracle data is cached both in
the SGA and in the operating system cache and eliminates the kernel mode CPU cost of copying
data from the operating system cache into the SGA.
(2) Direct NFS Client performs asynchronous I/O, which allows processing to continue while
the I/O request is submitted and processed.
To use Direct NFS Client, the NFS file systems must first be mounted and available
over regular NFS mounts.
Direct NFS Client can use a new configuration file,called oranfstab or the mount tab file
(/etc/mtab on Linux) to determine the mount point settings for NFS storage devices.
First, Oracle looks for the mount settings in $ORACLE_HOME/dbs/oranfstab, which specifies
the Direct NFS Client settings for a single database.
Next, Oracle looks for settings in /etc/oranfstab, which specifies the NFS mounts available
to all Oracle databases on that host.
Finally, Oracle reads the mount tab file (/etc/mtab on Linux) to identify available NFS
mounts. Note that Direct NFS Client will use the first entry found if duplicate entries
exist in the configuration files.

70.

In Oracle 11g, Oracle introduced an extended feature of the ASMCMD utility to provide ASM
metadata backup and restore functionality via the md_backup and md_restore commands; this
functionality is called ASM Metadata Backup and Restore (AMBR).
You need to use md_backup for taking ASM metadata backup for a disk group and md_restore to
restore the ASM metadata for a disk group. This way, AMBR functionality is used to recreate an ASM disk group with an identical template and directory structure using the backup
of ASM metadata.
When you take the backup by md_backup, information about ASM disks and disk groups,
configurations, attributes, etc.. are stored in a text file and will be used later to
restore the ASM diskgroup metadata definition. The information gathered during the ASM backup
includes the ASM diskgroup name, redundancy type, allocation unit size, disk path, alias
directories, stripe, full path of alias entries, etc.
When you restore the backup with md_restore, it can re-create the disk group based on the
backup file, with all user-defined templates and the exact configuration of the backed-up
disk group.
Several options are available when restoring the disk group:
full  - re-creates the disk group with the exact configuration
nodg  - restores metadata in an existing disk group provided as an input parameter
newdg - changes the configuration, such as failure group, disk group name, etc.
Hence, for our case, we will choose all 3 options mentioned in the Question.
Note:
Example
To restore the disk group asmdsk1 from the backup script and create a copy:
ASMCMD> md_restore -t full -g asmdsk1 -i backup_file
The following example takes an existing disk group asmdsk1 and restores its metadata:
ASMCMD> md_restore -t nodg -g asmdsk1 -i backup_file
The following example restores disk group asmdsk1 completely, but the new disk group that is
created is called asmdsk2:
ASMCMD> md_restore -t newdg -o 'asmdsk1:asmdsk2' -i backup_file
The following example restores from the backup file after applying the overrides defined in
the file override.txt:
ASMCMD> md_restore -t newdg -of override.txt -i backup_file

Q71.

Here, ALLOCATED_SPACE means Total allocated space, in bytes, including space that
is currently allocated and used and space that is currently allocated and
available for reuse.
And FREE_SPACE means Total free space available, in bytes, including space that
is currently allocated and available for reuse and space that is currently
unallocated.
Q72.
RESULT_CACHE_MODE: if it is set to MANUAL, only the queries associated with the RESULT_CACHE
hint will be cached; if it is set to FORCE, then all queries are cached, provided they qualify and fit in
the cache.
The result cache is part of the shared pool and can be at most 75% of the shared pool size.
So, for our case, option C is correct.
Q73.
The IMPORT CATALOG command is used to import the metadata from one recovery
catalog schema into a different catalog schema. If you created catalog schemas of
different versions to store metadata for multiple target databases, then this
command enables you to maintain a single catalog schema for all databases.
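For example (the catalog schemas and connect strings are hypothetical), importing an older 10g catalog into an 11g catalog; by default the imported databases are unregistered from the source catalog, and NO UNREGISTER can be added to keep them:

RMAN> CONNECT CATALOG rman11/password@catdb
RMAN> IMPORT CATALOG rman10/password@catdb10;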

Q74.
If the optimizer generates a new plan for a repeated SQL statement, it will be
automatically added to the SQL Plan History but it is not automatically added to
the SQL Plan Baseline.
For a new plan to be added to the SQL Plan Baseline, it must be "evolved" or
verified first.
There are several methods for moving (evolving) a plan from the SQL Plan
History (set of all plans) into the SQL Plan Baseline (set of accepted plans)
like - Automatic SQL Tuning job
- Manually running the SQL Tuning Advisor may result in plans being added to the
SQL Plan Baseline(If the SQL Tuning Advisor generates a recommendation to create
and use a SQL Profile, then if that profile is accepted, the corresponding plan is
automatically added to the baseline).
- Using ALTER_SQL_PLAN_BASELINE function of DBMS_SPM to change the status of plans
in the SQL History to Accepted, which in turn moves them into the SQL Baseline.
For our case, A, B and E are the correct options.
Q75.
Explanation is similar to Q.74, i.e. the tuned plan is added to the fixed SQL plan
baseline as a non-fixed plan => Option C is the correct answer.
Q76.
You can only run a single calibration task at a time. Using the CALIBRATE_IO procedure of the
DBMS_RESOURCE_MANAGER package, introduced in Oracle 11g, to run an I/O calibration task, you
can check the values of different metrics such as max_iops (maximum I/O per second, i.e. the maximum
number of random DB-block-sized read requests that can be serviced), max_mbps (maximum megabytes
per second, i.e. the maximum number of randomly distributed 1 MB reads that can be serviced, in
megabytes per second), and latency (the average latency of DB-block-sized I/O requests at the max_iops
rate, in milliseconds).
Latency refers to the lag between the time an I/O request is made and when the request is serviced by
the associated storage system. High latency indicates an overloaded system. Latency is computed only
if the TIMED_STATISTICS initialization parameter is set to TRUE.
Hence, for our case, options A,D & E are correct.
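A sketch of running the calibration and reading the metrics back (the disk count and latency limit are assumptions about the storage, not values from the question):

SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.calibrate_io(
    num_physical_disks => 10,   -- assumed number of disks backing the database
    max_latency        => 20,   -- maximum tolerated latency in milliseconds
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.put_line('max_iops = ' || l_max_iops);
  DBMS_OUTPUT.put_line('max_mbps = ' || l_max_mbps);
  DBMS_OUTPUT.put_line('latency  = ' || l_latency);
END;
/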
