
1. which two statements are true about identifying unused indexes? (choose two.)
a. performance is improved by eliminating unnecessary overhead during dml operations.
b. v$index_stats displays statistics that are gathered when using the monitoring usage keyword.
c. each time the monitoring usage clause is specified, the v$object_usage view is reset for the
specified index.
d. each time the monitoring usage clause is specified, a new monitoring start time is recorded in
the alert log.
answer: ac
explanation:
monitoring index usage
oracle provides a means of monitoring indexes to determine if they are being used or not used. if
it is determined that an index is not being used, then it can be dropped, thus eliminating
unnecessary statement overhead.
to start monitoring an index’s usage, issue this statement:
alter index index monitoring usage;
later, issue the following statement to stop the monitoring:
alter index index nomonitoring usage;
the view v$object_usage can be queried for the index being monitored to see if the index has
been used. the view contains a used column whose value is yes or no, depending upon if the
index has been used within the time period being monitored. the view also contains the start and
stop times of the monitoring period, and a monitoring column (yes/no) to indicate if usage
monitoring is currently active.
each time that you specify monitoring usage, the v$object_usage view is reset for the specified
index. the previous usage information is cleared or reset, and a new start time is recorded. when
you specify nomonitoring usage, no further monitoring is performed, and the end time is recorded
for the monitoring period. until the next alter index ... monitoring usage statement is issued, the
view information is left unchanged.
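as a sketch, assuming a hypothetical index named emp_name_idx owned by the current user, the whole monitoring cycle looks like this:

alter index emp_name_idx monitoring usage;
-- ... let the application run for a representative period ...
select index_name, monitoring, used, start_monitoring, end_monitoring
from v$object_usage
where index_name = 'EMP_NAME_IDX';
alter index emp_name_idx nomonitoring usage;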

2. you need to create an index on the sales table, which is 10 gb in size. you want your index to
be spread across many tablespaces, decreasing contention for index lookup, and increasing
scalability and manageability.
which type of index would be best for this table?
a. bitmap
b. unique
c. partitioned
d. reverse key
e. single column
f. function-based
answer: c
explanation:
i suggest that you read chapters 10 & 11 in oracle9i database concepts release 2 (9.2) march
2002 part no. a96524-01 (a96524.pdf)

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01 (a96524.pdf) ch 10
bitmap indexes
the purpose of an index is to provide pointers to the rows in a table that contain a given key value.
in a regular index, this is achieved by storing a list of rowids for each key corresponding to the
rows with that key value. oracle stores each key value repeatedly with each stored rowid. in a bitmap index, a bitmap for each key value is used instead of a list of rowids.
each bit in the bitmap corresponds to a possible rowid. if the bit is set, then it means that the row
with the corresponding rowid contains the key value. a mapping function converts the bit position
to an actual rowid, so the bitmap index provides the same functionality as a regular index even
though it uses a different
representation internally. if the number of different key values is small, then bitmap indexes are
very space efficient.
bitmap indexing efficiently merges indexes that correspond to several conditions in a where
clause. rows that satisfy some, but not all, conditions are filtered out before the table itself is
accessed. this improves response time, often dramatically.
note: bitmap indexes are available only if you have purchased the oracle9i enterprise edition.
see oracle9i database new features for more information about the features available in oracle9i
and the oracle9i enterprise edition.

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01 (a96524.pdf) ch 11
partitioned indexes
just like partitioned tables, partitioned indexes improve manageability, availability, performance,
and scalability. they can either be partitioned independently (global indexes) or automatically
linked to a table's partitioning method (local indexes).
local partitioned indexes
local partitioned indexes are easier to manage than other types of partitioned indexes. they also
offer greater availability and are common in dss environments. the reason for this is
equipartitioning: each partition of a local index is associated with exactly one partition of the table.
this enables oracle to automatically keep the index partitions in sync with the table partitions, and
makes each table-index pair independent. any actions that make one partition's data invalid or
unavailable only affect a single partition.
you cannot explicitly add a partition to a local index. instead, new partitions are added to local
indexes only when you add a partition to the underlying table. likewise, you cannot explicitly drop
a partition from a local index. instead, local index partitions are dropped only when you drop a
partition from the underlying table.
a local index can be unique. however, in order for a local index to be unique, the partitioning key
of the table must be part of the index’s key columns. unique local indexes are useful for oltp
environments.
see also: oracle9i data warehousing guide for more information about partitioned indexes
global partitioned indexes
global partitioned indexes are flexible in that the degree of partitioning and the partitioning key are
independent from the table's partitioning method. they are commonly used for oltp environments
and offer efficient access to any individual record.
the highest partition of a global index must have a partition bound, all of whose values are
maxvalue. this ensures that all rows in the underlying table can be represented in the index.
global prefixed indexes can be unique or nonunique. you cannot add a partition to a global index
because the highest partition always has a partition bound of maxvalue. if you wish to add a new
highest partition, use the alter index split partition statement. if a global index partition is empty,
you can explicitly drop it by issuing the alter index drop partition statement. if a global index
partition contains data, dropping the partition causes the next highest partition to be marked
unusable. you cannot drop the highest partition in a global index.
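as a rough sketch (the table, column, and partition names are made up), a local and a global partitioned index could be created like this:

create table sales_demo (
  sale_id   number,
  sale_date date,
  amount    number
)
partition by range (sale_date) (
  partition p_2002q1 values less than (to_date('2002-04-01','yyyy-mm-dd')),
  partition p_2002q2 values less than (to_date('2002-07-01','yyyy-mm-dd')),
  partition p_rest   values less than (maxvalue)
);

-- local index: one index partition per table partition, kept in sync automatically
create index sales_demo_date_ix on sales_demo (sale_date) local;

-- global index: partitioning is independent of the table; the highest bound must be maxvalue
create index sales_demo_amt_ix on sales_demo (amount)
global partition by range (amount) (
  partition amt_low  values less than (1000),
  partition amt_high values less than (maxvalue)
);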

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01 (a96524.pdf) ch 10
unique and nonunique indexes
indexes can be unique or nonunique. unique indexes guarantee that no two rows of a table have
duplicate values in the key column (or columns). nonunique indexes do not impose this restriction
on the column values. oracle recommends that unique indexes be created explicitly, and not
through
enabling a unique constraint on a table.
alternatively, you can define unique integrity constraints on the desired columns.
oracle enforces unique integrity constraints by automatically defining a unique index on the
unique key. however, it is advisable that any index that exists for query performance, including
unique indexes, be created explicitly.

reverse key indexes


creating a reverse key index, compared to a standard index, reverses the bytes of each column
indexed (except the rowid) while keeping the column order. such an arrangement can help avoid
performance degradation with oracle9i real application clusters where modifications to the index
are concentrated on a small set of leaf blocks. by reversing the keys of the index, the insertions
become distributed across all leaf keys in the index.
using the reverse key arrangement eliminates the ability to run an index range scanning query on
the index. because lexically adjacent keys are not stored next to each other in a reverse-key
index, only fetch-by-key or full-index (table) scans can be performed.
sometimes, using a reverse-key index can make an oltp oracle9i real application clusters
application faster. for example, keeping the index of mail messages in an e-mail application:
some users keep old messages, and the index must maintain pointers to these as well as to the
most recent.
the reverse keyword provides a simple mechanism for creating a reverse key index. you can
specify the keyword reverse along with the optional index specifications in a create index
statement:
create index i on t (a,b,c) reverse;
you can specify the keyword noreverse to rebuild a reverse-key index into one
that is not reverse keyed:
alter index i rebuild noreverse;
rebuilding a reverse-key index without the noreverse keyword produces a rebuilt, reverse-key
index.
function-based indexes
you can create indexes on functions and expressions that involve one or more columns in the
table being indexed. a function-based index computes the value of the function or expression and
stores it in the index. you can create a function-based index as either a b-tree or a bitmap index.

function-based indexes provide an efficient mechanism for evaluating statements that contain
functions in their where clauses. the value of the expression is computed and stored in the index. when it processes insert and update statements, however, oracle must still evaluate the
function to process the statement.
for example, if you create the following index:
create index idx on table_1 (a + b * (c - 1), a, b);
then oracle can use it when processing queries such as this:
select a from table_1 where a + b * (c - 1) < 100;

3. the database needs to be shut down for hardware maintenance. all users sessions except one
have either voluntarily logged off or have been forcibly killed. the one remaining user session is
running a business critical data manipulation language (dml) statement and it must complete prior
to shutting down the database.
which shutdown statement prevents new user connections, logs off the remaining user, and shuts
down the database after the dml statement completes?
a. shutdown
b. shutdown abort
c. shutdown normal
d. shutdown immediate
e. shutdown transactional
answer: e
explanation:

from a newsgroup
there are four ways to shut down a database :
· shutdown
· shutdown immediate
· shutdown transactional
· shutdown abort
shutdown waits for everyone to finish & log out before it shuts down. the database is cleanly
shutdown.
shutdown immediate rolls back all uncommitted transactions before it shuts down. the database is
cleanly shutdown.
shutdown transactional waits for all current transactions to commit or rollback before it shuts
down. the database is cleanly shutdown.
shutdown abort quickly shuts down - the next restart will require instance recovery. the database
is technically crashed.
the key reason an immediate shutdown is not truly immediate is the need to roll back all current transactions. if a user has just started a transaction such as update emp set sal = sal * 2 where emp_id = 1000; then it will be rolled back almost instantaneously.
however, if another user has been running a huge update for the last four hours, and has not yet
committed, then four hours of updates have to be rolled back and this takes time.
so, if you really want to shutdown right now, then the advised route is :
· shutdown abort
· startup restrict
· shutdown
when you shutdown abort, oracle kills everything immediately. startup restrict will allow only dba
users to get in but, more importantly, will carry out instance recovery and recover back to a
consistent state using the current on-line redo logs. the final shutdown will perform a clean
shutdown. any cold backups taken now will be of a consistent database.
there has been much discussion on this very subject on the oracle server newsgroups. some
people are happy to backup the database after a shutdown abort, others are not. i prefer to use
the above method prior to taking a cold backup - if i have been unable to shutdown or shutdown
immediate that is.

4. what provides for recovery of data that has not been written to the data files prior to a failure?
a. redo log
b. undo segment
c. rollback segment
d. system tablespace
answer: a
explanation:
oracle 7 documentation, oracle 7 server concepts, 22-5
the redo log
the redo log, present for every oracle database, records all changes made in an oracle database.
the redo log of a database consists of at least two redo log files that are separate from the
datafiles (which actually store a database’s data). as part of database recovery from an instance
or media failure, oracle applies the appropriate changes in the
database’s redo log to the datafiles, which updates database data to the instant that the failure
occurred. a database’s redo log can be comprised of two parts: the online redo
log and the archived redo log, discussed in the following sections.
the online redo log every oracle database has an associated online redo log. the online redo log
works with the oracle background process lgwr to immediately record all changes made through
the associated instance. the online redo log consists of two or more pre-allocated files that are
reused in a circular fashion to record ongoing database changes; see “the online redo log” on
page 22-6 for more information.
the archived (offline) redo log optionally, you can configure an oracle database to archive files of
the online redo log once they fill. the online redo log files that are archived are uniquely identified
and make up the archived redo log. by archiving filled online redo log files, older redo log
information is preserved for more extensive database recovery operations, while the pre-allocated
online redo log files continue to be reused to store the most current database changes; see “the
archived redo log” page 22-16 for more information.

oracle9i database administrator’s guide release 2 (9.2) march 2002 part no. a96521-01
(a96521.pdf) 13-2
undo and rollback segments
every oracle database must have a method of maintaining information that is used to roll back, or
undo, changes to the database. such information consists of records of the actions of
transactions, primarily before they are committed. oracle refers to these records collectively as
undo.
undo records are used to:
n roll back transactions when a rollback statement is issued
n recover the database
n provide read consistency
when a rollback statement is issued, undo records are used to undo changes that were made to
the database by the uncommitted transaction. during database recovery, undo records are used
to undo any uncommitted changes applied from the redo log to the datafiles. undo records
provide read consistency by maintaining
the before image of the data for users who are accessing the data at the same time that another
user is changing it.
historically, oracle has used rollback segments to store undo. space management for these
rollback segments has proven to be quite complex. oracle now offers another method of storing
undo that eliminates the complexities of managing rollback segment space, and enables dbas to
exert control over how long undo is retained before being overwritten. this method uses an undo
tablespace.

5. you intend to use only password authentication and have used the password file utility to
create a password file as follows:
$orapwd file=$oracle_home/dbs/orapwdb01
password=orapass entries=5
the remote_login_passwordfile initialization parameter is set to none.
you created a user and granted only the sysdba privilege to that user as follows:
create user dba_user
identified by dba_pass;
grant sysdba to dba_user;
the user attempts to connect to the database as follows:
connect dba_user/dba_pass as sysdba;
why does the connection fail?
a. the dba privilege was not granted to dba_user.
b. remote_login_passwordfile is not set to exclusive.
c. the password file has been created in the wrong directory.
d. the user did not specify the password orapass to connect as sysdba.
answer: b

oracle 7 documentation, the oracle7 database administrator, 1 - 11


remote_login_passwordfile
in addition to creating the password file, you must also set the initialization parameter
remote_login_passwordfile to the appropriate value. the values recognized are described below.
note: to startup an instance or database, you must use server manager. you must specify a
database name and a parameter file to initialize the instance settings. you may specify a fully-
qualified remote database name using sql*net. however, the initialization parameter file and any
associated files, such as a configuration file, must exist on the client machine. that is, the
parameter file must be on the machine where you are running server manager.
none - setting this parameter to none causes oracle7 to behave as if the password file does not exist. that is, no privileged connections are allowed over non-secure connections. none is the default value for this parameter.
exclusive - an exclusive password file can be used with only one database. only an exclusive file can contain the names of users other than sys and internal.
using an exclusive password file allows you to grant sysdba and sysoper system privileges to
individual users and have them connect as themselves.
a shared password file can be used by multiple databases. however, the only users recognized
by a shared password file are sys and internal; you cannot add users to a shared password file.
all users needing sysdba or sysoper system privileges must connect using the same name, sys,
and password. this option is useful if you have a single dba administering multiple databases.
suggestion: to achieve the greatest level of security, you should set the
remote_login_passwordfile file initialization parameter to exclusive immediately after creating the
password file.
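a minimal sketch of the setup that would let the connection succeed (names reused from the question; remote_login_passwordfile is a static parameter, so changing it typically requires an instance restart):

-- in the initialization parameter file:
remote_login_passwordfile = exclusive

-- after the restart, connected as sys:
grant sysdba to dba_user;   -- this records dba_user in the password file

-- the privileged connection now works:
connect dba_user/dba_pass as sysdba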

6. which data dictionary view(s) do you need to query to find the following information about a
user?
• whether the user's account has expired
• the user's default tablespace name
• the user's profile name
a. dba_users only
b. dba_users and dba_profiles
c. dba_users and dba_tablespaces
d. dba_users, dba_ts_quotas, and dba_profiles
e. dba_users, dba_tablespaces, and dba_profiles
answer: a
explanation:
sql> desc dba_users
name null? type
----------------------------------------- -------- ----------------
username not null varchar2(30)
user_id not null number
password varchar2(30)
account_status not null varchar2(32)
lock_date date
expiry_date date
default_tablespace not null varchar2(30)
temporary_tablespace not null varchar2(30)
created not null date
profile not null varchar2(30)
initial_rsrc_consumer_group varchar2(30)
external_name varchar2(4000)
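so one query against dba_users covers all three items, for example (the user name is hypothetical):

select username, account_status, expiry_date, default_tablespace, profile
from dba_users
where username = 'SCOTT';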

7. you omit the undo tablespace clause in your create database statement. the
undo_management parameter is set to auto. what is the result of your create database
statement?
a. the oracle server creates no undo tablespaces.
b. the oracle server creates an undo segment in the system tablespace.
c. the oracle server creates one undo tablespace with the name sys_undotbs.
d. database creation fails because you did not specify an undo tablespace on the create
database statement.
answer: c
explanation:

<http://www.oracle-
base.com/articles/9i/automaticundomanagement.asp#enablingautomaticundomanagement>
using automatic undo management: creating an undo tablespace
oracle recommends that instead of using rollback segments in your database, you use an undo
tablespace. this requires the use of a different set of initialization parameters, and optionally, the
inclusion of the undo tablespace clause in your create database statement.
you must include the following initialization parameter if you want to operate your database in
automatic undo management mode:
undo_management=auto
in this mode, rollback information, referred to as undo, is stored in an undo tablespace rather than
rollback segments and is managed by oracle. if you want to create and name a specific
tablespace for the undo tablespace, you can include the undo tablespace clause at database
creation time. if you omit this clause, and automatic undo management is specified, oracle
creates a default undo tablespace named sys_undotbs.
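as a sketch (database name, file paths, and sizes are placeholders), naming the undo tablespace explicitly at creation time looks like this; leaving the undo tablespace clause out while undo_management=auto gives you sys_undotbs instead:

-- in the initialization parameter file:
undo_management = auto

create database proddb
  datafile '/u01/oradata/proddb/system01.dbf' size 250m
  undo tablespace undotbs1 datafile '/u01/oradata/proddb/undotbs01.dbf' size 100m;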

8. which password management feature ensures a user cannot reuse a password for a specified
time interval?
a. account locking
b. password history
c. password verification
d. password expiration and aging
answer: b
explanation:
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01 (a96524.pdf) 22-8
account locking
oracle can lock a user’s account if the user fails to login to the system within a specified number
of attempts. depending on how the account is configured, it can be unlocked automatically after a
specified time interval or it must be unlocked by the database administrator.
password complexity verification
complexity verification checks that each password is complex enough to provide reasonable
protection against intruders who try to break into the system by guessing passwords.
password history
the password history option checks each newly specified password to ensure that a password is
not reused for the specified amount of time or for the specified number of password changes. the
database administrator can configure the rules for password reuse with create profile statements.
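for example, a profile along these lines (the name and limit are made up) keeps a password from being reused for 60 days:

create profile pwd_history_profile limit
  password_reuse_time 60;   -- days that must pass before a password can be reused

alter user scott profile pwd_history_profile;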

9. which view provides the names of all the data dictionary views?
a. dba_names
b. dba_tables
c. dictionary
d. dba_dictionary
answer: c
explanation:
<http://docs.rinet.ru:8080/o8/ch02/ch02.htm>

all the data dictionary tables and views are owned by sys. you can query the dictionary table to
obtain the list of all dictionary views.
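for instance:

select table_name, comments
from dictionary
where table_name like 'DBA_%';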

10. you are going to create a new database. you will not use operating system authentication.
which two files do you need to create before creating the database? (choose two.)
a. control file
b. password file
c. redo log file
d. alert log file
e. initialization parameter file
answer: ae
explanation:
answer a does not hold up: creating the control files is the job of the create database procedure itself. further proof:
oracle9i sql reference release 2 (9.2) march 2002 part no. a96540-01 (a96540.pdf) 13-26
controlfile reuse clause
specify controlfile reuse to reuse existing control files identified by the initialization parameter
control_files, thus ignoring and overwriting any information they currently contain. normally you
use this clause only when you are re-creating a database, rather than creating one for the first
time. you cannot use this clause if you also specify a parameter value that requires that the
control file be larger than the existing files. these parameters are maxlogfiles, maxlogmembers,
maxloghistory, maxdatafiles, and maxinstances. if you omit this clause and any of the files
specified by control_files already exist, oracle returns an error.
password file
since os authentication is not used, the only remaining choice is password file authentication, and for that a password file must already exist.

redo log file
is used for the transactions within a database, not for database creation.
alert log file
see q11 (“trace files, on the other hand, are generated by the oracle background processes or
other connected net8 processes when oracle internal errors occur and they dump all information
about the error into the trace files.”)
initialization parameter file
we need one: it holds the instance settings (db_name, control_files, and so on) that must exist before the create database statement can be issued.

so the answer is b,e

remark: the logic is that any file oracle must read during database creation has to exist beforehand, whereas files oracle writes to it can create itself.

11. which initialization parameter determines the location of the alert log file?
a. user_dump_dest
b. db_create_file_dest
c. background_dump_dest
d. db_create_online_log_dest_n
answer: c

<http://www.experts-exchange.com/databases/oracle/q_20308350.html>

there is one alert log per db instance and normally named as alert_<sid>.log. trace files, on the
other hand, are generated by the oracle background processes or other connected net8
processes when oracle internal errors occur and they dump all information about the error into the
trace files. you can also set the level of tracing for net8 connections as per your requirement.
the alert log is a special trace file. the alert log of a database is a chronological log of messages
and errors, which includes the following:
- all internal errors (ora-600), block corruption errors (ora-1578), and deadlock errors (ora-60) that
occur
- administrative operations, such as create/alter/drop database/tablespace/rollback segment sql
statements and startup, shutdown, and archive log
- several messages and errors relating to the functions of shared server and dispatcher
processes
- errors occurring during the automatic refresh of a snapshot
- the values of all initialization parameters at the time the database and instance start

location:
---------

all trace files for background processes and the alert log are written to the destination specified by
the initialization parameter background_dump_dest. all trace files for server processes are written
to the destination specified by the initialization parameter user_dump_dest. the names of trace
files are operating system specific, but usually include the name of the process writing the file
(such as lgwr and reco).

info about all the oracle parameters:


<http://www.orafaq.net/parms/>
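a quick way to check the location on a running instance, from sql*plus:

show parameter background_dump_dest

-- or equivalently:
select value from v$parameter where name = 'background_dump_dest';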

12. temporary tablespaces should be locally managed and the uniform size should be a multiple
of the ________.
a. db_block_size
b. db_cache_size
c. sort_area_size
d. operating system block size
answer: c
explanation:
<http://www.interealm.com/technotes/roby/temp_ts.html>
temporary tablespace considerations
by roby sherman <http://www.interealm.com/roby/>
today, depending on your rdbms version, oracle offers three varieties of temporary tablespaces to
choose from. these spaces are used for disk based sorts, large index rebuilds, global temporary
tables, etc. to ensure that your disk-based sorting is optimal, it is critical to understand the
different types, caveats, and benefits of these temporary tablespace options:
· permanent tablespaces with temporary segments
· tablespaces of type “temporary”
· temporary tablespaces
permanent tablespaces with temporary segments
this option has been available since oracle 7.3 and is the least efficient for disk-based sorting. in
this type of configuration, temporary (sort) extents are allocated within a permanent tablespace.
compared to other temp tablespace choices, the performance and operation of this disk-sort
option suffers in the areas of:
· extent management - the st-enqueue (and subsequent recursive dictionary sql) is used
for the allocation and de-allocation of extents allotted to each sort segment.
· sort segment reuse - each process performing a disk sort creates then drops a private
sort segment. this adds additional overhead to the sorting process.
· extent reuse - because of the “private sort segment” policy used in this tablespace option,
there is no ability for disk-based sorts to re-use extents that are no longer active.
tablespaces of type “temporary”
this disk-sorting option was introduced in oracle 8.0 as a way to provide a more dedicated facility
for disk-based sorting while reducing some amount of resources and i/o associated with extent
management. it is created by invoking the create tablespace xyz… temporary; sql clause.

sorts assigned to a tablespaces of type temporary use a single sort segment (multiple segments
in an ops environment) that is only dropped at instance start up and is created during the first
disk-based sort.
sorts using this type of tablespace have the ability to reuse extents that are no longer active. this
added level of reuse reduces the amount of resources necessary to manage individual segments
and allocate / deallocate extents.

although extent allocation and de-allocation is reduced in this type of tablespace, the st-enqueue
(and subsequent dictionary-generated recursive sql) is still required in these activities when they
occur. since this type of tablespace cannot be configured with local extent management, there is no easy way to bypass this performance degradation.
temporary tablespaces
this new class of temporary tablespace was introduced in oracle 8i and provides the most robust
and efficient means of disk-based sorting in oracle today. temporary tablespaces are created
using the sql syntax create temporary tablespace xyz tempfile ….
there are a number of performance benefits of this tablespace option over permanent and
tablespaces of type temporary in the areas of:

· extent management - extents in this tablespace are allocated via a locally-managed bitmap. therefore use of the st-enqueue and recursive sql for this activity is eliminated.
· segment reuse - sorts assigned to a tablespaces of type temporary use a single sort
segment (multiple segments in an ops environment) that is only dropped at instance start up and
is created during the first disk-based sort.
· extent reuse - sorts using this type of tablespace have the ability to reuse extents that are
no longer active. this added level of reuse reduces the amount of resources necessary to manage
individual segments and allocate / deallocate extents.
note>> if the extent management clause is not specified for temporary tablespaces, the database
will automatically set the tablespace with a uniform extent size of 1 mb.
which do i choose?
whether you are on an existing database application migrating to a newer oracle version or a new
application in the initial development phase, for optimal performance you should use the most
recent temporary tablespace option available to your database version:
· for oracle versions 7.3.4 and below, use permanent tablespaces with temporary
segments
· for oracle versions 8.0.3 - 8.0.6 use tablespaces of type “temporary”
· for oracle versions 8.1.5 - 9.x use temporary tablespaces
selecting the right extent size
regardless of the type of temporary tablespace you use, you should ensure that the extent sizes
selected for the space do not impede system performance. in dictionary-managed temporary
tablespaces, the initial and next extent sizes should be a multiple of sort_area_size and
hash_area_size. pctincrease should be set to 0. in locally-managed temp tablespaces, the
uniform extent size should be a multiple of sort_area_size and hash_area_size.
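as a sketch (file name and sizes are placeholders), with sort_area_size = 524288 (512k) a uniform extent size of 1m is a clean multiple:

create temporary tablespace temp_demo
  tempfile '/u02/oradata/db01/temp_demo01.dbf' size 500m
  extent management local uniform size 1m;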

13. you can use the database configuration assistant to create a template using an existing
database structure.
which three will be included in this template? (choose three.)
a. data files
b. tablespaces
c. user defined schemas
d. user defined schema data
e. initialization parameters
answer: acd
explanation:
oracle9i database administrator’s guide release 2 (9.2) march 2002 part no. a96521-01
(a96521.pdf) 2-10
managing dbca templates
dbca templates are xml files that contain information required to create a database. templates are
used in dbca to create new databases and make clones of existing databases. the information in
templates includes database options, initialization parameters, and storage attributes (for
datafiles, tablespaces, control files and redo logs).
templates can be used just like scripts, and they can be used in silent mode. but they are more
powerful than scripts, because you have the option of cloning a database. this saves time in
database creation, because copying an already created seed database’s files to the correct
locations takes less time than creating them as new.
templates are stored in the following directory:
$oracle_home/assistants/dbca/templates
types of templates
there are two types of templates:
n seed templates
n non-seed templates
the characteristics of each are shown in the following table:

type: seed
file extension: .dbc
includes datafiles: yes
database structure: this type of template contains both the structure and the physical datafiles of an existing (seed) database. when you select a seed template, database creation is faster because the physical files and schema of the database have already been created. your database starts as a copy of the seed database, rather than having to be built. you can change only the following:
· name of the database
· destination of the datafiles
· number of control files
· number of redo log groups
· initialization parameters
other changes can be made after database creation using custom scripts that can be invoked by dbca, command line sql statements, or the oracle enterprise manager. the datafiles and redo logs for the seed database are stored in zipped format in another file with a .dfj extension. usually the corresponding .dfj file of a .dbc file has the same file name, but this is not a requirement since the corresponding .dfj file’s location is stored in the .dbc file.

type: non-seed
file extension: .dbt
includes datafiles: no
database structure: this type of template is used to create a new database from scratch. it contains the characteristics of the database to be created. non-seed templates are more flexible than their seed counterparts because all datafiles and redo logs are created to your specification (not copied), and names, sizes, and other attributes can be changed as required.
the question is ambiguous because it does not say whether the template is of seed or non-seed type, which determines whether datafiles (and with them the user schemas and their data) are included. i would still pick the same answer, assuming the seed (structure as well as data) interpretation.

14. john has issued the following sql statement to create a new user account:
create user john
identified by john
temporary tablespace temp_tbs
quota 1m on system
quota unlimited on data_tbs
profile apps_profile
password expire
default role apps_dev_role;
why does the above statement return an error?
a. you cannot assign a role to a user within a create user statement.
b. you cannot explicitly grant quota on the system tablespace to a user.
c. you cannot assign a profile to a user within a create user statement.
d. you cannot specify password expire clause within a create user statement.
e. you cannot grant unlimited quota to a user within a create user statement.
answer: a
explanation:
oracle9i sql reference release 2 (9.2) march 2002 part no. a96540-01 (a96540.pdf) 16-33
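a sketch of how the same account could be created without the error: the role has to be granted first and only then set as the default, in separate statements (the role and profile are assumed to exist already):

create user john
  identified by john
  temporary tablespace temp_tbs
  quota 1m on system
  quota unlimited on data_tbs
  profile apps_profile
  password expire;

grant apps_dev_role to john;
alter user john default role apps_dev_role;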

15. which statement is true regarding enabling constraints?


a. enable novalidate is the default when a constraint is enabled.
b. enabling a constraint novalidate places a lock on the table.
c. enabling a unique constraint to validate does not check for constraint violation if the constraint
is deferrable.
d. a constraint that is currently disabled can be enabled in one of two ways: enable novalidate or
enable validate.
answer: d
i don’t remember the source for this one.
constraint states
table constraints can be enabled and disabled using the create table or alter table statement. in
addition the validate or novalidate keywords can be used to alter the action of the state:
· enable validate is the same as enable. the constraint is checked and is guaranteed to
hold for all rows.
· enable novalidate means the constraint is checked for new or modified rows, but existing
data may violate the constraint.
· disable novalidate is the same as disable. the constraint is not checked so data may
violate the constraint.
· disable validate means the constraint is not checked but disallows any modification of the
constrained columns.
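for illustration (table and constraint names are invented), a disabled constraint can be re-enabled either way:

-- check existing rows as well as new ones:
alter table emp_demo enable validate constraint emp_sal_ck;

-- accept existing rows as they are, check only new or modified rows:
alter table emp_demo enable novalidate constraint emp_sal_ck;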

16. which structure provides for statement-level read consistency?


a. undo segments
b. redo log files
c. data dictionary tables
d. archived redo log files
answer: a
explanation:
oracle7 server concepts 10-6
statement level read consistency
oracle always enforces statement-level read consistency. this guarantees that the data returned
by a single query is consistent with respect to the time that the query began. therefore, a query
never sees dirty data nor any of the changes made by transactions that commit during query
execution. as query execution proceeds, only data committed before the query began is visible to
the query. the query does not see changes committed after statement execution begins. a
consistent result set is provided for every query, guaranteeing data consistency, with no action on
the user’s part. the sql statements select, insert with a query, update, and delete all query data,
either explicitly or implicitly, and all return consistent data. each of these statements uses a query
to determine which data it will affect (select, insert, update, or delete, respectively). a select
statement is an explicit query and may have nested queries or a join operation. an insert
statement can use nested queries. update and delete statements can use where clauses or
subqueries to affect only some rows in a table rather than all rows.
while queries used in insert, update, and delete statements are guaranteed a consistent set of
results, they do not see the changes made by the dml statement itself. in other words, the data
the query in these operations sees reflects the state of the data before the operation began to
make changes.

of the possible answers, only undo segments provide the before images needed for this purpose.

17. you just issued the startup command. which file is checked to determine the state of the
database?
a. the control file
b. the first member of redo log file group 1
c. the data file belonging to the system tablespace
d. the most recently created archived redo log file
answer: a
explanation:
oracle9i database administrator’s guide release 2 (9.2) march 2002 part no. a96521-01
(a96521.pdf) 4-16
quiescing a database
there are times when there is a need to put a database into a state where only dba transactions,
queries, fetches, or pl/sql statements are allowed. this is called a quiesced state, in the sense that
there are no ongoing non-dba transactions, queries, fetches, or pl/sql statements in the system.
this quiesced state allows you or other administrators to perform actions that cannot safely be
done otherwise.
placing a database into a quiesced state
to place a database into a quiesced state, issue the following statement:
alter system quiesce restricted;
viewing the quiesce state of an instance
the v$instance view can be queried to see the current state of an instance. it contains a column
named active_state, whose values are shown in the following table:

active_state description
normal normal unquiesced state
quiescing being quiesced, but there are still active non-dba sessions running
quiesced quiesced, no active non-dba sessions are active or allowed

the control file is the first database file oracle reads when the instance mounts the database during startup, so it is the file checked to determine the state of the database.

18. john has created a procedure named salary_calc. which sql query allows him to view the text
of the procedure?
a. select text from user_source
where name ='salary_calc';
b. select * from user_source
where source_name ='salary_calc';
c. select * from user_objects
where object_name = 'salary_calc';
d. select * from user procedures
where object_name ='salary_calc';
e. select text from user_source
where name='salary_calc'
and owner ='john';
answer: a
explanation:
sql> desc user_source
name null? type
----------------------------------------- -------- -----------------
name varchar2(30)
type varchar2(12)
line number
text varchar2(4000)
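in practice you would usually add the type filter and an order by to get the lines in sequence:

select text
from user_source
where name = 'SALARY_CALC'
and type = 'PROCEDURE'
order by line;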

19. you want to limit the number of transactions that can simultaneously make changes to data in
a block, and increase the frequency with which oracle returns a block back on the free list.
which parameters should you set?
a. initrans and pctused
b. maxtrans and pctfree
c. initrans and pctfree
d. maxtrans and pctused
answer: d
explanation:
<http://perun.si.umich.edu/~radev/654/resources/oracledefs.html>

pctfree
specifies the percentage of space in each of the table's data blocks reserved for future updates to
the table's rows. the value of pctfree must be a positive integer from 1 to 99. a value of 0 allows
the entire block to be filled by inserts of new rows. the default value is 10. this value reserves
10% of each block for updates to existing rows and allows inserts of new rows to fill a maximum
of 90% of each block. pctfree has the same function in the commands that create and alter
clusters, indexes, snapshots, and snapshot logs. the combination of pctfree and pctused
determines whether inserted rows will go into existing data blocks or into new blocks.
pctused
specifies the minimum percentage of used space that oracle maintains for each data block of the
table. a block becomes a candidate for row insertion when its used space falls below pctused.
pctused is specified as a positive integer from 1 to 99 and defaults to 40. pctused has the same
function in the commands that create and alter clusters, snapshots, and snapshot logs. the sum
of pctfree and pctused must be less than 100. you can use pctfree and pctused together to use space within a table more efficiently.
initrans
specifies the initial number of transaction entries allocated within each data block allocated to the
table. this value can range from 1 to 255 and defaults to 1. in general, you should not change
the initrans value from its default. each transaction that updates a block requires a transaction
entry in the block. the size of a transaction entry depends on your operating system. this
parameter ensures that a minimum number of concurrent transactions can update the block and
helps avoid the overhead of dynamically allocating a transaction entry. the initrans parameter
serves the same purpose in clusters, indexes, snapshots, and snapshot logs as in tables. the
minimum and default initrans value for a cluster or index is 2, rather than 1.
maxtrans
specifies the maximum number of concurrent transactions that can update a data block allocated
to the table. this limit does not apply to queries. this value can range from 1 to 255 and the
default is a function of the data block size. you should not change the maxtrans value from its
default. if the number of concurrent transactions updating a block exceeds the initrans value, oracle
dynamically allocates transaction entries in the block until either the maxtrans value is exceeded
or the block has no more free space. the maxtrans parameter serves the same purpose in
clusters, snapshots, and snapshot logs as in tables.
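so for the scenario in the question, a sketch with made-up names and values might be:

create table orders_demo (
  order_id number,
  status   varchar2(10)
)
pctused 60    -- block goes back on the free list sooner than with the default 40
maxtrans 4;   -- at most 4 concurrent transactions may modify a block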

20. you need to drop two columns from a table. which sequence of sql statements should be used
to drop the columns and limit the number of times the rows are updated?
a. alter table employees
drop column comments
drop column email;
b. alter table employees
drop column comments;
alter table employees
drop column email;
c. alter table employees
set unused column comments;
alter table employees
drop unused columns;
alter table employees
set unused column email;
alter table employees
drop unused columns;
d. alter table employees
set unused column comments;
alter table employees
set unused column email;
alter table employees
drop unused columns;
answer: d
explanation:
<http://certcities.com/certs/oracle/columns/story.asp?editorialsid=36>
reorganizing columns
while it has been possible to add new columns to an existing table in oracle for quite a while now,
until oracle 8i it was not possible to drop or remove a column from a table without dropping the
table first and then re-creating it without the column you wanted to drop. with this method, you
needed to perform an export before dropping the table and then an import after creating it without
the column, or issue a create table ... as select statement with all of its associated headaches
(see above).
in oracle 8i, we now have a way of marking columns unused and then dropping them at a later
date. oracle is a little behind the times here compared to sql server, which does not require a
complete rebuild of the table after dropping the column, but i'm just happy that i have the feature
and hope that they'll improve it in oracle 9i.
to get rid of columns with this new method, the first step is to issue the alter table <tablename>
set unused column <columnname>, which sets the column to no longer be used within the table
but does not change the physical structure of the table. all rows physically have the column's data
stored, and a physical place is kept for the column on disk, but the column cannot be queried
and, for all intents and purposes, does not exist. in essence, the column is flagged to be dropped,
though you cannot reverse setting the column to unused.
it is possible to set a number of columns unused in a table before actually dropping them. the
overhead of setting columns unused is fairly minimal and allows you to continue to operate
normally, except that any actions on the unused columns will result in an error. the next step,
when you have configured all the columns you want to get rid of as unused, is to actually
physically reorganize the table so that the data for the unused columns is no longer on disk and
the columns are really gone. this is done by issuing the command alter table … drop column.
physically dropping a column in an oracle table is a process that will prevent anyone from
accessing the table while the removal of the column(s) is processed. the commands that will
affect an actual removal of a column are:
alter table <tablename> drop column <columnname>
alter table <tablename> drop unused columns
both commands do the same thing. this means that if you have marked two or three columns as unused in a table and then decide to drop just one of them with alter table … drop column, you will also drop every column marked as unused, whether you want to or not. alter table … drop column can likewise be used on a column that has not previously been marked as unused when you simply want to drop it right away, but any unused columns will be dropped along with it, because that is the way it works.
if constraints depend on the column being dropped, you can use the cascade constraints option
to deal with them; if you also want to explicitly mark views, triggers, stored procedures or other
stored program units referencing the parent table and force them to be recompiled the next time
they are used, you can also specify the invalidate option.
a problem could arise if you issue the drop column command and the instance crashes during the
rebuild of the table. in this case, the table will be marked as invalid and will not be available to
anyone. oracle forces you to complete the drop column operation before the table can be used
again. to get out of this situation, issue the command alter table … drop columns continue. this
will complete the process and mark the table as valid upon completion.

<http://whizlabs.com/ocp/ocp-1z0-007-tips.html>

34) oracle allows columns to be dropped with the ‘alter table … drop column’ command. dropping columns generally takes a lot of time, so a faster alternative is to mark the column as unused with the ‘set unused column’ clause and drop the unused columns later.

21. your company hired joe, a dba who will be working from home. joe needs to have the ability to
start the database remotely.
you created a password file for your database and set remote_login_passwordfile = exclusive in
the parameter file. which command adds joe to the password file, allowing him remote dba
access?
a. grant dba to joe;
b. grant sysdba to joe;
c. grant resource to joe;
d. orapwd file=orapwdprod user=joe password=dba
answer: b
explanation:
the orapwd utility only creates the password file and sets the password for sys; it has no user parameter, so option d is not even valid syntax. with the password file already created and remote_login_passwordfile set to exclusive, it is the grant sysdba to joe statement that adds joe to the password file and gives him remote dba access.
oracle9i database administrator’s guide release 2 (9.2) march 2002 part no. a96521-01
(a96521.pdf) 1-20
using orapwd
when you invoke the password file creation utility without supplying any parameters, you receive
a message indicating the proper use of the command as shown in the following sample output:
orapwd
usage: orapwd file=<fname> password=<password> entries=<users>
where
file - name of password file (mand),
password - password for sys (mand),
entries - maximum number of distinct dbas and opers (opt),
there are no spaces around the equal-to (=) character.
the following command creates a password file named acct.pwd that allows up to 30 privileged users with different passwords. in this example, the file is initially created with the password secret for users connecting as sys:
orapwd file=acct.pwd password=secret entries=30
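putting it together, the sequence could look like this (the password, entries value, and tns alias prod are placeholders):

-- once, on the server, to create the password file:
orapwd file=orapwdprod password=secret entries=10

-- in the parameter file:
remote_login_passwordfile = exclusive

-- then, connected as sys, add joe to the password file:
grant sysdba to joe;

-- joe can now start the database remotely:
connect joe/joes_password@prod as sysdba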

22. which command can you use to display the date and time in the form
17:45:01 jul-12-2000 using the default us7ascii character set?
a. alter system set nls_date_format='hh24:mi:ss mon-dd-yyyy';
b. alter session set date_format='hh24:mi:ss mon-dd-yyyy';
c. alter session set nls_date_format='hh24:mi:ss mon-dd-yyyy';
d. alter system set nls_date_format='hh:mi:ss mon-dd-yyyy';
answer: c
explanation:
<http://www.idera.com/support/documentation/oracle_date_format.htm>
alter session set nls_date_format = <date_format>
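for example, within the current session:

alter session set nls_date_format = 'hh24:mi:ss mon-dd-yyyy';
select sysdate from dual;   -- sysdate is now displayed in the requested format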

23. when preparing to create a database, you should be sure that you have sufficient disk space
for your database files. when calculating the space requirements you need to consider that some
of the files may be multiplexed.
which two types of files should you plan to multiplex? (choose two.)
a. data files
b. control file
c. password file
d. online redo log files
e. initialization parameter file
answer: bd
explanation:
multiplex: files are stored at more than one location.

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01 (a96524.pdf) 3-22
multiplexed control files
as with online redo log files, oracle enables multiple, identical control files to be open concurrently
and written for the same database.
by storing multiple control files for a single database on different disks, you can safeguard against
a single point of failure with respect to control files. if a single disk that contained a control file
crashes, then the current instance fails when oracle attempts to access the damaged control file.
however, when other copies of the current control file are available on different disks, an instance
can be restarted
easily without the need for database recovery.
if all control files of a database are permanently lost during operation, then the instance is aborted
and media recovery is required. media recovery is not straightforward if an older backup of a
control file must be used because a current copy is not available. therefore, it is strongly
recommended that you adhere to the following practices:
_ use multiplexed control files with each database
_ store each copy on a different physical disk
_ use operating system mirroring
_ monitor backups

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01 (a96524.pdf) 1-7
redo log files
to protect against a failure involving the redo log itself, oracle allows a multiplexed redo log so
that two or more copies of the redo log can be maintained on different disks.
the information in a redo log file is used only to recover the database from a system or media
failure that prevents database data from being written to the datafiles. for example, if an unexpected power outage terminates database operation, then data
in memory cannot be written to the datafiles, and the data is lost. however, lost data can be
recovered when the database is opened, after power is restored. by applying the information in
the most recent redo log files to the database datafiles, oracle restores the database to the time
at which the power failure occurred.

24. which is true when considering the number of indexes to create on a table?
a. every column that is updated requires an index.
b. every column that is queried is a candidate for an index.
c. columns that are part of a where clause are candidates for an index.
d. on a table used in a data warehouse application there should be no indexes.
answer: c
no comment

25. you issue the following queries to obtain information about the redo log files:
sql> select group#, type, member from v$logfile;
you immediately issue this command:
alter database drop logfile member
'/databases/db01/oradata/u03/log2b.rdo';
why does the command fail?
a. each online redo log file group must have two members.
b. you cannot delete any members of online redo log file groups.
c. you cannot delete any members of the current online redo log file group
d. you must delete the online redo log file in the operating system before issuing the alter
database command.
answer: c
explanation:
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01 (a96524.pdf) 9-41
drop logfile clause
use the drop logfile clause to drop all members of a redo log file group. specify a redo log file
group as indicated for the add logfile member clause.
- to drop the current log file group, you must first issue an alter system switch logfile statement.
- you cannot drop a redo log file group if it needs archiving.
- you cannot drop a redo log file group if doing so would cause the redo thread to contain less
than two redo log file groups.
see also: alter system on page 10-22 and "dropping log file members: example" on page 9-54

if you execute alter system switch logfile first, the current log group changes and the member can then be dropped, so answer c explains the failure.
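a sketch of the workaround when the member belongs to the current group (the file name is the one from the question):

-- make another group current first:
alter system switch logfile;

-- then the member can be dropped:
alter database drop logfile member '/databases/db01/oradata/u03/log2b.rdo';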

26. the orders table has a constant transaction load 24 hours a day, so down time is not allowed.
the indexes become fragmented. which statement is true?
a. the index needs to be dropped, and then re-created.
b. the resolution of index fragmentation depends on the type of index.
c. the index can be rebuilt while users continue working on the table.
d. the index can be rebuilt, but users will not have access to the index during this time.
e. the fragmentation can be ignored because oracle resolves index fragmentation by means of a
freelist.
answer: c
explanation:
<http://www.dbatoolbox.com/wp2001/spacemgmt/reorg_defrag_in_o8i_fo.pdf>
oracle8i can create an index online; users can continue to update and query the base table while
the index is being created. no table or row locks are held during the creation operation. changes
to the base table and index during the build are recorded in a journal table and merged into the
new index at the completion of the operation, as illustrated in figure 1. these online operations
also support parallel index creation and can act on some or all of the partitions of a partitioned
index. online index creation improves database availability by providing users full access to data
in the base table during an index build.
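for example (the index name is hypothetical), the fragmented index can be rebuilt while dml continues:

alter index orders_pk rebuild online;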

27. the dba can structure an oracle database to maintain copies of online redo log files to avoid
losing database information.
which three are true regarding the structure of online redo log files? (choose three.)
a. each online redo log file in a group is called a member.
b. each member in a group has a unique log sequence number.
c. a set of identical copies of online redo log files is called an online redo log group.
d. the oracle server needs a minimum of three online redo log file groups for the normal operation
of a database.
e. the current log sequence number of a redo log file is stored in the control file and in the header
of all data files.
f. the lgwr background process concurrently writes the same information to all online and archived
redo log files in a group.
answer: ace
explanation:
<http://www.siue.edu/~dbock/cmis565/ch7-redo_log.htm>

each redo log group has identical redo log files. the lgwr concurrently writes identical information
to each redo log file in a group. the oracle server needs a minimum of two online redo log groups
for normal database operation. thus, if disk 1 crashes as shown in the figure above, none of the
redo log files are truly lost because there are duplicates. if the group has more members, you
need more disk drives!

if possible, you should separate the online redo log files from the archive log files as this reduces
contention for the i/o buss path between the arcn and lgwr background processes. you should
also separate datafiles from the online redo log files as this reduces lgwr and dbwn contention. it
also reduces the risk of losing both datafiles and redo log files if a disk crash occurs.

redo log files in a group are called members. each group member has identical log sequence
numbers and is the same size - they cannot be different sizes. the log sequence number is
assigned by the oracle server as it writes to a log group and the current log sequence number is
stored in the control files and in the header information of all datafiles - this enables
synchronization between datafiles and redo log files.

plus: lgwr writes only to the online redo log members of the current group, not to archived redo log files, so f is wrong; and because members of a group share an identical log sequence number, b is wrong as well.

28. which type of segment is used to improve the performance of a query?


a. index
b. table
c. temporary
d. boot strap
answer: a
explanation:
<http://vsbabu.org/oracle/sect16.html>

the linked page shows a segment_type = 'INDEX' condition, so an index is indeed a segment, and it is the segment type whose purpose is to make queries faster.

29. you have just accepted the position of dba with a new company. one of the first things you
want to do is examine the performance of the database. which tool will help you to do this?
a. recovery manager
b. oracle enterprise manager
c. oracle universal installer
d. oracle database configuration assistant
answer: b
explanation:
<http://www.orafaq.com/faqoem.htm>
what is oem (oracle enterprise manager)?
oem is a set of system management tools provided by oracle for managing the oracle
environment. it provides tools to automate tasks (both one-time and repetitive in nature) to take
database administration a step closer to "lights out" management.
what are the components of oem?
oracle enterprise manager (oem) has the following components:
management server (oms): middle tier server that handles communication with the intelligent
agents. the oem console connects to the management server to monitor and configure the oracle
enterprise.
console: this is a graphical interface from where one can schedule jobs, events, and monitor the
database. the console can be opened from a windows workstation, unix xterm (oemapp
command) or web browser session (oem_webstage).
intelligent agent (oia): the oia runs on the target database and takes care of the execution of jobs
and events scheduled through the console.
data gatherer (dg): the dg runs on the target database and takes care of gathering database
statistics over time.

30. which steps should you take to gather information about checkpoints?
a. set the log_checkpoints_to_alert initialization parameter to true.
monitor the alert log file.
b. set the log_checkpoint_timeout parameter.
force a checkpoint by using the fast_start_mttr_target parameter.
monitor the alert log file.
c. set the log_checkpoint_timeout parameter.
force a log switch by using the command alter system force logswitch.
force a checkpoint by using the command alter system force checkpoint.
monitor the alert log file.
d. set the fast_start_mttr_target parameter to true.
force a checkpoint by using the command alter system force checkpoint.
monitor the alert log file.
answer: a
explanation:
<http://infoboerse.doag.de/mirror/frank/glossary/faqglosc.htm>

oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01 (a96524.pdf) 3-21
control files also record information about checkpoints. every three seconds, the
checkpoint process (ckpt) records information in the control file about the
checkpoint position in the online redo log. this information is used during database
recovery to tell oracle that all redo entries recorded before this point in the online
redo log group are not necessary for database recovery; they were already written
to the datafiles.

checkpoint
a checkpoint occurs when the dbwr (database writer) process writes all modified buffers in the
sga buffer cache to the database data files. checkpoints occur after (not during) every redo log
switch and also at intervals specified by means of database parameters. set parameter
log_checkpoints_to_alert=true to observe checkpoint start and end times in the database alert
log. checkpoints can be forced with the "alter system checkpoint;" command.

dbwr
dbwr (oracle database writer) is an oracle background process created when you start a
database instance. the dbwr writes data from the sga to the oracle database files. when the sga
buffer cache fills, the dbwr process selects buffers using an lru algorithm and writes them to disk.

sga
the system global area (sga) is an area of memory allocated when an oracle instance starts up.
its size and function are controlled by init.ora (initialization) parameters. dbas are mostly
concerned with this area.
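
a minimal sketch of answer a, assuming the instance runs with an spfile (with a text pfile, edit the
file and restart instead); the alert log is alert_<sid>.log in background_dump_dest:
alter system set log_checkpoints_to_alert = true scope = both;
alter system checkpoint;   -- optionally force a checkpoint so an entry appears right away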

31. examine the sql statement:


create tablespace user_data
datafile '/u01/oradata/user_data_0l.dbf' size 100m
locally managed uniform size 1m
automatic segment space management;
which part of the tablespace will be of a uniform size of 1 mb?
a. extent
b. segment
c. oracle block
d. operating system block
answer: a
explanation:
extent_management_clause
the extent_management_clause lets you specify how the extents of the tablespace will be
managed.
- specify local if you want the tablespace to be locally managed. locally managed tablespaces
have some part of the tablespace set aside for a bitmap. this is the default.
- autoallocate specifies that the tablespace is system managed. users cannot specify an extent
size. this is the default if the compatible initialization parameter is set to 9.0.0 or higher.
- uniform specifies that the tablespace is managed with uniform extents of size bytes. use k or m
to specify the extent size in kilobytes or megabytes. the default size is 1 megabyte.
note: once you have specified extent management with this clause, you can change extent
management only by migrating the tablespace.
remark: one tablespace has many segments. a segment is the space allocated for a database
object, so each index has its own segment. if the segment of an index fills up, a new extent is
allocated for that segment. with uniform size 1m, every extent allocated in this tablespace is
exactly 1 mb, no matter which segment requests it and how much empty space is left in its
previous extents. the consequence is that the free space in the tablespace does not become
fragmented.
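
to see the effect, a query like the following (assuming the user_data tablespace from the question
exists and already holds some segments) should report every extent as exactly 1 mb:
select segment_name, extent_id, bytes
from dba_extents
where tablespace_name = 'USER_DATA'
order by segment_name, extent_id;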

32. you are in the process of dropping the building_location column from the hr.employees table.
the table has been marked invalid until the operation completes. suddenly the instance fails. upon
startup, the table remains invalid.
which step(s) should you follow to complete the operation?
a. continue with the drop column command:
alter table hr.employees drop columns continue;
b. truncate the invalid column to delete remaining rows in the column and release unused space
immediately.
c. use the export and import utilities to remove the remainder of the column from the table and
release unused space.
d. mark the column as unused and drop the column:
alter table hr.employees
set unused column building_location;
alter table hr.employees
drop unused column building_location
cascade constraints;
answer: d
explanation:
drop unused columns clause
specify drop unused columns to remove from the table all columns currently marked as unused.
use this statement when you want to reclaim the extra disk space from unused columns in the
table. if the table contains no unused columns, then the statement returns with no errors.
column specify one or more columns to be set as unused or dropped. use the column keyword
only if you are specifying only one column. if you specify a column list, then it cannot contain
duplicates.
cascade constraints specify cascade constraints if you want to drop all foreign key constraints
that refer to the primary and unique keys defined on the dropped columns, and drop all
multicolumn constraints defined on the dropped columns. if any constraint is referenced by
columns from other tables or remaining columns in the target table, then you must specify
cascade constraints. otherwise, the statement aborts and an error is returned.
invalidate: the invalidate keyword is optional. oracle automatically invalidates all dependent
objects, such as views, triggers, and stored program units. object invalidation is a recursive
process. therefore, all directly dependent and indirectly dependent objects are invalidated.
however, only local dependencies are invalidated, because oracle manages remote
dependencies differently from local dependencies. an object invalidated by this statement is
automatically revalidated when next referenced. you must then correct any errors that exist in that
object before referencing it.
checkpoint specify checkpoint if you want oracle to apply a checkpoint for the drop column
operation after processing integer rows; integer is optional and must be greater than zero. if
integer is greater than the number of rows in the table, then oracle applies a checkpoint after all
the rows have been processed. if you do not specify integer, then oracle sets the default of 512.
checkpointing cuts down the amount of undo logs accumulated during the drop column operation
to avoid running out of rollback segment space. however, if this statement is interrupted after a
checkpoint has been applied, then the table remains in an unusable state. while the table is
unusable, the only operations allowed on it are drop table, truncate table, and alter table drop
columns continue (described in sections that follow). you cannot use this clause with set unused,
because that clause does not remove column data.
drop columns continue clause
specify drop columns continue to continue the drop column operation from the point at which it
was interrupted. submitting this statement while the table is in a valid state results in an error.
see also: oracle9i database concepts for more information on dependencies

if there is a trick in this question, i couldn't find it. the quote above says that while the table is
unusable the only allowed operations are drop table, truncate table, and alter table ... drop
columns continue, so i would answer a.
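
as a sketch (table and column names taken from the question): option a resumes the interrupted
operation, while the set unused / drop unused sequence of option d is meant for a table that is still
in a valid state:
alter table hr.employees drop columns continue;                  -- resume an interrupted drop (option a)
alter table hr.employees set unused column building_location;   -- mark now, reclaim later
alter table hr.employees drop unused columns checkpoint 1000;   -- reclaim the space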

33. based on the following profile limits, if a user attempts to log in and fails after five tries, how
long must the user wait before attempting to log in again?
alter profile default limit
password_life_time 60
password_grace_time 10
password_reuse_time 1800
password_reuse_max unlimited
failed_login_attempts 5
password_lock_time 1/1440
password_verify_function verify_function;
a. 1 minute
b. 5 minutes
c. 10 minutes
d. 14 minutes
e. 18 minutes
f. 60 minutes
answer: a
explanation:
sql 14-73
password_parameters
failed_login_attempts specify the number of failed attempts to log in to the user account before
the account is locked.
password_life_time specify the number of days the same password can be used for
authentication. the password expires if it is not changed within this period, and further
connections are rejected.
password_reuse_time specify the number of days before which a password cannot be reused. if
you set password_reuse_time to an integer value, then you must set password_reuse_max to
unlimited.
password_reuse_max specify the number of password changes required before the current
password can be reused. if you set password_reuse_max to an integer value, then you must set
password_reuse_time to unlimited.
password_lock_time specify the number of days an account will be locked after the specified
number of consecutive failed login attempts.
password_grace_time specify the number of days after the grace period begins during which a
warning is issued and login is allowed. if the password is not changed during the grace period,
the password expires.
password_verify_function the password_verify_function clause lets a pl/sql password complexity
verification script be passed as an argument to the create profile statement. oracle provides a
default script, but you can create your own routine or use third-party software instead.
- for function, specify the name of the password complexity verification routine.
- specify null to indicate that no password verification is performed.
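
password_lock_time is expressed in days, and a day has 1440 minutes, so 1/1440 day = 1 minute,
which gives answer a. the limits currently in force can be checked with a query such as:
select resource_name, limit
from dba_profiles
where profile = 'DEFAULT'
and resource_name in ('FAILED_LOGIN_ATTEMPTS', 'PASSWORD_LOCK_TIME');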

34. you create a new table named departments by issuing this statement:
create table departments(
department_id number(4),
department_name varchar2(30),
manager_id number(6),
location_id number(4))
storage(initial 200k next 200k
pctincrease 0 minextents 1 maxextents 5);
you realize that you failed to specify a tablespace for the table. you issue these queries:
sql> select username, default_tablespace, temporary_tablespace
  2> from user_users;
in which tablespace was your new departments table created?
a. temp
b. system
c. sample
d. user_data
answer: c
explanation:
??? a diagram with the query output is missing. the table is created in the default_tablespace of
the user, which the exhibit presumably shows as sample.
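
a quick way to confirm where the table actually ended up (assuming the departments table from
the question exists in your schema):
select table_name, tablespace_name
from user_tables
where table_name = 'DEPARTMENTS';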

35. an oracle instance is executing in a nondistributed configuration. the instance fails because of
an operating system failure.
which background process would perform the instance recovery when the database is reopened?
a. pmon
b. smon
c. reco
d. arcn
e. ckpt
answer: b
explanation:
smon (oracle system monitor)
smon is an oracle background process created when you start a database instance. the smon
process performs instance recovery, cleans up after dirty shutdowns and coalesces adjacent free
extents into larger free extents.
pmon (oracle process monitor)
pmon is an oracle background process created when you start a database instance. the pmon
process will free up resources if a user process fails (eg. release database locks).
reco (oracle recoverer process)
reco is an oracle background process created when you start an instance with
distributed_transactions= in the init.ora file. the reco process will try to resolve in-doubt
transactions across oracle distributed databases.
arch (oracle archiver process)
arch is an oracle background process created when you start an instance in archive log mode.
the arch process will archive on-line redo log files to some backup media.
ckpt (oracle checkpoint process)
ckpt is the oracle background process that timestamps all datafiles and control files to indicate
that a checkpoint has occurred.

36. which type of table is usually created to enable the building of scalable applications, and is
useful for large tables that can be queried or manipulated using several processes concurrently?
a. regular table
b. clustered table
c. partitioned table
d. index-organized table
answer: c
what is scalability?
in the case of web applications, scalability is the capacity to serve additional users or transactions
without fundamentally altering the application's architecture or program design. if an application is
scalable, you can maintain steady performance as the load increases simply by adding additional
resources such as servers, processors or memory.
cluster
a cluster is an oracle object that allows one to store related rows from different tables in the same
data block. table clustering is very seldom used by oracle dbas and developers.

ans is c -> access by several concurrent processes works better when the partitions of a
partitioned table are spread across different tablespaces and files.

37. how do you enable the hr_clerk role?


a. set role hr_clerk;
b. create role hr_clerk;
c. enable role hr_clerk;
d. set enable role hr_clerk;
answer: a
explanation:
sql 18-47
set role
purpose
use the set role statement to enable and disable roles for your current session.

in the identified by password clause, specify the password for a role. if the role has a password,
then you must specify the password to enable the role.

d is out of the question - bad syntax. i would go for a: if the role does not have a password, this
command works as is.
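
a small sketch; the password is made up, only the hr_clerk role name comes from the question:
set role hr_clerk;                              -- role without a password
set role hr_clerk identified by clerk_pwd;      -- role protected by a password
set role all except hr_clerk;                   -- enable every other granted role
set role none;                                  -- disable all roles for the session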

38. your database is currently configured with the database character set to
we8iso8859p1 and national character set to af16utf16.
business requirements dictate the need to expand language requirements beyond the
current character set, for asian and additional western european languages, in the
form of customer names and addresses.
which solution saves space storing asian characters and maintains consistent character
manipulation performance?
a. use sql char data types and change the database character set to utf8.
b. use sql nchar data types and change the national character set to utf8.
c. use sql char data types and change the database character set to af32utf8.
d. use sql nchar data types and keep the national character set to af16utf16.
answer: d
explanation:
sql nchar
supporting multilingual data often means using unicode. unicode is a universal character
encoding scheme that allows you to store information from any major language using a single
character set. unicode provides a unique code value for every character, regardless of the
platform, program, or language. for many companies with legacy systems making the
commitment to migrating their entire database to support unicode is not practical. an alternative
to storing all data in the database as unicode is to use the sql nchar datatypes. unicode
characters can be stored in columns of these datatypes regardless of the setting of the database
character set. the nchar datatype has been redefined in oracle9i to be a unicode datatype
exclusively. in other words, it stores data in the unicode encoding only. the national character set
supports utf-16 and utf-8 in the following encodings:
- al16utf16 (default)
- utf8
sql nchar datatypes (nchar, nvarchar2, and nclob) can be used in the same way as the sql char
datatypes. this allows the inclusion of unicode data in a non unicode database. some of the key
benefits for using the nchar datatype versus having the entire database as unicode include:
you only need to support multilingual data in a limited number of columns - you can add columns
of the sql nchar datatypes to existing tables or new tables to support multiple languages
incrementally. or you can migrate specific columns from sql char datatypes to sql nchar
datatypes easily using the alter table modify column command.
example:
alter table emp modify (ename nvarchar2(10));
if you are building a packaged application that will be sold to customers, then you may want to
build the application using sql nchar datatypes - this is because with the sql nchar datatype the
data is always stored in unicode, and the length of the data is always specified in utf-16 code
units. as a result, you need only test the application once, and your application will run on your
customer databases regardless of the database character set.
you want the best possible performance - if your existing database character set is single-byte
then extending it with sql nchar datatypes may offer better performance than migrating the entire
database to unicode.
your application's native environment is ucs-2 or utf-16 - a unicode database must run as utf-8.
this means there will be conversion between the client and database. by using the nchar
encoding al16utf16, you can eliminate this conversion.
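
to check the two character sets mentioned in the question, and to add unicode columns without
touching the database character set (the table below is invented):
select parameter, value
from nls_database_parameters
where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
create table customer_names (
customer_id number,
name nvarchar2(100)   -- stored in the national character set
);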

39. which three are the physical structures that constitute the oracle database? (choose three)
a. table
b. extent
c. segment
d. data file
e. log file
f. tablespace
g. control file
answer: deg
explanation:
<http://www.adp-gmbh.ch/ora/notes.html>
physical and logical elements
an oracle server consists of an oracle database and an oracle instance. if you don't want
technical terms, you can think of an instance as the software, and the database as the data that
said software operates on. more technically, the instance is the combination of background
processes and memory buffers (the sga).
the data (of the database) resides in datafiles. because these datafiles are visible (as files) they're
called physical structures as opposed to logical structures.
one or more datafiles make up a tablespace.
besides datafiles, there are two other types of physical structures: redo log files and control files.
the logical structures are tablespaces, schema objects, data blocks, extents, and segments.
control files
an oracle database must have at least one control file, but usually (for backup and recovery
reasons) it has more than one (all of which are exact copies of one control file). the control file
contains a number of important pieces of information that the instance needs to operate the
database. the following pieces of information are held in a control file: the names (os paths) of all
datafiles that the database consists of, the name of the database, the timestamp of when the
database was created, the checkpoint (all database changes prior to that checkpoint are saved in
the datafiles) and information for rman.
when a database is mounted, its control file is used to find the datafiles and redo log files for that
database. because the control file is so important, it is imperative to back up the control file
whenever a structural change is made in the database.
redo log
whenever something is changed in a datafile, oracle records it in a redo log. the name redo log
indicates its purpose: when the database crashes, oracle can redo all changes on datafiles, which
will take the database data back to the state it was in when the last redo record was written. use
v$log, v$logfile, v$log_history and v$thread to find information about the redo log of your
database.
each redo log file belongs to exactly one group (of which at least two must exist). exactly one of
these groups is the current group (which can be queried using the status column of v$log).
oracle uses that current group to write the redo log entries. when the group is full, a log switch
occurs, making another group the current one. each log switch causes a checkpoint; however,
the converse is not true: a checkpoint does not cause a redo log switch.

i believe the website - data files, redo log files and control files are the files on disk, so they are
the physical structures, and these are the basic ones -> d e g
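
the three physical structures can be listed directly from the corresponding views:
select name from v$datafile;      -- data files
select member from v$logfile;     -- online redo log files
select name from v$controlfile;   -- control files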

40. you have a database with the db_name set to prod and oracle_sid set to prod.
these files are in the default location for the initialization files:
• init.ora
• initprod.ora
• spfile.ora
• spfileprod.ora
the database is started with this command:
sql> startup
which initialization files does the oracle server attempt to read, and in which order?
a. init.ora, initprod.ora, spfileprod.ora
b. spfile.ora, spfileprod.ora, initprod.ora
c. spfileprod.ora, spfile.ora, initprod.ora
d. initprod.ora, spfileprod.ora, spfile.ora
answer: c
explanation:
http://www.trivadis.ch/publikationen/e/spfile_and_initora.en.pdf
up to version 8i, oracle traditionally stored initialization parameters in a text file init.ora (pfile). with
oracle9i, server parameter files (spfile) can also be used. an spfile can be regarded as a
repository for initialization parameters which is located on the database server. spfiles are small
binary files that cannot be edited. editing spfiles corrupts the file and either the instance fails to
start or an active instance may crash.

at database startup, if no pfile is specified at the os-dependent default location
($oracle_home/dbs under unix, $oracle_home\database under nt), the startup command
searches for:
1. spfile${oracle_sid}.ora
2. spfile.ora
3. init${oracle_sid}.ora
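
to see which file the running instance actually used, and to convert between the two formats (the
pfile path is only an example):
show parameter spfile                               -- an empty value means a pfile was used
create spfile from pfile;                           -- build spfile<sid>.ora from init<sid>.ora
create pfile = '/tmp/initprod.ora' from spfile;     -- export the spfile back to a text pfile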

41. you are in the planning stages of creating a database. how should you plan to influence the
size of the control file?
a. specify size by setting the control_files initialization parameter instead of using the oracle
default value.
b. use the create controlfile command to create the control file and define a specific size for the
control file.
c. define the maxlogfiles, maxlogmembers, maxloghistory, maxdatafiles, maxinstances
parameters in the create database command.
d. define specific values for the maxlogfiles, maxloggroups, maxloghistory, maxdatafiles, and
maxinstances parameters within the initialization parameter file.
answer: c (d?)
explanation:
control_files
is a string parameter -> it only names the control files -> it does not influence their size

sql 13-15
create controlfile
use the create controlfile statement to re-create a control file in one of the following cases:
- all copies of your existing control files have been lost through media failure.
- you want to change the name of the database.
- you want to change the maximum number of redo log file groups, redo log file members,
archived redo log files, datafiles, or instances that can concurrently have the database mounted
and open.

<http://coffee.kennesaw.edu/tests/oracle/ch3.doc>
create database
19. which clauses in the create database command specify limits for the database?
the control file size depends on the following limits (maxlogfiles, maxlogmembers, maxloghistory,
maxdatafiles, maxinstances), because oracle pre-allocates space in the control file.
maxlogfiles - specifies the maximum number of redo log groups that can ever be created in the
database.
maxlogmembers - specifies the maximum number of redo log members (copies of the redo logs)
for each redo log group.
maxloghistory - is used only with parallel server configuration. it specifies the maximum number
of archived redo log files for automatic media recovery.
maxdatafiles - specifies the maximum number of data files that can be created in this database.
data files are created when you create a tablespace, or add more space to a tablespace by
adding a data file.
maxinstances - specifies the maximum number of instances that can simultaneously mount and
open this database.
if you want to change any of these limits after the database is created, you must re-create the
control file.

??? i would go for c: the limits are given in the create database statement, and oracle pre-allocates
space for them in the control file.
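
a sketch of answer c; every name, size and limit below is invented, the point is only where the
max... clauses go (compare with the create database statement in question 52):
create database prod
datafile '/u01/oradata/prod/system01.dbf' size 300m
logfile group 1 ('/u01/oradata/prod/redo01.log') size 50m,
group 2 ('/u01/oradata/prod/redo02.log') size 50m
maxlogfiles 16
maxlogmembers 4
maxloghistory 400
maxdatafiles 200
maxinstances 1;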

42. when is the sga created in an oracle database environment?


a. when the database is created
b. when the instance is started
c. when the database is mounted
d. when a user process is started
e. when a server process is started
answer: b
explanation:
<http://www.dbaoncall.net/references/ht_startup_shutdown_db.html>

to start up an oracle database, use server manager (svrmgrl) and the startup command:


startup nomount - starts instance: allocates memory for sga, starts background processes.

43. you need to enforce these two business rules:


1. no two rows of a table can have duplicate values in the specified column.
2. a column cannot contain null values.
which type of constraint ensures that both of the above rules are true?
a. check
b. unique
c. not null
d. primary key
e. foreign key
answer: d
no comment

44. you decided to use oracle managed files (omf) for the control files in your database.
which initialization parameter do you need to set to specify the default location for control files if
you want to multiplex the files in different directories?
a. db_files
b. db_create_file_dest
c. db_file_name_convert
d. db_create_online_log_dest_n
answer: d
explanation:
<http://www.orafaq.net/parms/>

parameter name: db_files


description: max allowable # db files

parameter name: db_create_file_dest


description: default database location

<http://www.orafaq.net/archive/oracle-l/2002/07/08/102823.htm>
db_file_name_convert converts the db file name
db_file_name_convert=('/vobs/oracle/dbs','/fs2/oracle/stdby')

<http://www.oracle-base.com/articles/9i/oraclemanagedfiles.asp>
managing redo log files using omf
when using omf for redo logs the db_create_online_log_dest_n parameters in the init.ora file
decide on the locations and numbers of logfile members. for example:
db_create_online_log_dest_1 = c:\oracle\oradata\tsh1
db_create_online_log_dest_2 = d:\oracle\oradata\tsh1

45. which three statements about the oracle database storage structure are true? (choose three)
a. a data block is a logical structure
b. a single data file can belong to multiple tablespaces.
c. when a segment is created, it consists of at least one extent.
d. the data blocks of an extent may or may not belong to the same file.
e. a tablespace can consist of multiple data files, each from a separate disk.
f. within a tablespace, a segment cannot include extents from more than one file.
answer: ace
explanation:
a is ok, see q39
b is false:
oracle7 documentation, server concepts, 4-10
a tablespace in an oracle database consists of one or more physical datafiles. a datafile can be
associated with only one tablespace, and only one database.
c is ok:
oracle7 documentation, server concepts, 3-10
an extent is a logical unit of database storage space allocation made up of a number of
contiguous data blocks. each segment is composed of one or more extents.
d is false:
oracle7 documentation, server concepts, 3-3
oracle allocates space for segments in extents. therefore, when the existing extents of a segment
are full, oracle allocates another extent for that segment. because extents are allocated as
needed, the extents of a segment may or may not be contiguous on disk. the segments also can
span files, but the individual extents cannot.
e is ok:
oracle7 documentation, server concepts, 4-3
each tablespace in an oracle database is comprised of one or more operating system files called
datafiles. a tablespace’s datafiles physically store the associated database data on disk.
f is false:
see ans for d

46. when an oracle instance is started, background processes are started.


background processes perform which two functions? (choose two)
a. perform i/o
b. lock rows that are not data dictionary rows
c. monitor other oracle processes
d. connect users to the oracle instance
e. execute sql statements issued through an application
answer: ac
explanation:
oracle9i database administrator’s guide release 2 (9.2) march 2002 part no. a96521-01
(a96521.pdf) 5-11
to maximize performance and accommodate many users, a multiprocess oracle system uses
some additional processes called background processes. background processes consolidate
functions that would otherwise be handled by multiple oracle programs running for each user
process. background processes
asynchronously perform i/o and monitor other oracle processes to provide increased parallelism
for better performance and reliability.

47. the user smith created the sales history table. smith wants to find out the following information
about the sales history table:
• the size of the initial extent allocated to the sales history data segment
• the total number of extents allocated to the sales history data segment
which data dictionary view(s) should smith query for the required information?
a. user_extents
b. user_segments
c. user_object_size
d. user_object_size and user_extents
e. user_object_size and user_segments
answer: b
explanation:
sql> desc user_segments
name null? type
----------------------------------------- -------- --------------
segment_name varchar2(81)
partition_name varchar2(30)
segment_type varchar2(18)
tablespace_name varchar2(30)
bytes number
blocks number
extents number
initial_extent number
next_extent number
min_extents number
max_extents number
pct_increase number
freelists number
freelist_groups number
buffer_pool varchar2(7)
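
for the question's table smith would run something like this (assuming the table was created as
sales_history, so the segment name is stored in upper case):
select segment_name, initial_extent, extents
from user_segments
where segment_name = 'SALES_HISTORY';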

48. a table is stored in a data dictionary managed tablespace.


which two columns are required from dba_tables to determine the size of the extent
when it extends? (choose two)
a. blocks
b. pct_free
c. next_extent
d. pct_increase
e. initial_extent
answer: bc (cd??)
explanation:
oracle 7 documentation, server concepts, 3-2

data blocks
at the finest level of granularity, oracle stores data in data blocks (also called logical blocks,
oracle blocks, or pages). one data block corresponds to a specific number of bytes of physical
database space on disk. you set the data block size for every oracle database when you create
the database. this data block size should be a multiple of the operating system’s block size within
the maximum limit. oracle data blocks are the smallest units of storage that oracle can use or
allocate.
in contrast, all data at the physical, operating system level is stored in bytes. each operating
system has what is called a block size. oracle requests data in multiples of oracle blocks, not
operating system
blocks. therefore, you should set the oracle block size to a multiple of the operating system block
size to avoid unnecessary i/o.

extents
the next level of logical database space is called an extent. an extent is a specific number of
contiguous data blocks that is allocated for storing a specific type of information.

segments
the level of logical database storage above an extent is called a segment. a segment is a set of
extents that have been allocated for a specific type of data structure, and that all are stored in the
same tablespace. for
example, each table’s data is stored in its own data segment, while each index’s data is stored in
its own index segment.
oracle allocates space for segments in extents. therefore, when the existing extents of a segment
are full, oracle allocates another extent for that segment. because extents are allocated as
needed, the extents of
a segment may or may not be contiguous on disk. the segments also can span files, but the
individual extents cannot.
oracle9i database administrator’s guide release 2 (9.2) march 2002 part no. a96521-01
(a96521.pdf) 11-11
initial
defines the size in bytes (k or m) of the first extent in the segment
next
defines the size of the second extent in bytes (k or m)
pctincrease
specifies the percent by which each extent, after the second (next) extent, grows
pctfree
see q19

oracle7 documentation, server administrator’s guide, 10 - 12


assume the following statement has been executed:
create table test_storage
(...)
storage (initial 100k next 100k
minextents 2 maxextents 5
pctincrease 50);
also assume that the initialization parameter db_block_size is set to 2k. the following table shows
how extents are allocated for the test_storage table. also shown is the value for the incremental
extent, as can be seen in the next column of the user_segments or dba_segments data dictionary
views:

extent# extent size value for next


1 100k or 50 blocks 100k
2 100k or 50 blocks ceil(100k*1.5)=150k
3 150k or 75 blocks ceil(150k*1.5)=228k
4 228k or 114 blocks ceil(228k*1.5)=342k
5 342k or 171 blocks ceil(342k*1.5)=516k

table 10 - 1 extent allocations

remark: 1.5 means that pctincrease is 50 percent, so if we multiply x by 1.5, that’s 50 percent
increase

if you change the next or pctincrease storage parameters with an alter statement (such as alter
table), the specified value replaces the current value stored in the data dictionary. for example,
the following statement modifies the next storage parameter of the test_storage table before the
third extent is allocated for the table:
alter table test_storage storage (next 500k);
as a result, the third extent is 500k when allocated, the fourth is (500k*1.5)=750k, and so on.

so the answer: the next_extent column gives the size of the next extent that oracle will allocate,
and pct_increase tells by how many percent the extents after that one grow, so these are the two
columns (c and d) needed to determine the size of the extent when the table extends.
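
the two columns can be read for a concrete table like this (the owner and table name are made up):
select next_extent, pct_increase
from dba_tables
where owner = 'SCOTT'
and table_name = 'TEST_STORAGE';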

49. bob is an administrator who has full dba privileges. when he attempts to drop the default
profile as shown below, he receives the error message shown. which option best explains this
error?
sql> drop profile sys.default;
drop profile sys.default
*
error at line 1:
ora-00950: invalid drop option
a. the default profile cannot be dropped.
b. bob requires the drop profile privilege.
c. profiles created by sys cannot be dropped.
d. the cascade option was not used in the drop profile command.
answer: a
explanation:
sql 16-94
restriction on dropping profiles: you cannot drop the default profile.

according to the error messages manual, profile is not even listed among the valid drop options:


error messages 3-10:
ora-00950 invalid drop option
cause: a drop command was not followed by a valid drop option, such as cluster, database link,
index, rollback segment, sequence, synonym, table, tablespace, or view.
action: check the command syntax, specify a valid drop option, then retry the statement.

50. which is a complete list of the logical components of the oracle database?
a. tablespaces, segments, extents, and data files
b. tablespaces, segments, extents, and oracle blocks
c. tablespaces, database, segments, extents, and data files
d. tablespaces, database, segments, extents, and oracle blocks
e. tablespaces, segments, extents, data files, and oracle blocks
answer: b
see q39

[51]. as sysdba you created the payclerk role and granted the role to bob. bob in turn attempts
to modify the authentication method of the payclerk role from salary to not identified, but when
doing so he receives the insufficient privilege error shown below.
sql> connect bob/crusader
connected.
sql> alter role payclerk not identified;
alter role payclerk not identified
*
error at line 1:
ora-01031: insufficient privileges
which privilege does bob require to modify the authentication method of the payclerk role?
a. alter any role
b. manage any role (doesn’t exist)
c. update any role (doesn’t exist)
d. modify any role (doesn’t exist)
answer: a

ora-01031 insufficient privileges


cause: an attempt was made to change the current username or password
without the appropriate privilege. this error also occurs if attempting to install
a database without the necessary operating system privileges.
action: ask the database administrator to perform the operation or grant the
required privileges.

oracle_9i_mix/administrators guide a96521.pdf page 25-6 managing user roles

to alter the authorization method for a role, you must have the alter any role
system privilege or have been granted the role with the admin option.
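
either of the following grants, issued as sysdba, would let bob run the alter role statement; the
second one is how the role could have been granted in the first place:
grant alter any role to bob;
grant payclerk to bob with admin option;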

[52]. you are going to re-create your database and want to reuse all of your existing database
files.
you issue the following sql statement:
create database sampledb
datafile
'/u01/oradata/sampledb/system0l.dbf'
size 100m reuse
logfile
group 1 ('/u01/oradata/sampledb/logla.rdo',
'/u02/oradata/sampledb/loglb.rdo')
size 50k reuse,
group 2 ('/u01/oradata/sampledb/log2a.rdo',
'/u02/oradata/sampledb/log2b.rdo')
size 50k reuse
maxlogfiles 5
maxloghistory 100
maxdatafiles 10;
why does the create database statement fail?
a. you have set maxlogfiles too low.
b. you omitted the controlfile reuse clause.
c. you cannot reuse the online redo log files. (wrong!!!! - you can reuse)
d. you cannot reuse the data file belonging to the system tablespace. (wrong!!!)
answer: b

- the initial control files of an oracle database are created when you issue the create
database statement. the names of the control files are specified by the control_files parameter in
the initialization parameter file used during database creation. the filenames specified in
control_files should be fully specified and are operating system specific. if control files with the
specified names currently exist at the time of database creation, you must specify the controlfile
reuse clause in the create database statement, or else an error occurs.
- the maxlogfiles minimum and maximum values are operating system dependent; 5 is enough
for the two groups created here, so a is not the reason for the failure.
- you can reuse an online redo log file.
- datafile clause specify one or more files to be used as datafiles. all these files become
part of the system tablespace.

[53]. evaluate this sql command:


grant references (employee_id),
update (employee_id, salary, commission_pct)
on hr.employees
to oe;
which three statements correctly describe what user oe can or cannot do? (choose three.)
a. cannot create a table with a constraint
b. can create a table with a constraint that references hr.employees
c. can update values of the employee_id, salary, and commission_pct columns
d. can insert values of the employee_id, salary, and commission_pct columns
e. cannot insert values of the employee_id, salary, and commission_pct columns
f. cannot update values of the employee_id, salary, and commission_pct columns
answer: bce
granting multiple object privileges on individual columns: example to grant to
user oe the references privilege on the employee_id column and the update
privilege on the employee_id, salary, and commission_pct columns of the
employees table in the schema hr, issue the following statement:
grant references (employee_id),
update (employee_id, salary, commission_pct)
on hr.employees
to oe;
oe can subsequently update values of the employee_id, salary, and
commission_pct columns. oe can also define referential integrity constraints that
refer to the employee_id column. however, because the grant statement lists
only these columns, oe cannot perform operations on any of the other columns of
the employees table.
for example, oe can create a table with a constraint:
create table dependent
(dependno number,
dependname varchar2(10),
employee number
constraint in_emp references hr.employees(employee_id) );
the constraint in_emp ensures that all dependents in the dependent table
correspond to an employee in the employees table in the schema hr.

[54]. a network error unexpectedly terminated a user's database session.


which two events occur in this scenario? (choose two.)
a. checkpoint occurs.
b. a fast commit occurs.
c. reco performs the session recovery.
d. pmon rolls back the user's current transaction.
e. smon rolls back the user's current transaction.
f. smon frees the system resources reserved for the user session.
g. pmon releases the table and row locks held by the user session.
answer: be (dg? - the explanation below says it is pmon that performs process recovery, rolling
back the transaction and releasing its locks)

smon-the system monitor performs crash recovery when a failed instance starts up
again. in a cluster database (oracle9i real application clusters), the smon
process of one instance can perform instance recovery for other instances that
have failed. smon also cleans up temporary segments that are no longer in use
and recovers dead transactions skipped during crash and instance recovery
because of file-read or offline errors. these transactions are eventually recovered
by smon when the tablespace or file is brought back online.

pmon-the process monitor performs process recovery when a user process fails.
pmon is responsible for cleaning up the cache and freeing resources that the
process was using. pmon also checks on the dispatcher processes (see below)
and server processes and restarts them if they have failed.

reco- the recoverer process is used to resolve distributed transactions that are
pending due to a network or system failure in a distributed database. at timed
intervals, the local reco attempts to connect to remote databases and
automatically complete the commit or rollback of the local portion of any
pending distributed transactions.

checkpoint (ckpt) at specific times, all modified database buffers in the sga are
written to the datafiles by dbwn. this event is called a checkpoint. the checkpoint
process is responsible for signaling dbwn at checkpoints and updating all the
datafiles and control files of the database to indicate the most recent checkpoint.

[55]. evaluate the sql statement:


create tablespace hr_tbs
datafile '/usr/oracle9i/orahomel/hr_data.dbf' size 2m autoextend on
minimum extent 4k
nologging
default storage (initial 5k next 5k pctincrease 50)
extent management dictionary
segment space management auto;
why does the statement return an error?
a. the value of pctincrease is too high.
b. the size of the data file is too small.
c. you cannot specify default storage for dictionary managed tablespaces.
d. segment storage management cannot be set to auto for a dictionary managed tablespace.
e. you cannot specify default storage for a tablespace that consists of an autoextensible data file.
f. the value specified for initial and next storage parameters should be a multiple of the value
specified for minimum extent.
answer: d

-pctincrease
specify the percent by which the third and subsequent extents grow over the
preceding extent. the default value is 50, meaning that each subsequent extent is
50% larger than the preceding extent. the minimum value is 0, meaning all extents
after the first are the same size. the maximum value depends on your operating
system.
- size: specify the size of the file in bytes. use k or m to specify the size in kilobytes or megabytes.
there is no fixed minimum or maximum size for a datafile; the limits are operating system
dependent. if you omit this clause when creating an oracle-managed file, then oracle creates a
100m file.
-default storage_clause
specify the default storage parameters for all objects created in the tablespace.
for a dictionary-managed temporary tablespace, oracle considers only the next
parameter of the storage_clause. restriction on default storage: you cannot specify this clause for
a locally managed
tablespace.
segment_management_clause
the segment_management_clause is relevant only for permanent, locally
managed tablespaces. it lets you specify whether oracle should track the used and
free space in the segments in the tablespace using free lists or bitmaps.

initial
specify in bytes the size of the object’s first extent. oracle allocates space for this
extent when you create the schema object. use k or m to specify this size in kilobytes
or megabytes.
the default value is the size of 5 data blocks. in tablespaces with manual segment space
management, the minimum value is the size of 2 data blocks plus one data block for each free list
group you specify. in tablespaces with automatic segment space management, the minimum
value is 5 data blocks. the maximum value depends on your operating system.
in dictionary-managed tablespaces, if minimum extent was specified for the
tablespace when it was created, then oracle rounds the value of initial up to the
specified minimum extent size if necessary. if minimum extent was not specified,
then oracle rounds the initial extent size for segments created in that tablespace
up to the minimum value (see preceding paragraph), or to multiples of 5 blocks if
the requested size is greater than 5 blocks.
in locally managed tablespaces, oracle uses the value of initial in conjunction
with the size of extents specified for the tablespace to determine the object’s first
extent. for example, in a uniform locally managed tablespace with 5m extents, if you
specify an initial value of 1m, then oracle creates five 1m extents.
restriction on initial: you cannot specify initial in an alter statement.
next
specify in bytes the size of the next extent to be allocated to the object. use k or m to
specify the size in kilobytes or megabytes. the default value is the size of 5 data blocks. the
minimum value is the size of 1 data block. the maximum value
depends on your operating system. oracle rounds values up to the next multiple of
the data block size for values less than 5 data blocks. for values greater than 5 data
blocks, oracle rounds up to a value that minimizes fragmentation, as described in
oracle9i database administrator’s guide.
if you change the value of the next parameter (that is, if you specify it in an alter
statement), then the next allocated extent will have the specified size, regardless of
the size of the most recently allocated extent and the value of the pctincrease
parameter.

temporary specify temporary if the tablespace will be used only to hold temporary objects, for
example, segments used by implicit sorts to handle order by clauses.
temporary tablespaces created with this clause are always dictionary managed, so
you cannot specify the extent management local clause. to create a locally
managed temporary tablespace, use the create temporary tablespace
statement.

[56]. sales_data is a nontemporary tablespace. you have set the sales_data tablespace offline
by issuing this command:
alter tablespace sales_data offline normal;
which three statements are true? (choose three.)
a. you cannot drop the sales_data tablespace.
b. the sales_data tablespace does not require recovery to come back online.
c. you can read the data from the sales_data tablespace, but you cannot perform any write
operation on the data.
d. when the tablespace sales_data goes offline and comes back online, the event will be
recorded in the data dictionary.

e. when the tablespace sales_data goes offline and comes back online, the event will be
recorded in the control file.
f. when you shut down the database the sales_data tablespace remains offline, and is checked
when the database is subsequently mounted and reopened.
answer: bdf

- you can drop a tablespace regardless of whether it is online or offline. oracle recommends that
you take the tablespace offline before dropping it to ensure that no sql statements in currently
running transactions access any of the objects in the tablespace.
restriction on the offline clause: you cannot take a temporary tablespace offline.
-the for recover setting for alter tablespace ...
offline has been deprecated. the syntax is supported for
backward compatibility. however, users are encouraged to use the
transportable tablespaces feature for tablespace recovery.

- normal specify normal to flush all blocks in all datafiles in the tablespace out of
the sga. you need not perform media recovery on this tablespace before bringing it back online.
this is the default.
temporary if you specify temporary, then oracle performs a checkpoint for all
online datafiles in the tablespace but does not ensure that all files can be written.
any offline files may require media recovery before you bring the tablespace back
online.

-specify offline to take the tablespace offline and prevent further access to its
segments. when you take a tablespace offline, all of its datafiles are also offline.
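
the offline/online transitions and the resulting status can be watched like this:
alter tablespace sales_data offline normal;
select tablespace_name, status
from dba_tablespaces
where tablespace_name = 'SALES_DATA';
alter tablespace sales_data online;
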
[57]. a table can be dropped if it is no longer needed, or if it will be reorganized.
which three statements are true about dropping a table? (choose three.)
a. all synonyms for a dropped table are deleted.
b. when a table is dropped, the extents used by the table are released.

c. dropping a table removes the table definition from the data dictionary.
d. indexes and triggers associated with the table are not dropped but marked invalid. (wrong!!!)
e. the cascade constraints option is necessary if the table being dropped is the parent table in a
foreign key relationship.
answer: bce

- in general, the extents of a segment do not return to the tablespace until you drop the schema
object whose data is stored in the segment (using a drop table or drop cluster statement).

- dropping a table removes the table definition from the data dictionary. all rows of the table are
no longer accessible.

- all indexes and triggers associated with a table are dropped.

- all synonyms for a dropped table remain, but return an error when used.

- if the table to be dropped contains any primary or unique keys referenced by foreign keys of
other tables and you intend to drop the foreign key constraints of the child tables, include the
cascade clause in the drop table statement

[58]. you query dba_constraints to obtain constraint information on the hr_employees table:
sql> select constraint_name, constraint_type, deferrable,
2> deferred, validated
3> from dba_constraints
4> where owner = 'hr' and table_name='employees';

which type of constraint is emp_job_nn?


a. check
b. unique
c. not null
d. primary key
e. foreign key
answer: a

i think this question has a picture which displays the query results; the answer is based on that
exhibit. check out oracle concepts pdf page 21-7 for the explanation of the integrity constraints.
the constraint name emp_job_nn suggests a not null constraint, but dba_constraints stores a not
null constraint as a check constraint (constraint_type = 'c'), which is why the given answer is a
(check).

[59]. tom was allocated 10 mb of quota in the users tablespace. he created database objects
in the users tablespace. the total space allocated for the objects owned by tom is 5 mb.
you need to revoke tom's quota from the users tablespace. you issue this command:
alter user tom quota 0 on users;
what is the result?
a. the statement raises the error: ora-00940: invalid alter command.
b. the statement raises the error: ora-00922: missing or invalid option.
c. the objects owned by tom are automatically deleted from the revoked users tablespace.
d. the objects owned by tom remain in the revoked tablespace, but these objects cannot be
allocated any new space from the users tablespace.
answer: d
use the alter user statement to change the authentication or database resource characteristics of
a database user.

tom's quota on the users tablespace is revoked with this statement. the objects are not deleted
from the tablespace.

after you have set the quota to zero, tom can still query, update and delete rows in the tables he
created in the users tablespace, and inserts keep working as long as the new rows fit into the
space already allocated to those objects. as soon as a new extent is needed, the statement fails
with a quota exceeded error; the objects do not start using the system tablespace.
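
the quotas can be checked in dba_ts_quotas; after the alter user statement the max_bytes value
for the users tablespace should show 0:
select username, tablespace_name, bytes, max_bytes
from dba_ts_quotas
where username = 'TOM';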

[60]. which background process performs a checkpoint in the database by writing modified
blocks from the database buffer cache in the sga to the data files?
a. lgwr
b. smon
c. dbwn
d. ckpt
e. pmon
answer: c

refer to question 54 for the explanation of some of the words used in the answer.

- database writer (dbwn): the database writer writes modified blocks from the database buffer
cache to the datafiles. although one database writer process (dbw0) is sufficient for most
systems, you can configure additional processes (dbw1 through dbw9 and dbwa through dbwj)
to improve write performance for a system that modifies data heavily. the initialization parameter
db_writer_processes specifies the number of dbwn processes.

- checkpoint (ckpt) at specific times, all modified database buffers in the sga are
written to the datafiles by dbwn. this event is called a checkpoint. the checkpoint
process is responsible for signaling dbwn at checkpoints and updating all the
datafiles and control files of the database to indicate the most recent checkpoint.

[61]. which command would revoke the role_emp role from all users?
a. revoke role_emp from all;
b. revoke role_emp from public;
c. revoke role_emp from default;
d. revoke role_emp from all_users;
answer: b

privileges and roles can also be granted to and revoked from the user group
public. because public is accessible to every database user, all privileges and
roles granted to public are accessible to every database user.

errors given by the answers:


answer a: ora-00987: missing or invalid username(s)
answer b: (it’s ok as long as the role was granted to public)
answer c: ora-00987: missing or invalid username(s)
answer d: ora-01917: user or role 'all_users' does not exist

[62]. you are experiencing intermittent hardware problems with the disk drive on which your
control file is located. you decide to multiplex your control file.
while your database is open, you perform these steps:
1. make a copy of your control file using an operating system command.
2. add the new file name to the list of files for the control_files parameter in your text initialization
parameter file using an editor.
3. shut down the instance.
4. issue the startup command to restart the instance, mount, and open the database.
the instance starts, but the database mount fails. why?
a. you copied the control file before shutting down the instance.
b. you used an operating system command to copy the control file.
c. the oracle server does not know the name of the new control file.
d. you added the new control file name to the control_files parameter before shutting down the
instance.
answer: a

to multiplex or move additional copies of the current control file


1. shutdown the database.
2. exit server manager.
3. copy an existing control file to a different location, using operating system commands.
4. edit the control_files parameter in the database's parameter file to add the new control file's
name, or to change the existing control filename.
5. restart server manager.
6. restart the database.

for more information refer to steps for creating new control files in administrator’s guide on page
6-7
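
a sketch of the correct order with a text parameter file (all file names are hypothetical):
shutdown immediate
-- copy the control file at the operating system level, for example:
--   cp /u01/oradata/db01/control01.ctl /u02/oradata/db01/control02.ctl
-- edit control_files in the text initialization parameter file so it lists both copies, then:
startup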

[63]. what determines the initial size of a tablespace?


a. the initial clause of the create tablespace statement
b. the minextents clause of the create tablespace statement
c. the minimum extent clause of the create tablespace statement
d. the sum of the initial and next clauses of the create tablespace statement
e. the sum of the sizes of all data files specified in the create tablespace statement
answer: e
the initial size of a tablespace is simply the total size of the data files named in the create
tablespace statement; the storage-related clauses only control how extents are sized inside it.

minimum extent clause
specify the minimum size of an extent in the tablespace. this clause lets you control
free space fragmentation in the tablespace by ensuring that every used or free extent
size in a tablespace is at least as large as, and is a multiple of, integer.

the storage_clause is interpreted differently for locally managed tablespaces. at creation, oracle
ignores maxextents and uses the remaining parameter values to calculate the initial size of the
segment.
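
a minimal sketch (tablespace name, file names and sizes are hypothetical):
create tablespace demo_tbs
  datafile '/u01/oradata/db01/demo01.dbf' size 100m,
           '/u02/oradata/db01/demo02.dbf' size 200m;
-- the tablespace starts out at 300m, the sum of the sizes of its data files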

[64]. the control file defines the current state of the physical database.
which three dynamic performance views obtain information from the control file? (choose three.)
a. v$log
b. v$sga
c. v$thread
d. v$version
e. v$datafile
f. v$parameter
answer: ace

v$log
this view contains log file information from the control files.

v$sga
this view contains summary information on the system global area (sga).
v$thread
this view contains thread information from the control file.

v$version
version numbers of core library components in the oracle server. there is one row for each
component.

v$datafile
this view contains datafile information from the control file.

v$parameter
displays information about the initialization parameters that are currently in effect for the session.
a new session inherits parameter values from the instance-wide values displayed by the
v$system_parameter view.

[65]. which option lists the correct hierarchy of storage structures, from largest to the smallest?
a. segment, extent, tablespace, data block
b. data block, extent, segment, tablespace
c. tablespace, extent, data block, segment
d. tablespace, segment, extent, data block
e. tablespace, data block, extent, segment
answer: d

logical database structures


the logical structures of an oracle database include schema objects, data blocks,
extents, segments, and tablespaces.

oracle data blocks at the finest level of granularity, oracle database data is stored in data blocks.
one data block corresponds to a specific number of bytes of physical database space on disk.

extents the next level of logical database space is an extent. an extent is a specific number of
contiguous data blocks, obtained in a single allocation, used to store a specific type of
information.

segments above extents, the level of logical database storage is a segment. a segment is a set of
extents allocated for a certain logical structure, such as a table, an index, a temporary sort area,
or undo data.

tablespaces
a database is divided into logical storage units called tablespaces, which group related logical
structures together.

[66]. evaluate the following sql:


create user sh identified by sh;
grant create any materialized view
, create any dimension
, drop any dimension
, query rewrite
, global query rewrite
to dw_manager
with admin option;
grant dw_manager to sh with admin option;
which three actions is the user sh able to perform? (choose three.)
a. select from a table
b. create and drop a materialized view
c. alter a materialized view that you created
d. grant and revoke the role to and from other users
e. enable the role and exercise any privileges in the role's privilege domain
answer: bde

create any materialized view create materialized views in any schema


create any dimension create dimensions in any schema
drop any dimension drop dimensions in any schema
query rewrite: enable rewrite using a materialized view, or create a function-based index, when
that materialized view or index references tables and views that are in the grantee's own schema

global query rewrite: enable rewrite using a materialized view, or create a function-based index,
when that materialized view or index references tables or views in any schema

with admin option


specify with admin option to enable the grantee to:
- grant the role to another user or role, unless the role is a global role
- revoke the role from another user or role
- alter the role to change the authorization needed to access it
- drop the role
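
a sketch of what sh can do once connected (the grantee hr is hypothetical and must already
exist):
set role dw_manager;        -- answer e: enable the role and exercise its privileges
grant dw_manager to hr;     -- answer d: pass the role on, allowed by with admin option
revoke dw_manager from hr;  -- ... and take it back again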

[67]. which constraint state prevents new data that violates the constraint from being entered,
but allows invalid data to exist in the table?
a. enable validate
b. disable validate
c. enable novalidate
d. disable novalidate
answer: c
enable validate specifies that all old and new data also complies with the
constraint. an enabled validated constraint guarantees that all data is and will
continue to be valid.

enable novalidate ensures that all new dml operations on the constrained
data comply with the constraint. this clause does not ensure that existing data
in the table complies with the constraint and therefore does not require a table
lock.

disable validate disables the constraint and drops the index on the
constraint, but keeps the constraint valid. this feature is most useful in data
warehousing situations, because it lets you load large amounts of data while
also saving space by not having an index. this setting lets you load data from a
nonpartitioned table into a partitioned table using the exchange_partition_
clause of the alter table statement or using sql*loader. all other
modifications to the table (inserts, updates, and deletes) by other sql
statements are disallowed.

disable novalidate signifies that oracle makes no effort to maintain the


constraint (because it is disabled) and cannot guarantee that the constraint is
true (because it is not being validated).

for more info look at page 7-20 of the oracle9i sql reference document.
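
a minimal sketch (table and constraint names are hypothetical):
alter table orders enable novalidate constraint orders_amount_ck;
-- rows already in orders that violate orders_amount_ck are left alone,
-- but every new insert or update must satisfy the constraint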

[68]. which storage structure provides a way to physically store rows from more than one table
in the same data block?
a. cluster table
b. partitioned table
c. unclustered table
d. index-organized table

answer: a

clusters:
- group of one or more tables physically stored together because they share common columns
and are often used together.
- since related rows are stored together, disk access time improves.
- clusters do not affect application design: data stored in a clustered table is accessed by sql in
the same way as data stored in a non-clustered table.

partitioning addresses key issues in supporting very large tables and indexes by
letting you decompose them into smaller and more manageable pieces called
partitions. sql queries and dml statements do not need to be modified in order to
access partitioned tables. however, after partitions are defined, ddl statements can
access and manipulate individual partitions rather than entire tables or indexes.
this is how partitioning can simplify the manageability of large database objects.
also, partitioning is entirely transparent to applications.

more info look at page 10-64 of oracle9i database concepts (nice diagram of cluster and non-
cluster)
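
a minimal sketch of a cluster shared by two tables (all names are hypothetical):
create cluster emp_dept_clu (deptno number(2));
create index emp_dept_clu_idx on cluster emp_dept_clu;
create table dept_c (deptno number(2), dname varchar2(14))
  cluster emp_dept_clu (deptno);
create table emp_c (empno number(4), ename varchar2(10), deptno number(2))
  cluster emp_dept_clu (deptno);
-- rows from emp_c and dept_c with the same deptno value are stored in the same data blocks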

[69]. which are considered types of segments?


a. only lobs
b. only nested tables
c. only index-organized tables
d. only lobs and index-organized tables
e. only nested tables and index-organized tables
f. only lobs, nested tables, and index-organized tables
g. nested tables, lobs, index-organized tables, and boot straps
answer: g

(oracle9i database concepts, page 2-12)


a single data segment in an oracle database holds all of the data for one of the following:
- a table that is not partitioned or clustered
- a partition of a partitioned table segments overview
- a cluster of tables
oracle databases use four types of segments,
- data segments (1 - a table that is not partitioned or clustered / 2 - a partition of a partitioned
table)
- index segments
every nonpartitioned index in an oracle database has a single index segment to
hold all of its data. for a partitioned index, every partition has a single index
segment to hold its data.

- temporary segments

when processing queries, oracle often requires temporary workspace for intermediate stages of
sql statement parsing and execution. oracle automatically
allocates this disk space called a temporary segment. typically, oracle requires a
temporary segment as a work area for sorting. oracle does not create a segment if
the sorting operation can be done in memory or if oracle finds some other way to
perform the operation using indexes.
operations that require temporary segments
- create index
- select ... order by
- select distinct ...
- select ... group by
- select ... union
- select ... intersect
- select ... minus

- rollback segments
every database contains one or more rollback segments, which record the old values of data that
was changed by each transaction.

(look at segment overview on page 2-12 in the oracle 9i concepts pdf file.)
by the way, each of these structures (lobs, nested tables, index-organized tables, and the
bootstrap segment) is stored in its own segment, which is why the broadest option is the one to
pick.
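
a quick way to see which segment types exist in your own database (a sketch; it needs select
access to dba_segments):
select distinct segment_type from dba_segments;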

[70]. select the memory structure(s) that would be used to store the parse information and
actual value of the bind variable id for the following set of commands:
variable id number;
begin
:id:=1;
end;
/
a. pga only
b. row cache and pga
c. pga and library cache
d. shared pool only
e. library cache and buffer cache
answer: b

the basic memory structures associated with oracle include:


* system global area (sga), which is shared by all server and background
processes and holds the following:

- database buffer cache


- redo log buffer
- shared pool
- large pool (if configured)

* program global areas (pga), which is private to each server and background
process; there is one pga for each process. the pga holds the following:
- stack areas
- data areas

as i understand it, the shared, parsed form of the pl/sql block lives in the library cache (part of the
shared pool in the sga), while session-specific data such as the actual bind value is kept in the
pga when a dedicated server is used.

because stored procedures take advantage of the shared memory capabilities of oracle, only a
single copy of the procedure needs to be loaded into memory for execution by multiple users.
sharing the same code among many users results in a substantial reduction in oracle memory
requirements for applications.

[71]. the new human resources application will be used to manage employee data in the
employees table. you are developing a strategy to manage user privileges. your strategy should
allow for privileges to be granted or revoked from individual users or groups of users with minimal
administrative effort.
the users of the human resources application have these requirements:
- a manager should be able to view the personal information of the employees in his/her group
and make changes to their title and salary.
what should you grant to the manager user?
a. grant select on the employees table
b. grant insert on the employees table
c. grant update on the employees table
d. grant select on the employees table and then grant update on the title and salary columns
e. grant select on the employees table and then grant insert on the title and salary columns
f. grant update on the employees table and then grant select on the title and salary columns
g. grant insert on the employees table and then grant select on the title, manager, and salary
columns
answer: d
i suppose this question is logical :-)
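
still, a sketch of what answer d looks like in sql, using the column names from the question (the
grantee mgr_user is hypothetical):
grant select on employees to mgr_user;
grant update (title, salary) on employees to mgr_user;
-- the update privilege is limited to the two columns the manager is allowed to change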

[72]. an insert statement failed and is rolled back. what does this demonstrate?
a. insert recovery
b. read consistency
c. transaction recovery
d. transaction rollback
answer: d

if at any time during execution a sql statement causes an error, all effects of the
statement are rolled back. the effect of the rollback is as if that statement had never
been run. this operation is a statement-level rollback.
errors discovered during sql statement execution cause statement-level rollbacks.
an example of such an error is attempting to insert a duplicate value in a primary
key.

[73]. the database currently has one control file. you decide that three control files will provide
better protection against a single point of failure. to accomplish this, you modify the spfile to point
to the locations of the three control files. the message "system altered" was received after
execution of the statement.
you shut down the database and copy the control file to the new names and locations. on startup
you receive the error ora-00205: error in identifying control file. you look in the alert log and
determine that you specified the incorrect path for the control file.
which steps are required to resolve the problem and start the database?
a. 1. connect as sysdba.
2. shut down the database.
3. start the database in nomount mode.
4. use the alter system set control_files command to correct the error.
5. shut down the database.
6. start the database.

b. 1. connect as sysdba.
2. shut down the database.
3. start the database in mount mode.
4. remove the spfile by using a unix command.
5. recreate the spfile from the pfile.
6. use the alter system set control_files command to correct the error.
7. start the database.

c. 1. connect as sysdba.
2. shut down the database.
3. remove the control files using the os command.
4. start the database in nomount mode.
5. remove the spfile by using an os command.
6. re-create the spfile from the pfile.
7. use the alter system set control_files command to define the control files.
8. shut down the database.
9. start the database.
answer: a
some parameters can be changed dynamically by using the alter session or alter system
statement while the instance is running. unless you are using a server parameter file, changes
made using the alter system statement are only in effect for the current instance. you must
manually update the text initialization parameter file for the changes to be known the next time
you start up an instance. when you use a server parameter file, you can update the parameters
on disk, so that changes persist across database shutdown and startup.

see question number 62.


you do not need to create the spfile again. use alter system to update the control_files parameter
value.
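
a sketch of step 4 of answer a (file names are hypothetical):
startup nomount
alter system set control_files =
  '/u01/oradata/db01/control01.ctl',
  '/u02/oradata/db01/control02.ctl',
  '/u03/oradata/db01/control03.ctl'
  scope = spfile;
shutdown immediate
startup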

[74]. which process is started when a user connects to the oracle server in a dedicated server
mode?
a. dbwn
b. pmon
c. smon
d. server
answer: d
smon-the system monitor performs crash recovery when a failed instance starts up
again. in a cluster database (oracle9i real application clusters), the smon
process of one instance can perform instance recovery for other instances that
have failed. smon also cleans up temporary segments that are no longer in use
and recovers dead transactions skipped during crash and instance recovery
because of file-read or offline errors. these transactions are eventually recovered
by smon when the tablespace or file is brought back online.

pmon-the process monitor performs process recovery when a user process fails.
pmon is responsible for cleaning up the cache and freeing resources that the
process was using. pmon also checks on the dispatcher processes (see below)
and server processes and restarts them if they have failed.

checkpoint (ckpt) at specific times, all modified database buffers in the sga are
written to the datafiles by dbwn. this event is called a checkpoint. the checkpoint
process is responsible for signaling dbwn at checkpoints and updating all the
datafiles and control files of the database to indicate the most recent checkpoint.

a dedicated server process, by contrast, is created when the user connects and handles that
session's requests on behalf of the user process. therefore, this leaves us with only one answer :-)

[75]. you are creating a new database. you do not want users to use the system tablespace for
sorting operations.
what should you do when you issue the create database statement to prevent this?
a. create an undo tablespace.
b. create a default temporary tablespace.
c. create a tablespace with the undo keyword.
d. create a tablespace with the temporary keyword.
answer: b
you can manage space for sort operations more efficiently by designating
temporary tablespaces exclusively for sorts. doing so effectively eliminates
serialization of space management operations involved in the allocation and
deallocation of sort space.
all operations that use sorts, including joins, index builds, ordering, computing
aggregates (group by), and collecting optimizer statistics, benefit from temporary
tablespaces. the performance gains are significant with real application clusters.
specify a default temporary tablespace when you create a database, using the
default temporary tablespace extension to the create database statement.

when a transaction begins, oracle assigns the transaction to an available undo tablespace or
rollback segment to record the rollback entries for the new transaction.
a database administrator creates undo tablespaces individually, using the create
undo tablespace statement. it can also be created when the database is created,
using the create database statement.

to improve the concurrence of multiple sort operations, reduce their overhead, or avoid oracle
space management operations altogether, create temporary
tablespaces. a temporary tablespace can be shared by multiple users and can be
assigned to users with the create user statement when you create users in the
database.
within a temporary tablespace, all sort operations for a given instance and
tablespace share a single sort segment. sort segments exist for every instance that
performs sort operations within a given tablespace. the sort segment is created by
the first statement that uses a temporary tablespace for sorting, after startup, and is
released only at shutdown. an extent cannot be shared by multiple transactions.

[76]. which four statements are true about profiles? (choose four.)
a. profiles can control the use of passwords.
b. profile assignments do not affect current sessions.
c. all limits of the default profile are initially unlimited.
d. profiles can be assigned to users and roles, but not other profiles.

e. profiles can ensure that users log off the database when they have left their session idle for a
period of time.
answer: bcde
my answer: acde

(introduction to the oracle server, page 1-47)

each user is assigned a profile that specifies limitations on several system resources available to
the user, including the following:
** number of concurrent sessions the user can establish
** cpu processing time available for:
- the user's session
- a single call to oracle made by a sql statement
** amount of logical i/o available for:
- the user's session
- a single call to oracle made by a sql statement
** amount of idle time available for the user's session
** amount of connect time available for the user's session
** password restrictions:
- account locking after multiple unsuccessful login attempts
- password expiration and grace period
- password reuse and complexity restrictions

different profiles can be created and assigned individually to each user of the
database. a default profile is present for all users not explicitly assigned a profile.
the resource limit feature prevents excessive consumption of global database
system resources.

to allow for greater control over database security, oracle's password management policy is
controlled by dbas and security officers through user profiles.

to alter the enforcement of resource limitation while the database remains open,
you must have the alter system system privilege.

all unspecified resource limits for a new profile take the limit set by a default profile. initially, all
limits of the default profile are set to unlimited.

after a profile has been created, you can assign it to database users. each user can
be assigned only one profile at any given time. if a profile is assigned to a user who
already has a profile, the new profile assignment overrides the previously assigned
profile. profile assignments do not affect current sessions. profiles can be assigned
only to users and not to roles or other profiles.
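
a minimal sketch of a profile and its assignment (all names and limit values are hypothetical):
create profile clerk_prof limit
  failed_login_attempts 3     -- a password limit, always enforced
  password_life_time    60
  idle_time             30    -- minutes; resource limits also need resource_limit = true
  sessions_per_user     2;
alter user scott profile clerk_prof;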

[77]. the database writer (dbwn) background process writes the dirty buffers from the database
buffer cache into the _______.
a. data files only
b. data files and control files only
c. data files and redo log files only
d. data files, redo log files, and control files
answer: a

database writer (dbwn)


the database writer writes modified blocks from the database buffer cache to the datafiles. oracle
allows a maximum of 20 database writer processes (dbw0-dbw9 and dbwa-dbwj). the initialization
parameter db_writer_processes specifies the number of dbwn processes. oracle selects an
appropriate default setting for this initialization parameter (or might adjust a user specified setting)
based upon the number of cpus and the number of processor groups.

[78]. you used the password file utility to create a password file as follows:
$orapwd file=$oracle_home/dbs/orapwdb01
password=orapass entries=5
you created a user and granted only the sysdba privilege to that user as follows:
create user dba_user
identified by dba_pass;
grant sysdba to dba_user;
the user attempts to connect to the database as follows:
connect dba_user/orapass as sysdba;
why does the connection fail?
a. the dba privilege had not been granted to dba_user.
b. the sysoper privilege had not been granted to dba_user.
c. the user did not provide the password dba_pass to connect as sysdba.
d. the information about dba_user has not been stored in the password file.
answer: c

the password given to orapwd (orapass) is the password for sys, not for dba_user. when
dba_user connects as sysdba, he must supply his own password, which was recorded in the
password file when sysdba was granted:
connect dba_user/dba_pass as sysdba
using orapass therefore fails, which is why answer c is correct.

[79]. which two methods enforce resource limits? (choose two.)


a. alter system set resource_limit= true
b. set the resource_limit parameter to true
c. create profile sessions limit
sessions_per_user 2
cpu_per_session 10000
idle_time 60
connect_time 480;
d. alter profile sessions limit
sessions_per_user 2
cpu_per_session 10000
idle_time 60
connect_time 480;
answer: ab

resource limitation can be enabled or disabled by the resource_limit initialization parameter in the
database’s
initialization parameter file. valid values for the parameter are true (enables
enforcement) and false. by default, this parameter’s value is set to false. once
the initialization parameter file has been edited, the database instance must be
restarted to take effect. every time an instance is started, the new parameter value
enables or disables the enforcement of resource limitation.

if the resource limitation feature must be altered temporarily, you can enable or disable the
enforcement of resource limitation using the sql statement alter system. after an instance is
started, an
alter system statement overrides the value set by the resource_limit
initialization parameter.
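
a sketch of the two methods (the profile itself only defines the limits, it does not enforce them):
alter system set resource_limit = true;   -- answer a: enforce immediately for the running instance
-- answer b: put resource_limit = true in the initialization parameter file so it persists across
-- restarts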

[80]. for which two constraints are indexes created when the constraint is added? (choose
two.)
a. check
b. unique
c. not null
d. primary key
e. foreign key
answer: bd

oracle enforces all primary key constraints using indexes.


oracle enforces unique integrity constraints with indexes.
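
a minimal sketch (table, column and constraint names are hypothetical):
create table demo_t (
  id   number       constraint demo_t_pk primary key,  -- index created automatically
  code varchar2(10) constraint demo_t_uk unique,        -- index created automatically
  name varchar2(30) not null,                           -- no index
  qty  number       check (qty > 0)                     -- no index
);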

[81]. you check the alert log for your database and discover that there are many lines that say
"checkpoint not complete". what are two ways to solve this problem? (choose two.)
a. delete archived log files
b. add more online redo log groups
c. increase the size of archived log files
d. increase the size of online redo log files
answer: bd
"checkpoint not complete" means that a checkpoint started, but before it could finish, another
higher priority checkpoint was issued (usually from a log switch), so the first checkpoint was
essentially rolled back.

i found these answers from newsgroups and they sound quite good to me:
- increasing the number of redo logs seems to be most effective
normally, checkpoints occur for 1 of 3 reasons:

1) the log_checkpoint_interval was reached.


2) a log switch occurred.
3) the log_checkpoint_timeout was reached.
the archiver copies the online redo log files to archival storage after a log switch has occurred.
although a single arcn process (arc0) is sufficient for most systems, you can specify up to 10 arcn
processes by using the dynamic initialization parameter log_archive_max_processes. if the
workload becomes too great for the current number of arcn processes, then lgwr automatically
starts another arcn process up to the maximum of 10 processes. arcn is active only when a
database is in archivelog mode and automatic archiving is enabled.

as for answer c: you cannot set the size of archived log files directly; they are copies of the online
redo log files, so their size follows the online redo log size.
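
a sketch of answer b, adding another online redo log group (group number, file names and size
are hypothetical):
alter database add logfile group 4
  ('/u01/oradata/db01/redo04a.log',
   '/u02/oradata/db01/redo04b.log') size 100m;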

[82]. which type of index does this syntax create?


create index hr.employees_last_name_idx
on hr.employees(last_name)
pctfree 30
storage(initial 200k next 200k
pctincrease 0 maxextents 50)
tablespace indx;
a. bitmap
b. b-tree
c. partitioned
d. reverse key
answer: b
oracle provides several indexing schemes that provide complementary
performance functionality. these are:
1 b-tree indexes-the default and the most common
2 b-tree cluster indexes-defined specifically for cluster
3 hash cluster indexes-defined specifically for a hash cluster
4 global and local indexes-relate to partitioned tables and indexes
5 reverse key indexes-most useful for oracle real application cluster applications
6 bitmap indexes-compact; work best for columns with a small set of values
7 function-based indexes-contain the precomputed value of a function/expression
8 domain indexes-specific to an application or cartridge.

[83]. which data dictionary view shows the available free space in a certain tablespace?
a. dba_extents
b. v$freespace
c. dba_free_space
d. dba_tablespaces
e. dba_free_extents
answer: c
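
a sketch of how the view is typically used (the tablespace name is just an example; the data
dictionary stores tablespace names in uppercase):
select tablespace_name, sum(bytes)/1024/1024 as free_mb
from dba_free_space
where tablespace_name = 'USERS'
group by tablespace_name;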

[84]. you decide to use oracle managed files in your database.


which two are requirements with respect to the directories you specify in the db_create_file_dest
and db_create_online_log_dest_n initialization parameters? (choose two).
a. the directory must already exist.
b. the directory must not contain any other files.
c. the directory must be created in the $oracle_home directory.
d. the directory must have appropriate permissions that allow oracle to create files in it.
answer: ad
setting the db_create_online_log_dest_n initialization parameter
you specify the name of a file system directory that becomes the default location for
the creation of the operating system files for these entities. you can specify up to
five multiplexed locations.

setting the db_create_file_dest initialization parameter


you specify the name of a file system directory that becomes the default location for
the creation of the operating system files for these entities

in conclusion, the directories must already exist and oracle must have permission to create files in
them; they do not need to be empty, and they can live anywhere on the file system (not just under
$oracle_home), since you specify the location explicitly.
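
a minimal sketch (the directories are hypothetical and must already exist with write permission for
oracle):
alter system set db_create_file_dest = '/u01/oradata/db01';
alter system set db_create_online_log_dest_1 = '/u02/oradata/db01';
create tablespace omf_demo;   -- the data file is created, named and sized automatically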

[85]. in which two situations does the log writer (lgwr) process write the redo entries from the
redo log buffer to the current online redo log group? (choose two.)
a. when a transaction commits
b. when a rollback is executed
c. when the redo log buffer is about to become completely full (90%)
d. before the dbwn writes modified blocks in the database buffer cache to the data files
e. when there is more than a third of a megabyte of changed records in the redo log buffer
answer: ad
my answer: ab
the most crucial structure for recovery operations is the online redo log, which
consists of two or more preallocated files that store all changes made to the database
as they occur. every instance of an oracle database has an associated online redo
log to protect the database in case of an instance failure.

online redo log files are filled with redo records. a redo record, also called a redo

entry, is made up of a group of change vectors, each of which is a description of a


change made to a single block in the database. for example, if you change a salary
value in an employee table, you generate a redo record containing change vectors
that describe changes to the data segment block for the table, the rollback segment
data block, and the transaction table of the rollback segments.
redo entries record data that you can use to reconstruct all changes made to the
database, including the rollback segments. therefore, the online redo log also
protects rollback data. when you recover the database using redo data, oracle reads
the change vectors in the redo records and applies the changes to the relevant
blocks.

a filled online redo log file is available once the changes recorded in it have been written to the
datafiles.

[86]. examine the syntax below, which creates a departments table:


create table hr.departments(
department_id number(4),
department_name varchar2(30),
manager_id number(6),
location_id number(4))
storage(initial 200k next 200k
pctincrease 50 minextents 1 maxextents 5)
tablespace data;
what is the size defined for the fifth extent?
a. 200 k
b. 300 k
c. 450 k
d. 675 k
e. not defined
answer: d
extent 1 = initial = 200k and extent 2 = next = 200k. from the third extent onwards each extent is
50% larger than the previous one (pctincrease 50): extent 3 = 300k, extent 4 = 450k, extent 5 =
675k.

[87]. after running the analyze index orders_cust_idx validate structure command, you query
the index_stats view and discover that there is a high ratio of del_lf_rows to lf_rows values for this
index.
you decide to reorganize the index to free up the extra space, but the space should remain
allocated to the orders_cust_idx index so that it can be reused by new entries inserted into the
index.
which command(s) allows you to perform this task with the minimum impact to any users who run
queries that need to access this index while the index is reorganized?
a. alter index rebuild
b. alter index coalesce
c. alter index deallocate unused
d. drop index followed by create index
answer: b

when you rebuild an index, you use an existing index as the data source. creating
an index in this manner enables you to change storage characteristics or move to a
new tablespace. rebuilding an index based on an existing data source removes
intra-block fragmentation. compared to dropping the index and using the create
index statement, re-creating an existing index offers better performance.

improper sizing or increased growth can produce index fragmentation. to


eliminate or reduce fragmentation, you can rebuild or coalesce the index.

coalescing an index online vs. rebuilding an index online. online index coalesce is an in-place
data reorganization operation, hence does not require additional disk space like index rebuild
does. index rebuild requires temporary disk space equal to the size of the index plus sort space
during the operation. index coalesce does not reduce the height of the b-tree. it only tries to
reduce the number of leaf blocks. the coalesce operation does not free up space for users but
does improve index scan performance.

if a user needs to move an index to a new tablespace, online index rebuild is recommended.
index rebuild also improves space utilization, but the index rebuild operation has higher overhead
than the index coalesce operation
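
a sketch of the two commands being compared (the index name is taken from the question):
alter index orders_cust_idx coalesce;         -- answer b: in place, freed space stays with the index
alter index orders_cust_idx rebuild online;   -- needs extra space for the new copy while it is built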

[88]. you started your database with this command:


startup pfile=initsampledb.ora
one of the values in the initsampledb.ora parameter file is:
log_archive_start=false
while your database is open, you issue this command to start the archiver process:
alter system archive log start;
you shut down your database to take a back up and restart it using the initsampledb.ora
parameter file again. when you check the status of the archiver, you find that it is disabled.
why is the archiver disabled?
a. when you take a backup the archiver process is disabled.
b. the archiver can only be started by issuing the alter database archivelog command.
c. log_archive_start is still set to false because the pfile is not updated when you issue the alter
system command.
d. the archiver can only be started by issuing the alter system archive log start command each
time you open the database.
answer: c

if an instance is shut down and restarted after automatic archiving is enabled using the alter
system statement, the instance is reinitialized using the settings of the initialization parameter file.
those settings may or may not enable automatic archiving. if your intent is to always archive redo
log files automatically, then you should include log_archive_start = true in your initialization
parameters.

answer d is somewhat correct, because with this pfile you would indeed have to issue the
command after every startup. but c is the better answer: the archiver is disabled because
log_archive_start is still false in the pfile, and an alter system command does not update a text
pfile. the only other way to keep the archiver running is to edit the pfile and restart the database.
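
a sketch of the two ways to get automatic archiving after this restart (9i-style parameters):
alter system archive log start;   -- must be repeated after every startup while the pfile says false
-- or edit initsampledb.ora once so that it contains:  log_archive_start = true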

[89]. the credit controller for your organization has complained that the report she runs to show
customers with bad credit ratings takes too long to run. you look at the query that the report runs
and determine that the report would run faster if there were an index on the credit_rating column
of the customers table.
the customers table has about 5 million rows and around 100 new rows are added every month.
old records are not deleted from the table.
the credit_rating column is defined as a varchar2(5) field. there are only 10 possible credit ratings
and a customer's credit rating changes infrequently. customers with bad credit ratings have a
value in the credit_ratings column of 'bad' or 'f'.
which type of index would be best for this column?
a. b-tree
b. bitmap
c. reverse key
d. function-based
answer: d
these are my opinions: i could not find an exact answer:
why b-tree is not good for this problem:
- b-trees provide excellent retrieval performance for a wide range of queries,
including exact match and range searches.
- inserts, updates, and deletes are efficient, maintaining key order for fast
retrieval.
since, we will not update this column, and no records are deleted, also we don’t have a wide
range of queries for this column so b-tree is not a good solution.
*****************

bitmap
bitmap indexes are available only if you have purchased the oracle9i enterprise edition. so this is
definitely not the correct answer ;-)
in a bitmap index, a bitmap for each key value is used instead of a list of rowids. each bit in the
bitmap corresponds to a possible rowid. if the bit is set, then it means that the row with the
corresponding rowid contains the key value. a mapping function converts the bit position to an
actual rowid, so the bitmap index provides the same functionality as a regular index even though
it uses a different representation internally. if the number of different key values is small, then
bitmap indexes are very space efficient. bitmap indexing efficiently merges indexes that
correspond to several conditions in a where clause. rows that satisfy some, but not all, conditions
are filtered out before the table itself is accessed. this improves response time, often dramatically.
this, in my opinion, is why bitmap is not the intended answer here: we do not have many different
key values (only 10 possible ratings) and our where clause is neither complicated nor large.
*****************

creating a reverse key index, compared to a standard index, reverses the bytes of
each column indexed (except the rowid) while keeping the column order. such an
arrangement can help avoid performance degradation with oracle9i real
application clusters where modifications to the index are concentrated on a small
set of leaf blocks. by reversing the keys of the index, the insertions become
distributed across all leaf keys in the index.
using the reverse key arrangement eliminates the ability to run an index range
scanning query on the index. because lexically adjacent keys are not stored next to
each other in a reverse-key index, only fetch-by-key or full-index (table) scans can
be performed. a full index (table) scan for 5 million records???? no way!!!!
*********

so this leaves us with only one answer: d


you can create indexes on functions and expressions that involve one or more
columns in the table being indexed. a function-based index computes the value of
the function or expression and stores it in the index. you can create a function-based
index as either a b-tree or a bitmap index. the function used for building the index can be an
arithmetic expression or an expression that contains a pl/sql function. function-based indexes
provide an efficient mechanism for evaluating statements that contain functions in their where
clauses. the value of the expression is computed and stored in the index. when it processes
insert and update statements, however, oracle must still evaluate the function to process the
statement.

that said, since a function-based index is itself built as either a b-tree or a bitmap index, i would
personally lean towards answer a.
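
either way, a sketch of what a function-based index for this report could look like; the exact
expression is my assumption, the question does not dictate one, and the literal values must
match the case of the stored data:
create index customers_bad_credit_idx on customers
  (case when credit_rating in ('bad', 'f') then credit_rating end);
-- rows with other ratings evaluate to null and are not stored,
-- so the index stays small and the bad-credit report can use it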

[90]. there are three ways to specify national language support parameters:
1. initialization parameters
2. environment variables
3. alter session parameters
match each of these with their appropriate definitions.
a. 1) parameters on the client side to specify locale-dependent behavior overriding the defaults
set for the server
2) parameters on the server side to specify the default server environment
3) parameters override the default set for the session or the server

b. 1) parameters on the server side to specify the default server environment
2) parameters on the client side to specify locale-dependent behavior overriding the defaults set
for the server
3) parameters override the default set for the session or the server

c. 1) parameters on the server side to specify the default server environment
2) parameters override the default set for the session or the server
3) parameters on the client side to specify locale-dependent behavior overriding the defaults set
for the server

d. 1) parameters on the client side to specify locale-dependent behavior overriding the defaults
set for the server
2) parameters override the default set for the session or the server
3) parameters on the server side to specify the default server environment
answer: b
initialization parameters set the default nls environment on the server, environment variables
(such as nls_lang) set locale-dependent behavior on the client and override the server defaults,
and alter session overrides both for the current session, which is exactly the ordering in answer b.

oracle has attempted to provide appropriate values in the starter initialization parameter file
provided with your database software, or as created for you by the database configuration
assistant. you can edit these oracle-supplied initialization parameters and add others, depending
upon your configuration and options and how you plan to tune the database.
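
a sketch of the three levels working together (all values are hypothetical):
-- 1) server default, in the initialization parameter file:  nls_date_format = 'dd-mon-rr'
-- 2) client override, set as an environment variable:       nls_lang = american_america.we8iso8859p1
-- 3) session override, issued at run time:
alter session set nls_date_format = 'yyyy-mm-dd';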

[91]. which graphical dba administration tool would you use to tune an oracle database?
a. sql*plus
b. oracle enterprise manager
c. oracle universal installer
d. oracle database configuration assistant
answer: b

hummm… if you think sql*plus is a graphical tool, then i call microsoft windows an artistic tool ;-)
you can more easily administer the database resource manager through the oracle enterprise
manager (oem). it provides an easy to use graphical interface for administering the database
resource manager. you can choose to use the oracle enterprise manager for administering your
database, including starting it up and shutting it down. the oracle enterprise manager is a
separate oracle product, that combines a graphical console, agents, common services, and tools
to provide an integrated and comprehensive systems management platform for managing oracle
products. it enables you to perform the functions discussed in this book using a gui interface,
rather than command lines.

the database configuration assistant (dbca) an oracle supplied tool that enables
you to create an oracle database, configure database options for an existing oracle
database, delete an oracle database, or manage database templates. dbca is
launched automatically by the oracle universal installer, but it can be invoked
standalone from the windows operating system start menu (under configuration
assistants)

[92]. which method is correct for starting an instance to create a database?


a. startup
b. startup open
c. startup mount
d. startup nomount
answer: d

start an instance without mounting a database. typically, you do this only during
database creation or while performing maintenance on the database. use the
startup command with the nomount option.

[93]. you just created five roles using the statements shown:
create role payclerk;
create role oeclerk identified by salary;
create role hr_manager identified externally;
create role genuser identified globally;
create role dev identified using dev_test;
which statement indicates that a user must be authorized to use the role by the enterprise
directory service before the role is enabled?
a. create role payclerk;
b. create role genuser identified globally;
c. create role oeclerk identified by salary;
d. create role dev identified using dev_test;
e. create role hr_manager identified externally;
answer: b

a role created with identified globally is a global role: it can be authorized to a user only through
the enterprise directory service. the documentation example below shows the same idea for a
global user.

creating a global user: the following statement illustrates the creation of a global
user, who is authenticated by ssl and authorized by the enterprise directory
service:
create user scott
identified globally as 'cn=scott,ou=division1,o=oracle,c=us';
the string provided in the as clause provides an identifier (distinguished name, or
dn) meaningful to the enterprise directory.
in this case, scott is truly a global user. but, the disadvantage here is that user
scott must then be created in every database that he must access, plus the
directory.

[94]. examine the list of steps to rename the data file of a non-system tablespace hr_tbs. the
steps are arranged in random order.
1. shut down the database.
2. bring the hr_tbs tablespace online.
3. execute the alter database rename datafile command
4. use the operating system command to move or copy the file
5. bring the tablespace offline.
6. open the database.

what is the correct order for the steps?


a. 1, 3, 4, 6; steps 2 and 5 are not required
b. 1, 4, 3, 6; steps 2 and 5 are not required
c. 2, 3, 4, 5; steps 1 and 6 are not required
d. 5, 4, 3, 2; steps 1 and 6 are not required
e. 5, 3, 4, 1, 6, 2
f. 5, 4, 3, 1, 6, 2
answer: d
renaming datafiles in a single tablespace
to rename datafiles from a single tablespace, complete the following steps:
- take the non-system tablespace that contains the datafiles offline.
for example: alter tablespace users offline normal;
- rename the datafiles using the operating system.
- use the alter tablespace statement with the rename datafile clause to change the filenames
within the database.
the new files must already exist; this statement does not create the files. also,
always provide complete filenames (including their paths) to properly identify
the old and new datafiles. in particular, specify the old datafile name exactly as
it appears in the dba_data_files view of the data dictionary.
- back up the database. after making any structural changes to a database,
always perform an immediate and complete backup.

- bring the tablespace back online (this step was added by me; i could not find it spelled out in the
documentation)
to use this clause for datafiles and tempfiles, the database must be mounted. the database can
also be open, but the datafile or tempfile being renamed must be offline.
so first take the tablespace offline (step 5) => answers a and b are out.
the alter statement renames the file only inside oracle (in the control file); it does not change the
name of the file on disk. you must perform that part through your operating system => step 4
moves or copies the file to its new location, and only then do you execute the alter ... rename
datafile statement (step 3) and bring the tablespace back online (step 2).
you do not need to shut down and restart the database.
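
a sketch of answer d for the tablespace in the question (the file paths are hypothetical):
alter tablespace hr_tbs offline normal;
-- step 4: move the file with an operating system command, for example
--   mv /u01/oradata/db01/hr_tbs01.dbf /u02/oradata/db01/hr_tbs01.dbf
alter tablespace hr_tbs rename datafile
  '/u01/oradata/db01/hr_tbs01.dbf' to '/u02/oradata/db01/hr_tbs01.dbf';
alter tablespace hr_tbs online;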

[95]. for a tablespace created with automatic segment-space management, where is free
space managed?
a. in the extent
b. in the control file
c. in the data dictionary
d. in the undo tablespace
answer: d

when you create a table in a locally managed tablespace for which automatic segment-space
management is enabled, the need to specify the pctfree (or freelists) parameter is eliminated.
automatic segment-space management is specified at the tablespace level. the oracle database
server automatically and efficiently manages free and used space within objects created in such
tablespaces.

in my opinion the free space is managed inside the segment itself: with automatic segment-space
management, oracle tracks free and used space with bitmaps stored in the segment's blocks, that
is, inside its extents. so i recommend answer a. the undo tablespace in answer d is used for undo
data only, not for segment space management.
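
a sketch of creating such a tablespace (name, file and size are hypothetical):
create tablespace assm_demo
  datafile '/u01/oradata/db01/assm_demo01.dbf' size 100m
  extent management local
  segment space management auto;
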
[96]. which two environment variables should be set before creating a database? (choose
two.)
a. db_name
b. oracle_sid
c. oracle_home
d. service_name
e. instance_name
answer: ab
my answer : ad
instance_name
represents the name of the instance and is used to uniquely identify a specific instance when
clusters share common services names. the instance name is identified by the instance_name
parameter in the instance initialization file, initsid.ora. the instance name is the same as the
oracle system identifier (sid).

oracle system identifier (sid)


a name that identifies a specific instance of a running pre-release 8.1 oracle database. for an
oracle9i real application clusters database, each node within the cluster has an instance
referencing the database. the database name, specified by the db_name parameter in the
initdb_name.ora file, and unique thread number make up each node's sid. the thread id starts at 1
for the first instance in the cluster, and is incremented by 1 for the next instance, and so on.

oracle_home
corresponds to the environment in which oracle products run. this environment includes location
of installed product files, path variable pointing to products' binary files, registry entries, net
service name, and program groups.
if you install an ofa-compliant database, using oracle universal installer defaults, oracle home
(known as \oracle_home in this guide) is located beneath x:\oracle_base. it contains
subdirectories for oracle software executables and network files.
oracle corporation recommends that you never set the oracle_home environment variable,
because it is not required for oracle products to function properly. if you set the oracle_home
environment variable, then oracle universal installer will unset it for you.

service_name
a logical representation of a database. this is the way a database is presented to clients. a
database can be presented as multiple services and a service can be implemented as multiple
database instances. the service name is a string that includes the global database name, which is
a name comprised of the database name (db_name) and the domain name (db_domain). the
service name is entered during installation or database creation.

if you are not sure what the global database name is, you can obtain it from the combined values
of the service_names parameter in the common database initialization file, initdbname.ora.

parameters necessary for initial database creation


the initialization parameter file is read whenever an oracle instance is started, including the very
first start before the database is created. there are very few parameters that cannot be modified
at a later time. the most important parameters to set correctly at database creation time are the
following:

db_block_size
sets the size of the oracle database blocks stored in the database files and cached in the sga. the
range of values depends on the operating system, but it is typically powers of two in the range
2048 to 16384. common values are 4096 or 8192 for transaction processing systems and higher
values for database warehouse systems.

db_name and db_domain


set the name of the database and the domain name of the database, respectively. although they
can be changed at a later time, it is highly advisable to set these correctly before the initial
creation. the names chosen must be reflected in the sql*net configuration as well.

compatible
specifies the release with which the oracle server must maintain compatibility. it lets you take
advantage of the maintenance improvements of a new release immediately in
your production systems without testing the new functionality in your environment. if your
application was designed for a specific release of oracle, and you are actually installing a later
release, then you might want to set this parameter to the version of the previous release.
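
for reference, a sketch of the environment variables most commonly set on unix before creating
a database (values are hypothetical; the variable names are case-sensitive on unix):
export ORACLE_SID=db01
export ORACLE_HOME=/u01/app/oracle/product/9.2.0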

[97]. more stringent user access requirements have been issued. you need to do these tasks
for the user pward:
1. change user authentication to external authentication.
2. revoke the user's ability to create objects in the test_ts tablespace.
3. add a new default and temporary tablespace and set a quota of unlimited.
4. assign the user to the clerk profile.
which statement meets the requirements?
a. alter user pward
identified externally
default tablespace data_ts
temporary tablespace temp_ts
quota unlimited on data_ts
quota 0 on test_ts
grant clerk to pward;

b. alter user pward


identified by pward
default tablespace dsta_ts
temporary tablespace temp_ts
quota unlimited on data_ts
quota 0 on test_ts
profile clerk;

c. alter user pward


identified externally
default tablespace data_ts
temporary tablespace temp_ts
quota unlimited on data_ts
quota 0 on test_ts
profile clerk;

d. alter user pward


identified externally
default tablespace data_ts
temporary tablespace temp_ts
quota unlimited on data_ts
quota 0 on test ts;

grant clerk to pward;


answer: c
creating a user who is authenticated externally:
create user scott identified externally; or use alter instead of create. the important keyword is the
identified externally

the default tablespace, temporary tablespace and quota clauses of alter user cover requirements
2 and 3 directly, and since alter user also accepts a profile clause, the clerk profile can be
assigned in the same statement. therefore answer c is correct.

[98]. extents are a logical collection of contiguous _________________.


a. segments
b. database blocks
c. tablespaces
d. operating system blocks
answer: b
an extent is a specific number of contiguous data blocks, obtained in a single allocation, and used
to store a specific type of information.

[99]. you should back up the control file when which two commands are executed? (choose
two.)
a. create user
b. create table
c. create index
d. create tablespace
e. alter tablespace <tablespace name> add datafile
answer: de

back up control files


it is very important that you back up your control files. this is true initially, and at
any time after you change the physical structure of your database. such structural
changes include:
- adding, dropping, or renaming datafiles
- adding or dropping a tablespace, or altering the read-write state of the tablespace
- adding or dropping redo log files or groups
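
a sketch of answer e followed by a control file backup (file names are hypothetical):
alter tablespace users add datafile '/u01/oradata/db01/users02.dbf' size 50m;
alter database backup controlfile to '/u01/backup/control.bak';
alter database backup controlfile to trace;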

[100]. you have two undo tablespaces defined for your database. the instance is currently using
the undo tablespace named undotbs_1. you issue this command to switch to undotbs_2 while
there are still transactions using undotbs_1:
alter system set undo_tablespace = undotbs_2
which two results occur? (choose two.)
a. new transactions are assigned to undotbs_2.
b. current transactions are switched to the undotbs_2 tablespace.
c. the switch to undotbs_2 fails and an error message is returned.
d. the undotbs_1 undo tablespace enters into a pending offline mode (status).
e. the switch to undotbs_2 does not take place until all transactions in undotbs_1 are completed.
answer: ad

switching undo tablespaces

you can switch from using one undo tablespace to another. because the undo_tablespace
initialization parameter is a dynamic parameter, the alter system set statement can be used to
assign a new undo tablespace.

the database is online while the switch operation is performed, and user
transactions can be executed while this command is being executed. when the
switch operation completes successfully, all transactions started after the switch
operation began are assigned to transaction tables in the new undo tablespace.
the switch operation does not wait for transactions in the old undo tablespace to
commit. if there are any pending transactions in the old undo tablespace, the old
undo tablespace enters into a pending offline mode (status). in this mode,
existing transactions can continue to execute, but undo records for new user
transactions cannot be stored in this undo tablespace.

101. which two statements grant an object privilege to the user smith? (choose two.)
a. grant create table to smith;
b. grant create any table to smith;
c. grant create database link to smith;
d. grant alter rollback segment to smith;

e. grant all on scott.salary view to smith;


f. grant create public database link to smith;
g. grant all on scott.salary_view to smith with grant option;
answer: de
note: alter rollback segment in answer d is a system privilege, not an object privilege; the
statements that actually grant object privileges (grant <privilege> on <object> to <user>) are e
and g.
object privileges:
alter, delete, execute, index, insert, references, select, update

see oracle8 administrator’s guide release 8.0 december, 1997 part no. a58397-01 pg. 60.
(a58397.pdf)

102. which memory structure contains the information used by the server process to validate the
user privileges?
a. buffer cache
b. library cache
c. data dictionary cache
d. redo log buffer cache
answer: c
(a) false the database buffer cache is the portion of the sga that holds copies of data blocks read
from datafiles. all user processes concurrently connected to the instance share access to the
database buffer cache. see (a58227.pdf) pg 155. (6-3)
(b) false: the library cache includes the shared sql areas, private sql areas, pl/sql procedures and
packages, and control structures such as locks and library cache handles.
(c) true one of the most important parts of an oracle database is its data dictionary, which is a
read-only set of tables that provides information about its associated database. a data dictionary
contains:
- the definitions of all schema objects in the database (tables, views, indexes, clusters, synonyms,
sequences, procedures, functions, packages, triggers, and so on)
- how much space has been allocated for, and is currently used by, the schema objects
- default values for columns
- integrity constraint information
- the names of oracle users
- privileges and roles each user has been granted
- auditing information, such as who has accessed or updated various schema objects
- in trusted oracle, the labels of all schema objects and users (see your trusted oracle
documentation)
- other general database information
see (a58227.pdf) pg. 134. (4-2)
(d) false the information in a redo log file is used only to recover the database from a system or
media failure that prevents database data from being written to a database’s datafiles. see
(a58227.pdf) pg. 46. (1-12)

103. click the exhibit button and examine the tablespace requirements for a new database.
which three tablespaces can be created in the create database statement? (choose three.)
a. temp
b. users
c. system
d. app_ndx
e. undotbs
f. app_data
answer: bdf

104. examine these statements:


1) mount mounts the database for certain dba activities but does not provide user access to the
database.
2) the nomount command creates only the data buffer but does not provide access to the
database.
3) the open command enables users to access the database.
4) the startup command starts an instance.
which option correctly describes whether some or all of the statements are true or false?
a. 2 and 3 are true
b. 1 and 3 are true
c. 1 is true, 4 is false
d. 1 is false, 4 is true
e. 1 is false, 3 is true
f. 2 is false, 4 is false
answer: f
my answer: b (the explanation below finds statements 1, 3 and 4 true and only 2 false, which
matches option b rather than f)
explanation:
(1) is true:
mounted database: a database associated with an oracle instance. the database can be opened
or closed. a database must be both mounted and opened to be accessed by users. a database
that has been mounted but not opened can be accessed by dbas for some maintenance
purposes. see oracle8™ enterprise edition
getting started release 8.0.5 for windows nt june 19, 1998 part no. a64416-01 pg. 446.
(2) is false:
after issuing startup nomount, the instance starts. at this point, there is no database. only an
sga (system global area is a shared memory region that contains data and control information for
one oracle instance) and background processes are started in preparation for the creation of a
new database. see oracle8 administrator’s guide release 8.0 december, 1997 part no. a58397-01
pg. 60. (a58397.pdf)
(3) is true:
opening a mounted database makes it available for normal database operations. any valid user
can connect to an open database and access its information. when you open the database,
oracle opens the online datafiles and online redo log files. if a tablespace was offline when the
database was previously shut down, the tablespace and its corresponding datafiles will still be
offline when you reopen the database. if any of the datafiles or redo log files are not present when
you attempt to open the database, oracle returns an error. see oracle8
concepts release 8.0 december, 1997 part no. a58227-01 pg. 149. (a58227.pdf)
(4) is true:
startup: purpose start an oracle instance with several options, including mounting and
opening a database. prerequisites you must be connected to a database as internal, sysoper, or
sysdba. you cannot be connected via a multi-threaded server. see oracle ® enterprise manager
administrator’s guide release 1.6.0
june, 1998 part no. a63731-01 (oemug.pdf) pg. 503. (b-31)

105. you created a tablespace sh_tbs. the tablespace consists of two data files: sh_tbs_data1.dbf
and sh_tbs_data2.dbf. you created a nonpartitioned table sales_det in the sh_tbs tablespace.
which two statements are true? (choose two.)
a. the data segment is created as soon as the table is created.
b. the data segment is created when the first row in the table is inserted.
c. you can specify the name of the data file where the data segment should be stored.
d. the header block of the data segment contains a directory of the extents in the segment.
answer: ad
expl:
(a) true every nonclustered table or partition and every cluster in an oracle database has a single
data segment to hold all of its data. oracle creates this data segment when you create the
nonclustered table or cluster with the create command. if the table or index is partitioned, each
partition is stored in its own segment. see: oracle8 concepts release 8.0 december, 1997 part no.
a58227-01 (a58227.pdf) pg. 107. (2-15)
(b) false because of the previous.
(c) false
(d) true for maintenance purposes, the header block of each segment contains a directory of the
extents in that segment. see: oracle8 concepts release 8.0 december, 1997 part no. a58227-01
(a58227.pdf) pg. 103. (2-11)

106. which two statements are true about rebuilding an index? (choose two.)
a. the resulting index may contain deleted entries.
b. a new index is built using an existing index as the data source.
c. queries cannot use the existing index while the new index is being built.
d. during a rebuild, sufficient space is needed to accommodate both the old and the new index in
their respective tablespaces.
answer: bd (ac?); the question may be misspelled
(a) false
(b) true you can create an index using an existing index as the data source. creating an index in
this manner allows you to change storage characteristics or move to a new tablespace. re-
creating an index based on an existing data source also removes intra-block fragmentation. in
fact, compared to dropping the index and using the create index command, re-creating an
existing index offers better performance. (a58246.pdf) pg. 178. (10-10)
(c) false a further advantage of this approach is that the old index is still available for queries
(a58246.pdf) pg. 178. (10-10)
(d) true
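as an illustration only (index and tablespace names here are hypothetical), a rebuild that also moves the index to another tablespace looks like this:
alter index sales_cust_idx rebuild tablespace index02;
queries can keep using the old index while the rebuild runs, but space for both the old and the new copy is needed until it completes.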

107. consider this sql statement:


update employees set first_name = 'john'
where emp_id = 1009;
commit;
what happens when a user issues the commit in the above sql statement?
a. dirty buffers in the database buffer cache are flushed.
b. the server process places the commit record in the redo log buffer.
c. log writer (lgwr) writes the redo log buffer entries to the redo log files and data files.
d. the user process notifies the server process that the transaction is complete.
e. the user process notifies the server process that the resource locks can be released.
answer: c (b)
expl:
whenever a transaction is committed, lgwr writes the transaction’s redo entries from the redo log
buffer of the system global area (sga) to an online redo log file, and a system change number
(scn) is assigned to identify the redo entries for each committed transaction. however, redo
entries can be written to an online redo log file before the corresponding transaction is committed.
if the redo log buffer fills, or another transaction commits, lgwr flushes all of the redo log entries in
the redo log buffer to an online redo log file, even though some redo entries may not be
committed. see oracle8 backup and recovery guide release 8.0 december, 1997 part no.
a58396-01 pg. 32 (2-2)

108. a new user, psmith, has just joined the organization. you need to create psmith as a valid
user in the database. you have the following requirements:
1. create a user who is authenticated externally.
2. make sure the user has connect and resource privileges.
3. make sure the user does not have drop table and create user privileges.
4. set a quota of 100 mb on the default tablespace and 500 k on the temporary tablespace.
5. assign the user to the data_ts default tablespace and the temp_ts temporary tablespace.
which statement would you use to create the user?
a. create user psmith
identified externally
default tablespace data_ts
quota 100m on data_ts
quota 500k on temp_ts
temporary tablespace temp_ts;
revoke drop_table, create_user from psmith;
b. create user psmith

identified externally
default tablespace data_ts
quota 500k on temp_ts
quota 100m on data_ts
temporary tablespace temp_ts;
grant connect, resource to psmith;
c. create user psmith
identified externally
default tablespace data_ts
quota 100m on data_ts
quota 500k on temp_ts
temporary tablespace temp_ts;
grant connect to psmith;
d. create user psmith
identified globally as ''
default tablespace data_ts
quota 500k on temp_ts
quota 100m on data_ts
temporary tablespace temp_ts;
grant connect, resource to psmith;
revoke drop_table, create_user from psmith;
answer: b
expl:
(d) is false, because the user must be identified by the operating system, while identified globally as
'external_name' indicates that a user must be authenticated by the oracle security service.
(a) and (c) do not grant both the connect and resource privileges: (a) grants neither, and (c) grants
connect only.
create user:
purpose to create a database user, or an account through which you can log in to the database
and establish the means by which oracle permits access by the user. you can assign the following
optional properties to the user:
- default tablespace
- temporary tablespace
- quotas for allocating space in tablespaces
- profile containing resource limits
prerequisites you must have create user system privilege.
see oracle8™ sql reference release 8.0 december 1997 part no. a58225-01 pg. 541. (4-357)
(a588225.pdf)

109. you are logged on to a client. you do not have a secure connection from your client to the
host where your oracle database is running. which authentication mechanism allows you to
connect to the database using the sysdba privilege?
a. control file authentication
b. password file authentication
c. data dictionary authentication
d. operating system authentication
answer: b

the diagram shown here (administrator authentication methods, covering both remote and local
database administration) reduces to a simple decision flow: if you have a secure connection and want
to use os authentication, use os authentication; if you do not have a secure connection, or do not
want to use os authentication, use a password file. since the connection in this question is not
secure, password file authentication is the only way to connect with the sysdba privilege.

see: oracle8 administrator’s guide release 8.0 december, 1997 part no. a58397-01 pg. 37.
(a58397.pdf)

110. which type of file is part of the oracle database?


a. control file
b. password file
c. parameter files
d. archived log files
answer: a
(a) true: control file is an administrative file required to start and run the database. the control file
records the physical structure of the database. for example, a control file contains the database
name, and the names and locations of the database’s data files and redo log files. see: oracle8™
enterprise edition getting started release 8.0.5 for windows nt june 19, 1998 part no. a64416-01
(a55928.pdf) pg. 109. (5-9)
(b) false the password file is an administrative file used to authenticate privileged users; it is not
part of the database itself.
(c) false the initialization parameter file contains information to initialize the database and instance;
it is read at instance startup but is not part of the database.
(d) false archived redo log files are offline copies of filled online redo log files kept for recovery;
they are not part of the database.

111. you issue these queries to obtain information about the regions table:
sql> select segment_name, tablespace_name
2> from user_segments
3> where segment_name = 'regions';
you then issue this command to move the regions table:
alter table regions
move tablespace user_data;
what else must you do to complete the move of the regions table?
a. you must rebuild the reg_id_pk index.
b. you must re-create the region_id_nn and reg_id_pk constraints.
c. you must drop the regions table that is in the sample tablespace.
d. you must grant all privileges that were on the regions table in the sample tablespace to the
regions table in the user_data tablespace.
answer: a
each table’s data is stored in its own data segment, while each index’s data
is stored in its own index segment. so after move indexes must be rebuilt.
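as a sketch, using the index name from the question, the move would be followed by:
alter index reg_id_pk rebuild;
or, to place the rebuilt index in a specific tablespace at the same time (the tablespace name here is hypothetical):
alter index reg_id_pk rebuild tablespace indx_ts;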

112. examine this truncate table command:


truncate table departments;
which four are true about the command? (choose four.)
a. all extents are released.
b. all rows of the table are deleted.
c. any associated indexes are truncated.
d. no undo data is generated for the table's rows.
e. it reduces the number of extents allocated to the departments table to the original setting for
minextents.
answer: bcde
to remove all rows from a table or cluster and reset the storage parameters to
the values when the table or cluster was created.
you can use the truncate command to quickly remove all rows from a table or cluster. removing
rows with the truncate command is faster than removing them with the delete command for the
following reasons:
the truncate command is a data definition language (ddl) command and generates no rollback
information.
truncating a table does not fire the table’s delete triggers.
the truncate command allows you to optionally deallocate the space freed by the deleted rows.
the drop storage option deallocates all but the space specified by the table’s minextents
parameter. deleting rows with the truncate command is also more convenient than dropping and
re-creating a table because dropping and re-creating:
invalidates the table’s dependent objects, while truncating does not
requires you to regrant object privileges on the table, while truncating does not requires you to re-
create the table’s indexes, integrity constraints, and triggers and respecify its storage parameters.
see: oracle8 sql reference release 8.0 december 1997 part no. a58225-01 (a58225.pdf) pg.722.
(4-538)
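the storage options mentioned above are given directly on the command, for example:
truncate table departments drop storage; -- deallocate all extents above minextents (the default)
truncate table departments reuse storage; -- keep the allocated space for future inserts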

113. which data dictionary view would you use to get a list of object privileges for all database
users?
a. dba_tab_privs
b. all_tab_privs
c. user_tab_privs
d. all_tab_privs_made
answer: a
(a) true dba_tab_privs this view lists all grants on objects in the database. (a58242.pdf) pg. 261.
(2-91)
(b) false all_tab_privs this view lists the grants on objects for which the user or public is the
grantee. (a58242.pdf) pg. 203. (2-33)
(c) false user_tab_privs this view contains information on grants on objects for which the user is
the owner, grantor, or grantee. (a58242.pdf) pg. 333. (2-163)
(d) false all_tab_privs_made this view lists the user’s grants and grants on the user’s objects.
(a58242.pdf) pg. 204. (2-34)
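a minimal query against this view (assuming access to the dba_ views) could be:
select grantee, owner, table_name, privilege
from dba_tab_privs
order by grantee, owner, table_name;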

114. your database is in archivelog mode


which two must be true before the log writer (lgwr) can reuse a filled online redo log file? (choose
two).
a. the redo log file must be archived.
b. all of the data files must be backed up.
c. all transactions with entries in the redo log file must complete.
d. the data files belonging to the system tablespace must be backed up.
e. the changes recorded in the redo log file must be written to the data files.
answer: bd
archivelog the filled online redo log files are archived before they are reused in the cycle.
noarchivelog the filled online redo log files are not archived.
(a58227.pdf) pg. 72. (1-38)
when you run a database in archivelog mode, the archiving of the online redo log is enabled.
information in a database control file indicates that a group of filled online redo log files cannot be
used by lgwr until the group is archived (b true). a filled group is immediately available to the
process performing the archiving after a log switch occurs (when a group becomes inactive). the
process performing the archiving does not have to wait for the checkpoint of a log switch to
complete before it can access the inactive group for archiving (c false).
see: oracle8 administrator’s guide release 8.0 december, 1997 part no. a58397-01 (a58397.pdf)
pg. 454. (23-2)

115. which two statements are true about the control file? (choose two.)
a. the control file can be multiplexed up to eight times.
b. the control file is opened and read at the nomount stage of startup.
c. the control file is a text file that defines the current state of the physical database.
d. the control file maintains the integrity of the database, therefore loss of the control file requires
database recovery.
answer: (c?)d
(a) false. no limit(?) control_files indicates one or more names of control files separated by
commas. the instance startup procedure recognizes and opens all the listed files. the instance
maintains all listed control files during database operation. see: oracle8 administrator’s guide
release 8.0 december, 1997 part no. a58397-01 (a58397.pdf) pg. 126. (6-2)
(b) false after mounting the database, the instance finds the database control files and opens
them. (control files are specified in the control_files initialization parameter in the parameter file
used to start the instance.) oracle then reads the control files to get the names of the database’s
datafiles and redo log files. (a58227.pdf) pg. 148. (5-6)
(c) false the control file of a database is a small binary file necessary for the database to start
and operate successfully. a control file is updated continuously by oracle during database use, so
it must be available for writing whenever the database is open. if for some reason the control file
is not accessible, the database will not function properly. (a58227.pdf) pg. 693. (28-19)
(d) true see previous

116. your developers asked you to create an index on the prod_id column of the sales_history
table, which has 100 million rows.
the table has approximately 2 million rows of new data loaded on the first day of every month. for
the remainder of the month, the table is only queried. most reports are generated according to the
prod_id, which has 96 distinct values.
which type of index would be appropriate?
a. bitmap
b. reverse key
c. unique b-tree
d. normal b-tree
e. function based
f. non-unique concatenated
answer: a
regular b*-tree indexes work best when each key or key range references only a few records,
such as employee names. bitmap indexes, by contrast, work best when each key references
many records, such as employee gender.
bitmap indexes can substantially improve performance of queries with the following
characteristics:
- the where clause contains multiple predicates on low- or medium-cardinality columns.
- the individual predicates on these low- or medium-cardinality columns select a large number of
rows.
- bitmap indexes have been created on some or all of these low- or medium-cardinality columns.
- the tables being queried contain many rows.
you can use multiple bitmap indexes to evaluate the conditions on a single table. bitmap indexes
are thus highly advantageous for complex ad hoc queries that contain lengthy where clauses.
bitmap indexes can also provide optimal performance for aggregate queries. 96<<100 million low
cardinality ==> bitmap indexes, lot of rows ==> bitmap indexes.
see oracle8 tuning release 8.0 december, 1997 part no. a58246-01 (a58246.pdf) pg. 181. (10-13)
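a sketch of the requested index; the index name is hypothetical:
create bitmap index sales_history_prod_bix
on sales_history(prod_id);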
117. the server parameter file (spfile) provides which three advantages when managing
initialization parameters? (choose three.)
a. the oracle server maintains the server parameter file.

b. the server parameter file is created automatically when the instance is started.
c. changes can be made in memory and/or in the spfile with the alter system command.
d. the use of spfile provides the ability to make changes persistent across shut down and start up.
e. the oracle server keeps the server parameter file and the text initialization parameter file
synchronized.
answer: bde
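as an illustration of (c) and (d), a dynamic parameter can be changed in memory, in the spfile, or in both (the parameter and value here are only examples):
alter system set db_cache_size = 64m scope=both;
the server parameter file itself is created from a text parameter file with:
create spfile from pfile;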

118. you examine the alert log file and notice that errors are being generated from a sql*plus
session. which files are best for providing you with more information about the nature of the
problem?
a. control file
b. user trace files
c. background trace files
d. initialization parameter files
answer: b
expl:
(a) false the control file of a database is a small binary file necessary for the database to start
and operate successfully. a control file is updated continuously by oracle during database use, so
it must be available for writing whenever the database is open. if for some reason the control file
is not accessible, the database will not function properly. (a58227.pdf) pg. 693. (28-19)
a trace file is created each time an oracle instance starts or an unexpected event occurs in a user
process or background process. the name of the trace file includes the instance name, the
process name, and the oracle process number. the file extension or file type is usually trc, and, if
different, is noted in your operating system-specific oracle documentation. the contents of the
trace file may include dumps of the system global area, process global area, supervisor stack,
and registers. two initialization parameters specify where the trace files are stored:
(b) true user_dump_dest specifies the location for trace files created by user processes such as
sql*dba, sql*plus, or pro*c.
(c) false background_dump_dest specifies the location for trace files created by the oracle
background processes pmon, dbwr, lgwr, and smon.
see: oracle8™ error messages release 8.0.4 december 1997 part no. a58312-01 (a58312.pdf)
pg. 27. (1-5)
(d) false parameter file contains initialization parameters. these parameters specify the name of
the database, the amount of memory to allocate, the names of control files, and various limits and
other system parameters. (a58227.pdf) pg. 61. (1-27)

119. user smith created indexes on some tables owned by user john.
you need to display the following:
• index names
• index types
which data dictionary view(s) would you need to query?
a. dba_indexes only
b. dba_ind_columns only
c. dba_indexes and dba_users
d. dba_ind columns and dba_users
e. dba_indexes and dba_ind_expressions
f. dba_indexes, dba_tables, and dba_users
answer: a
(a) dba_indexes this view contains descriptions for all indexes in the database. to gather
statistics for this view, use the sql command analyze. this view supports parallel partitioned index
scans. (a58242.pdf) pg. 230. (2-60)
(b) dba_ind_columns this view contains descriptions of the columns comprising the indexes on
all tables and clusters. (a58242.pdf) pg. 232. (2-62)
(c) dba_users this view lists information about all users of the database. (a58242.pdf) pg. 267.
(2-97)
(e) dba_ind_expressions does not exist
(f) dba_tables this view contains descriptions of all relational tables in the database. to gather
statistics for this view, use the sql command analyze. (a58242.pdf) pg. 262. (2-92)
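a minimal query for the requirement in the question could be (note that table_owner, not owner, identifies john as the owner of the indexed tables):
select owner, index_name, index_type
from dba_indexes
where table_owner = 'JOHN';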

120. the users pward and psmith have left the company. you no longer want them to have access
to the database. you need to make sure that the objects they created in the database remain.
what do you need to do?
a. revoke the create session privilege from the user.
b. drop the user from the database with the cascade option.
c. delete the users and revoke the create session privilege.
d. delete the users by using the drop user command from the database.
answer: d
(a) true create session right: connect to the database.
(b) if the user’s schema contains any schema objects, use the cascade option to drop the user
and all associated objects and foreign keys that depend on the tables of the user successfully. if
you do not specify cascade and the user’s schema contains objects, an error message is
returned and the user is not dropped. before dropping a user whose schema contains objects,
thoroughly investigate which objects the user’s schema contains and the implications of dropping
them before the user is dropped. pay attention to any unknown cascading effects. for example, if
you intend to drop a user who owns a table, check whether any views or procedures depend on
that particular table. see: oracle8 administrator’s guide release 8.0 december, 1997 part no.
a58397-01 (a58397.pdf) pg. 385. (20-17)
(c) false once a user has been deleted, privileges can no longer be revoked from that user.
(d) when a user is dropped, the user and associated schema is removed from the data dictionary
and all schema objects contained in the user’s schema, if any, are immediately dropped. see:
oracle8 administrator’s guide release 8.0 december, 1997 part no. a58397-01 (a58397.pdf) pg.
385. (20-17)
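for reference, the statements discussed above, using a user name from the question:
drop user pward; -- fails with an error if the schema still contains objects
drop user pward cascade; -- drops the user and every object in the schema
revoke create session from pward; -- prevents logins but leaves the schema and its objects intact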

121. you need to create an index on the customer_id column of the customers table. the index
has these requirements:
1. the index will be called cust_pk.
2. the index should be sorted in ascending order.
3. the index should be created in the index01 tablespace, which is a dictionary managed
tablespace.
4. all extents of the index should be 1 mb in size.
5. the index should be unique.
6. no redo information should be generated when the index is created.
7. 20% of each data block should be left free for future index entries.
which command creates the index and meets all the requirements?
a. create unique index cust_pk on customers(customer_id)
tablespace index01
pctfree 20
storage (initial 1m next 1m pctincrease 0);
b. create unique index cust_pk on customer_id)
tablespace index01
pctfree 20
storage (initial 1m next 1m pctincrease 0)
nologging;
c. create unique index cust_pk on customers(customer_id)
tablespace index01
pctused 80
storage (initial 1m next 1m pctincrease 0)
nologging;
d. create unique index cust_pk on customers(customer_id)
tablespace index01
pctused 80
storage (initial 1m next 1m pctincrease 0);
answer: b ?d?
expl:
pctfree is the percentage of space to leave free for updates and insertions within each of the
index’s data blocks.
tablespace is the name of the tablespace to hold the index or index partition. if you omit this
option, oracle creates the index in the default tablespace of the owner of the schema containing
the index.
logging / nologging specifies that the creation of the index will be logged (logging) or not logged
(nologging) in the redo log file.
storage pctincrease specifies the percent by which the third and subsequent extents grow over
the preceding extent. the default value is 50, meaning that each subsequent extent is 50% larger
than the preceding extent.
next specifies the size in bytes of the next extent to be allocated to the object. you can use k or m
to specify the size in kilobytes or megabytes.
initial specifies the size in bytes of the object’s first extent. oracle allocates space for this extent
when you create the schema object. you can use k or m to specify this size in kilobytes or
megabytes.
asc / desc are allowed for db2 syntax compatibility, although indexes are always created in
ascending order.
(a58225.pdf) pg. 421. (4-237)
(a) false, nologging missing
(b) true (but "customers(" is missing; probably a misspelling in the question)
(c) false pctused clause does not exist.
(d) false nologging missing, pctused clause does not exist.

122. you need to know how many data files were specified as the maximum for the database
when it was created. you did not create the database and do not have the script used to create
the database.
how could you find this information?
a. query the dba_data_files data dictionary view.
b. query the v$datafile dynamic performance view.
c. issue the show parameter control_files command.
d. query the v$controlfile_record_section dynamic performance view.
answer: d
(a) false dba_data_files this view contains information about database files. we need information
about max number of datafiles. see (a58242.pdf) pg. 225. (2-55)
(b) v$datafile this view contains datafile information from the control file. (a58242.pdf) pg. 363.
(3-23)
(c) false show parameter control_files displays the names and locations of the control files, not the
maximum number of data files allowed.
(d) v$controlfile_record_section this view displays information about the controlfile record
sections. (a58242.pdf) pg. 360. (3-20)
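a sketch of the query behind answer (d); the records_total value for the datafile section reflects the maxdatafiles setting recorded in the control file:
select type, records_total, records_used
from v$controlfile_record_section
where type = 'DATAFILE';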

123. which two actions cause a log switch? (choose two.)


a. a transaction completes.
b. the instance is started.
c. the instance is shut down
d. the current online redo log group is filled
e. the alter system switch logfile command is issued.
answer: de (?ab?)
a log switch, by default, takes place automatically when the current online redo log file group fills.
see: oracle8 administrator’s guide release 8.0 december, 1997 part no. a58397-01 (a58397.pdf)
pg. 118. (5-10)
to force a log switch, you must have the alter system privilege. to force a log switch, use either
the switch logfile menu item of enterprise manager or the sql command alter system with the
switch logfile option. the following statement forces a log switch: alter system switch logfile; see:
oracle8 administrator’s guide release 8.0 december, 1997 part no. a58397-01 (a58397.pdf) pg.
121. (5-13)

124. evaluate the sql command:


create temporary tablespace temp_tbs
tempfile '/usr/oracle9i/orahomel/temp_data.dbf'
size 2m
autoextend on;
which two statements are true about the temp_tbs tablespace? (choose two.)
a. temp_tbs has locally managed extents.
b. temp_tbs has dictionary managed extents.
c. you can rename the tempfile temp_data.dbf.
d. you can add a tempfile to the temp_tbs tablespace.
e. you can explicitly create objects in the temp_tbs tablespace.
answer: bc
(a) true use the create temporary tablespace statement to create a locally managed temporary
tablespace, which is an allocation of space in the database that can contain schema objects for
the duration of a session. if you subsequently assign this temporary tablespace to a particular
user, then oracle will also use this tablespace for sorting operations in transactions initiated by
that user. (a96540.pdf) pg. 1258. (15-92)
(b) false because of previous
(c) ?
(d) true additional tempfiles can be added with the alter tablespace ... add tempfile command.
(e) false a temporary tablespace contains only temporary segments (for example sort segments); you
cannot explicitly create permanent schema objects in it.
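for illustration, a second tempfile could be added to the tablespace (the file name is hypothetical):
alter tablespace temp_tbs
add tempfile '/usr/oracle9i/orahomel/temp_data2.dbf' size 2m;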

125. examine the command:


create table employee
( employee_id number constraint employee_empid_pk
primary key,
employee_name varchar2(30),
manager_id number constraint employee_mgrid_fk
references employee(employee_id));
the emp table contains self referential integrity requiring all not null values inserted in the
manager_id column to exist in the employee_id column.
which view or combination of views is required to return the name of the foreign key constraint
and the referenced primary key?
a. dba_tables only
b. dba_constraints only
c. dba_tabs_columns only [misspelled, dba_tab_columns]
d. dba_cons_columns only
e. dba_tables and dba_constraints
f. dba_tables and dba_cons_columns
answer: b
(a) false dba_tables this view contains descriptions of all relational tables in the database. to
gather statistics for this view, use the sql command analyze. no constraint information see
(a58242.pdf) pg. 262 (2-92)
(b) true dba_constraints this view contains constraint definitions on all tables. see (a58242.pdf)
pg. 253 (2-83)
(c) false dba_tab_columns this view contains information which describes columns of all tables,
views, and clusters. no constraint name information. see (a58242.pdf) pg. 259 (2-89)
(d) false dba_cons_columns this view contains information about accessible columns in
constraint definitions. see (a58242.pdf) pg. 224 (2-54)
(e) false don’t need the dba_tables
(f) false
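a sketch of the query this answer implies, using the names from the statement above:
select constraint_name, constraint_type, r_constraint_name
from dba_constraints
where table_name = 'EMPLOYEE'
and constraint_type = 'R';
the r_constraint_name column returns employee_empid_pk, the referenced primary key constraint.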

126. which statement about the shared pool is true?


a. the shared pool cannot be dynamically resized.
b. the shared pool contains only fixed structures
c. the shared pool consists of the library cache and buffer cache.
d. the shared pool stores the most recently executed sql statements and the most recently
accessed data definitions.
answer: d
(c) false the shared pool portion of the sga contains three major areas: library cache, dictionary
cache, and control structures.
(d) true in general, any item (shared sql area or dictionary row) in the shared pool remains until it
is flushed according to a modified lru algorithm. the memory for items that are not being used
regularly is freed if space is required for new items that must be allocated some space in the
shared pool.
(a58227.pdf) pg. 158. (6-6)

127. as a dba, one of your tasks is to periodically monitor the alert log file and the background
trace files. in doing so, you notice repeated messages indicating that log writer (lgwr) frequently
has to wait for a redo log group because a checkpoint has not completed or a redo log group has
not been archived.
what should you do to eliminate the wait lgwr frequently encounters?
a. increase the number of redo log groups to guarantee that the groups are always available to
lgwr.
b. increase the size of the log buffer to guarantee that lgwr always has information to write.
c. decrease the size of the redo buffer cache to guarantee that lgwr always has information to
write.
d. decrease the number of redo log groups to guarantee that checkpoints are completed prior to
lgwr writing.
answer: a
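as a sketch, answer (a) translates into statements of this form (group number, file names, and size are hypothetical):
alter database add logfile group 4
('/u01/oradata/db01/redo04a.log', '/u02/oradata/db01/redo04b.log') size 10m;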

128. which privilege is required to create a database?


a. dba
b. sysdba
c. sysoper
d. resource
answer: b
you must have the osdba role enabled.
the roles connect, resource, dba, exp_full_database, and imp_full_database are defined
automatically for oracle databases. these roles are provided for backward compatibility to earlier
versions of oracle and can be modified in the same manner as any other role in an oracle
database.
see (a58227.pdf) pg. 622. (26-16)
(c) false sysoper permits you to perform startup, shutdown, alter database open/mount, alter
database backup, archive log, and recover, and includes the restricted session privilege.
(b) true sysdba contains all system privileges with admin option, and the sysoper system
privilege; permits create database and time-based recovery.
see (a58227.pdf) pg. 637. (25-7)

129. you need to create an index on the passport_records table. it contains 10 million rows of
data. the key columns have low cardinality. the queries generated against this table use a
combination of multiple where conditions involving the or operator.
which type of index would be best for this type of table?
a. bitmap
b. unique
c. partitioned
d. reverse key
e. single column
f. function-based
answer: a
bitmap indexes can substantially improve performance of queries with the following
characteristics:
- the where clause contains multiple predicates on low- or medium-cardinality columns.
- the individual predicates on these low- or medium-cardinality columns select a large number of
rows.
- bitmap indexes have been created on some or all of these low- or medium-cardinality columns.
- the tables being queried contain many rows.
you can use multiple bitmap indexes to evaluate the conditions on a single table. bitmap indexes
are thus highly advantageous for complex ad hoc queries that contain lengthy where clauses.
bitmap indexes can also provide optimal performance for aggregate queries.
(a) true low cardinality ==> bitmap indexes, lot of rows ==> bitmap indexes.
see oracle8 tuning release 8.0 december, 1997 part no. a58246-01 (a58246.pdf) pg. 181. (10-13)

130. you need to determine the location of all the tables and indexes owned by one user. in which
dba view would you look?
a. dba_tables
b. dba_indexes
c. dba_segments
d. dba_tablespaces
answer: c

(a) false dba_tables this view contains descriptions of all relational tables in the database. to
gather statistics for this view, use the sql command analyze. no index information see
(a58242.pdf) pg. 262 (2-92)
(b) false dba_indexes this view contains descriptions for all indexes in the database. to gather
statistics for this view, use the sql command analyze. this view supports parallel partitioned index
scans. no table information. see (a58242.pdf) pg. 230 (2-60)
(c) true dba_segments this view contains information about storage allocated for all database
segments. username of the segment owner, type of segment: … table, index …. see (a58242.pdf)
pg. 254 (2-84)
(d) false dba_tablespaces this view contains descriptions of all tablespaces. no table and index
information. see (a58242.pdf) pg. 264 (2-94)
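a minimal query for this requirement (the owner name is only an example):
select segment_name, segment_type, tablespace_name
from dba_segments
where owner = 'SCOTT'
and segment_type in ('TABLE', 'INDEX');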

131. which two are true about the data dictionary views with prefix user_? (choose two.)
a. the column owner is implied to be the current user.
b. a user needs the select any table system privilege to query these views.
c. the definitions of these views are stored in the user's default tablespace.
d. these views return information about all objects to which the user has access.
e. users can issue an insert statement on these views to change the value in the underlying base
tables.
f. a user who has the create public synonym system privilege can create public synonyms for
these views.
answer: cd
(a) true. views with the prefix user usually exclude the column owner. this column is implied in
the user views to be the user issuing the query. see (a58227.pdf) pg. 137. (4-5) have columns
identical to the other views, except that the column owner is implied the current user. see
(a58227.pdf) pg. 138. (4-6)
(b) false the data dictionary views accessible to all users of an oracle server. most views can be
accessed by any user with the create_session privilege. the data dictionary views that begin with
dba_ are restricted. these views can be accessed only by users with the select_any_table
privilege. this privilege is assigned to the dba role when the system is initially installed. see
(a58242.pdf) pg. 171 (2-1)
(c) false the data dictionary is always available when the database is open. it resides in the
system tablespace, which is always online. see (a58227.pdf) pg. 137. (4-5)
(d) true
(e) false any oracle user can use the data dictionary as a read-only reference for information
about the database. see (a58227.pdf) pg. 135. (4-3)
(f) ?

132. which two statements about segments are true? (choose two.)
a. each table in a cluster has its own segment.
b. each partition in a partitioned table is a segment.
c. all data in a table segment must be stored in one tablespace.
d. if a table has three indexes only one segment is used for all indexes.
e. a segment is created when an extent is created, extended, or altered.
f. a nested table of a column within a table uses the parent table segment.
answer: af
(a) true a segment is a set of extents that contains all the data for a specific logical storage
structure within a tablespace. for example, for each table, oracle allocates one or more extents to
form that table’s data segment;
(b) false every nonclustered table or partition and every cluster in an oracle database has a
single data segment to hold all of its data.
(d) false for each index, oracle allocates one or more extents to form its index segment.
(e) false oracle creates this data segment when you create the nonclustered table or cluster with
the create command.
oracle databases use four types of segments:
- data segments
- index segments
- temporary segments
- rollback segments
see: (a58227.pdf) pg. 107. (2-15)

133. your database contains a locally managed uniform sized tablespace with automatic
segment-space management, which contains only tables. currently, the uniform size for the
tablespace is 512 k.
because the tables have become so large, your configuration must change to improve
performance. now the tables must reside in a tablespace that is locally managed, with uniform
size of 5 mb and automatic segment-space management.
what must you do to meet the new requirements?
a. the new requirements cannot be met.
b. re-create the control file with the correct settings.
c. use the alter tablespace command to increase the uniform size.
d. create a new tablespace with correct settings then move the tables into the new tablespace.
answer: d
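a sketch of answer (d), with hypothetical names, file, and sizes:
create tablespace app_data_5m
datafile '/u01/oradata/db01/app_data_5m01.dbf' size 500m
extent management local uniform size 5m
segment space management auto;
alter table sales move tablespace app_data_5m;
as noted in question 111, any indexes on a moved table must then be rebuilt.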

134. a dba has issued the following sql statement:


select max_blocks
from dba_ts_quotas
where tablespace_name='user_tbs'
and username='jenny';
user jenny has unlimited quota on the user_tbs tablespace. which value will the query return?
a. 0
b. 1
c. -1
d. null
e. 'unlimited'
answer: c

135. you set the value of the os_authent_prefix initialization parameter to ops$ and created a
user account by issuing this sql statement:
create user ops$smith
identified externally;
which two statements are ?not? true? (choose two.)
a. oracle server assigns the default profile to the user.
b. you can specify the password expire clause for an external user account.
c. the user does not require create session system privilege to connect to the database.
d. if you query the dba_users data dictionary view the username column will contain the value
smith.
e. the user account is maintained by oracle, but password administration and user authentication
are performed by the operating system or a network service.
answer: cd
with external authentication, your database relies on the underlying operating system or network
authentication service to restrict access to database accounts. a database password is not used
for this type of login. if your operating system or network service permits, you can have it
authenticate users. if you do so, set the parameter os_authent_prefix, and use this prefix in oracle
usernames. this parameter defines a prefix that oracle adds to the beginning of every user’s
operating system account name. oracle compares the prefixed username with the oracle
usernames in the database when a user attempts to connect. if a user with an operating system
account named "tsmith" is to connect to an oracle database and be authenticated by the operating
system, oracle checks that there is a corresponding database user “ops$tsmith” and, if so, allows
the user to connect.
see: (a58397.pdf) pg. 377. (20-9)
(a) ?true? profile reassigns the profile named to the user. the profile limits the amount of
database resources the user can use. if you omit this clause, oracle assigns the default profile to
the user. see (a58225.pdf) pg. 541. (4-357)
(b) ?true? from the create user syntax. see (a58225.pdf) pg. 541. (4-357)
(c) ?false?
(d) ?false
(e) ?true?

136. which three statements are true about the use of online redo log files? (choose three.)
a. redo log files are used only for recovery.
b. each redo log within a group is called a member.
c. redo log files are organized into a minimum of three groups.
d. an oracle database requires at least three online redo log members.
e. redo log files provide the database with a read consistency method.
f. redo log files provide the means to redo transactions in the event of an instance failure.
answer: abd
(a) true the information in a redo log file is used only to recover the database from a system or
media failure that prevents database data from being written to a database’s datafiles. see
(a58227.pdf) pg. 46. (1-12)
(c) false every oracle database has a set of two or more redo log files. 2 files can not be
organized to 3 groups see (a58227.pdf) pg. 46. (1-12)
(e) false every database contains one or more rollback segments, which are portions of the
database that record the actions of transactions in the event that a transaction is rolled back. you
use rollback segments to provide read consistency, rollback transactions, and recover the
database. (a58227.pdf) pg. 109. (2-17)

137. which steps should you follow to increase the size of the online redo log groups?
a. use the alter database resize logfile group command for each group to be resized.
b. use the alter database resize logfile member command for each member within the group
being resized.
c. add new redo log groups using the alter database add logfile group command with the new
size.
drop the old redo log files using the alter database drop logfile group command.
d. use the alter database resize logfile group command for each group to be resized.
use the alter database resize logfile member command for each member within the group.
answer: c
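a sketch of answer (c), with hypothetical group numbers, file name, and size:
alter database add logfile group 3
('/u01/oradata/db01/redo03a.log') size 50m;
alter system switch logfile; -- make the old group inactive (and archived, if in archivelog mode)
alter database drop logfile group 1;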

138. oracle guarantees read-consistency for queries against tables. what provides read-
consistency?
a. redo logs
b. control file
c. undo ?rollback? segments
d. data dictionary
answer: c
(a) false the information in a redo log file is used only to recover the database from a system or
media failure that prevents database data from being written to a database’s datafiles. see
(a58227.pdf) pg. 46. (1-12)
(b) false the control file of a database is a small binary file necessary for the database to start
and operate successfully. (a58227.pdf) pg. 693. (28-19)
(c) every database contains one or more rollback segments, which are portions of the database
that record the actions of transactions in the event that a transaction is rolled back. you use
rollback segments to provide read consistency, rollback transactions, and recover the database.
(a58227.pdf) pg. 109. (2-17)
(d) false each oracle database has a data dictionary. an oracle data dictionary is a set of tables
and views that are used as a read-only reference about the database. for example, a data
dictionary stores information about both the logical and physical structure of the database.
(a58227.pdf) pg. 81, 134 (1-47, 4-1)

139. you need to shut down your database. you want all of the users who are connected to be
able to complete any current transactions. which shutdown mode should you specify in the
shutdown command?
a. abort
b. normal
c. immediate
d. transactional
answer: d
(a) false shutdown abort terminates the instance immediately; current transactions are neither
completed nor cleanly rolled back, and instance recovery is required at the next startup.
(b) false normal database shutdown proceeds with the following conditions:
- no new connections are allowed after the statement is issued.
- before the database is shut down, oracle waits for all currently connected users to disconnect
from the database.
- the next startup of the database will not require any instance recovery procedures.
(c) false immediate database shutdown proceeds with the following conditions:
- current client sql statements being processed by oracle are terminated immediately.
- any uncommitted transactions are rolled back. (if long uncommitted transactions exist, this
method of shutdown might not complete quickly, despite its name.)
- oracle does not wait for users currently connected to the database to disconnect;
- oracle implicitly rolls back active transactions and disconnects all connected users.
(d) true after submitting this statement, no client can start a new transaction on this particular
instance. if a client attempts to start a new transaction, they are disconnected. after all
transactions have either committed or aborted, any client still connected to the instance is
disconnected. at this point, the instance shuts down just as it would when a shutdown immediate
statement is submitted. a transactional shutdown prevents clients from losing work, and at the
same time, does not require all users to log off.
see (a58397.pdf) pg. 78. (3-8)
140. you decided to use multiple buffer pools in the database buffer cache of your database. you
set the sizes of the buffer pools with the db_keep_cache_size and db_recycle_cache_size
parameters and restarted your instance.
what else must you do to enable the use of the buffer pools?
a. re-create the schema objects and assign them to the appropriate buffer pool.
b. list each object with the appropriate buffer pool initialization parameter.
c. shut down the database to change the buffer pool assignments for each schema object.
d. issue the alter statement and specify the buffer pool in the buffer_pool clause for the schema
objects you want to assign to each buffer pool.
answer: d
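for illustration of answer (d), existing segments are assigned to a pool through their storage clause (the object names are hypothetical):
alter table sales storage (buffer_pool keep);
alter index sales_pk storage (buffer_pool recycle);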

141. a user calls and informs you that a 'failure to extend tablespace' error was received while
inserting into a table. the tablespace is locally managed.
which three solutions can resolve this problem? (choose three.)
a. add a data file to the tablespace
b. change the default storage clause for the tablespace
c. alter a data file belonging to the tablespace to autoextend
d. resize a data file belonging to the tablespace to be larger
e. alter the next extent size to be smaller, to fit into the available space
answer: abc
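the three workable answers correspond to statements of this form (tablespace, file names, and sizes are only examples):
alter tablespace app_data add datafile '/u01/oradata/db01/app_data02.dbf' size 100m;
alter database datafile '/u01/oradata/db01/app_data01.dbf' autoextend on next 10m maxsize 2000m;
alter database datafile '/u01/oradata/db01/app_data01.dbf' resize 500m;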

142. which table type should you use to provide fast key-based access to table data for queries
involving exact matches and range searches?
a. regular table
b. clustered table
c. partitioned table
d. index-organized table
answer: d
(b) false clusters are an optional method of storing table data. clusters are groups of one or more
tables physically stored together because they share common columns and are often used
together. because related rows are physically stored together, disk access time improves.
(a58227.pdf) pg. 79. (1-45)
(c) false partitioning addresses the key problem of supporting very large tables and indexes by
allowing you to decompose them into smaller and more manageable pieces called partitions.
once partitions are defined, sql statements can access and manipulate the partitions rather than
entire tables or indexes. partitions are especially useful in data warehouse applications, which
commonly store and analyze large amounts of historical data. all partitions of a table or index
have the same logical attributes, although their physical attributes can be different. for example,
all partitions in a table share the same column and constraint definitions; and all partitions in an
index share the same index columns. however, storage specifications and other physical
attributes such as pctfree, pctused, initrans, and maxtrans can vary for different partitions of the
same table or index. each partition is stored in a separate segment. optionally, you can store
each partition in a separate tablespace. see (a58227.pdf) pg. 244. (9-2)
(d) true an index-organized table differs from a regular table in that the data for the table is held
in its associated index. changes to the table data, such as adding new rows, updating rows, or
deleting rows, result only in updating the index. the index-organized table is like a regular table
with an index on one or more of its columns, but instead of maintaining two separate storages for
the table and the b*-tree index, the database system only maintains a single b*-tree index which
contains both the encoded key value and the associated column values for the corresponding
row. benefits of index-organized tables because rows are stored in the index, index-organized
tables provide a faster key-based access to table data for queries involving exact match and/or
range search. (a58227.pdf) pg. 229. (8-29)
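a minimal index-organized table, for illustration only (names are hypothetical):
create table order_items
( order_id number,
line_no number,
product_id number,
constraint order_items_pk primary key (order_id, line_no))
organization index;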

143. during a checkpoint in an oracle9i database, a number of dirty database buffers covered by
the log being checkpointed are written to the data files by dbwn.
which parameter determines the number of buffers being written by dbwn?
a. log_checkpoint_target
b. fast_start_mttr_target
c. log_checkpoint_io_target
d. fast_start_checkpoint_target
answer: d

144. which statement about an oracle instance is true?


a. the redo log buffer is not part of the shared memory area of an oracle instance.
b. multiple instances can execute on the same computer, each accessing its own physical
database.
c. an oracle instance is a combination of memory structures, background processes, and user
processes.

d. in a shared server environment, the memory structure component of an instance consists of a single sga and a single pga.
answer: d
(a) false the redo log buffer is a circular buffer in the sga that holds information about changes
made to the database. see (a58227.pdf) pg. 158, 144. (6-6, 5-2)

(c) false oracle allocates a memory area called the system global area (sga) and starts one or
more oracle processes. this combination of the sga and the oracle processes is called an oracle
instance. see (a58227.pdf) pg. 144. (5-2)
(d) true a pga is nonshared memory area to which a process can write. one pga is allocated for
each server process; the pga is exclusive to that server process and is read and written only by
oracle code acting on behalf of that process. a pga is allocated by oracle when a user connects to
an oracle database and a session is created, though this varies by operating system and
configuration. the basic memory structures associated with oracle include:
- software code areas
- system global area (sga):
– the database buffer cache
– the redo log buffer
– the shared pool
- program global areas (pga):
– the stack areas
– the data areas
-sort areas
see (a58227.pdf) pg. 154. (6-2)

145. the current password file allows for five entries. new dbas have been hired and five more
entries need to be added to the file, for a total of ten. how can you increase the allowed number
of entries in the password file?
a. manually edit the password file and add the new entries.
b. alter the current password file and resize if to be larger.
c. add the new entries; the password file will automatically grow.
d. drop the current password file, recreate it with the appropriate number of entries and add
everyone again.
answer: d
you can create a password file using the password file creation utility, orapwd or, for selected
operating systems, you can create this file as part of your standard installation.
entries this parameter sets the maximum number of entries allowed in the password file. this
corresponds to the maximum number of distinct users allowed to connect to the database as
sysdba or sysoper. if you ever need to exceed this limit, you must create a new password file. it is
safest to select a number larger than you think you will ever need. see (a58397.pdf) pg. 39, 41.
(1-9, 1-11)
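a sketch of recreating the file with room for ten entries (the file name and password are placeholders; the exact file location is platform specific):
orapwd file=$ORACLE_HOME/dbs/orapwdb01 password=secret entries=10
the remote_login_passwordfile initialization parameter must also be set (for example to exclusive) for the password file to be used, and the sysdba/sysoper grants must then be issued again for each dba.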

146. abc company consolidated into one office building, so the very large employees table no
longer requires the office_location column. the dba decided to drop the column using the syntax
below:
alter table hr.employees
drop column building_location
cascade constraints;
dropping this column has turned out to be very time consuming and is requiring a large amount of
undo space.
what could the dba have done to minimize the problem regarding time and undo space
consumption?
a. use the export and import utilities to bypass undo.
b. mark the column as unused.
remove the column at a later time when less activity is on the system.
c. drop all indexes and constraints associated with the column prior to dropping the column.
d. mark the column invalid prior to beginning the drop to bypass undo.
remove the column using the drop unused columns command.
e. add a checkpoint to the drop unused columns command to minimize undo space.
answer: e
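for illustration, the set unused / drop unused columns approach and the checkpoint clause mentioned in the options look like this (using the column name from the scenario text):
alter table hr.employees set unused column office_location;
alter table hr.employees drop unused columns checkpoint 1000; -- run later, during a quiet period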

147. user a issues this command:


update emp
set id=200
where id=1
then user b issues this command:
update emp
set id=300
where id=1
user b informs you that the update statement seems to be hung. how can you resolve the
problem so user b can continue working?
a. no action is required
b. ask user b to abort the statement
c. ask user a to commit the transaction
d. ask user b to commit the transaction
answer: d
because of row-level locking, while user a's transaction remains uncommitted no other session can
modify the same rows; the lock is released only when user a commits or rolls back.

148. anne issued this sql statement to grant bill access to the customers table in anne's schema:
grant select on customers to bill with grant option;
bill issued this sql statement to grant claire access to the customers table in anne's schema:
grant select on anne.customers to claire;
later, anne decides to revoke the select privilege on the customers table from bill.
which statement correctly describes both what anne can do to revoke the privilege, and the effect
of the revoke command?
a. anne can run the revoke select on customers from bill statement. both bill and claire lose their
access to the customers table.
b. anne can run the revoke select on customers from bill statement. bill loses access to the
customers table, but claire will keep her access.
c. anne cannot run the revoke select on customers from bill statement unless bill first revokes
claire's access to the customers table.
d. anne must run the revoke select on customers from bill cascade statement. both bill and claire
lose their access to the customers table.
answer: b

149. which data dictionary view would you use to get a list of all database users and their default
settings?
a. all_users
b. users_users [misspelled; user_users is the correct name]
c. dba_users
d. v$session
answer: c(d)
(a) false all_users this view contains information about all users of the database: name of the
user, id number of the user, user creation date, but no default settings. see (a58242.pdf) pg. 209
(2-39)
(b) false user_users this view contains information about the current user. not all user. see
(a58242.pdf) pg. 339 (2-169)
(c) true dba_users this view lists information about all users of the database. default tablespace
for data, default tablespace for temporary table see (a58242.pdf) pg. 267 (2-97)
(d) false v$session this view lists session information for each current session. see (a58242.pdf)
pg. 417 (3-77)
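a minimal query for the requirement in the question:
select username, default_tablespace, temporary_tablespace, profile, created
from dba_users;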

150. which statement should you use to obtain information about the number, names, status, and
location of the control files?
a. select name, status from v$parameter;
b. select name, status from v$controlfile;
c. select name, status, location from v$control_files;
d. select status, location from v$parameter where parameter=control_files;
answer: ?b? c
(a) false v$parameter this view lists information about initialization parameters. see (a58242.pdf)
pg. 402.
(b) true v$controlfile this view lists the names of the control files. see (a58242.pdf) pg. 360.
(c) false v$control_files does not exist see (a58242.pdf)
(d) false v$parameter this view lists information about initialization parameters, it has no
parameter column. see (a58242.pdf) pg. 402.
