[Slide: Automatic Maintenance Tasks architecture. MMON and the Autotask Background Process (ABP) schedule jobs (Job1 … Jobn) with urgent, high, and medium priorities inside the Maintenance Window; examples include statistics collection (high) and space management (medium). Task metadata is exposed through DBA_AUTOTASK_TASK.]
The Automatic Maintenance Task feature decides when and in what order tasks are performed. As
a DBA, you can control the following:
• If the maintenance window turns out to be inadequate for the maintenance workload, you can
reconfigure the maintenance window.
• You can control the percentage of resources allocated to the automated maintenance tasks
during each window.
• You can enable or disable individual tasks in some or all maintenance windows.
• In a RAC environment, you can shift maintenance work to one or more instances by mapping
maintenance work to a service. Enabling the service on a subset of instances shifts
maintenance work to those instances.
As shown on the slide, Enterprise Manager is the preferred way to control Automatic
Maintenance Tasks. However, you can also use the DBMS_AUTO_TASK_ADMIN package.
[Slide: I/O statistics cover both OLTP workloads (small random I/Os) and OLAP workloads (large sequential I/Os), and are exposed through the V$IOSTAT_FUNCTION, V$IOSTAT_FILE, and V$IOSTAT_CONSUMER_GROUP views as well as through AWR and Enterprise Manager.]
Default Plan
The slide above shows how DEFAULT_PLAN is created.
Note that there are no limits for its thresholds.
As you can see, Oracle Database 11g introduced two new I/O limits that you can define as
thresholds in a resource manager plan.
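As a sketch, the two I/O thresholds correspond to the SWITCH_IO_REQS and SWITCH_IO_MEGABYTES parameters of DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE; the plan, group, and limit values below are illustrative, not recommendations:

```sql
-- Demote sessions of REPORTING_GROUP to LOW_GROUP once they issue more than
-- 10,000 I/O requests or transfer more than 2,500 MB.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                => 'DAYTIME_PLAN',
    group_or_subplan    => 'REPORTING_GROUP',
    comment             => 'Demote I/O-heavy reporting sessions',
    switch_group        => 'LOW_GROUP',
    switch_io_reqs      => 10000,
    switch_io_megabytes => 2500);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```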
The main enhancements to the Scheduler in Oracle Database 11g are listed. The job propagation
scheduling feature is discussed in the Oracle Streams e-study.
[Slide: the scheduling database submits jobs to a Scheduler agent on a remote host, which executes both OS jobs and database jobs.]
Remote jobs
• Operating system-level jobs
• Scripts, binaries, and so on
• No Oracle database required
• The agent starts and manages jobs
Distributed jobs
• Database jobs on other servers
Oracle Scheduler has added support for remote and multi-node jobs. It provides the ability to run
a job on a host without a database.
Additionally, users can provide a list of databases on which to execute a job. Creation
and maintenance are done on a single database, but at run time exact replicas are executed on all
the databases specified.
The agent is a separately installable component, but it is also included in every database. When
the agent is installed as part of the database, no configuration is necessary. If the agent
installed as part of the database is required to run jobs from another database, an additional step
is necessary: registering with that database and starting the agent in the background.
During standalone installation, the agent should be registered with at least one database. It is
possible to automate this registration if the user is willing to include the database registration
password in the installer file. This allows for silent automated installs.
Optional information includes:
• The path to install the agent into
• Whether to automatically start the agent
• Whether to set up the agent to start automatically on every computer startup
If, after installation of the agent, another database is required to run jobs on the agent, the agent
must be registered with that database.
A remote database job refers to a collection of jobs with the following properties:
• The jobs are created on one database.
• The jobs share their metadata with the exception of run-related attributes, thus making the
jobs copies of each other.
• Each copy of the job executes on a different database, independently of the others. For
example, a job may execute successfully several times on one database while failing to
execute at all on another database. (The success or failure of a job on one database does not
affect any of the other copies.)
• All copies of the job can be altered or manipulated from the database where the job was
originally created.
The copies of the job that are running on the various databases are, in effect, independent jobs.
However, they are still linked in the sense that they can be manipulated as a group. The original
job created by the user is called the parent job. To distinguish the copies of the job from other
copied/cloned jobs, the copies are called job instances.
The database on which a remote database job is created is called the source database of the job.
The databases on which the job is executed are called destination databases. If the job is
configured to execute on the database on which it was created, then that database is both the
source database for the job as well as one of its destinations.
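As a hedged sketch of directing a job at a remote destination (the host, port, script path, and credential name below are invented for illustration), a remote external job can be created and then pointed at an agent via the credential_name and destination attributes:

```sql
-- Create an external job, then direct it at a remote agent
-- using a previously stored credential.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'BACKUP_SCRIPT_JOB',
    job_type   => 'EXECUTABLE',
    job_action => '/u01/scripts/backup.sh',
    enabled    => FALSE);
  DBMS_SCHEDULER.SET_ATTRIBUTE('BACKUP_SCRIPT_JOB',
                               'credential_name', 'HOST_CRED');
  DBMS_SCHEDULER.SET_ATTRIBUTE('BACKUP_SCRIPT_JOB',
                               'destination', 'remotehost.example.com:1500');
  DBMS_SCHEDULER.ENABLE('BACKUP_SCRIPT_JOB');
END;
/
```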
• MAX_CONCURRENT_JOBS
• MAX_CONCURRENT_JOBS_PER_USER
• LIMITING_CPU_THRESHOLD
MAX_CONCURRENT_JOBS: This is the absolute maximum number of jobs allowed to run on the
host simultaneously. This only includes jobs run through and controlled by the agent. The default
value is 100. If jobs run through the agent are taking up too much CPU, memory, or I/O, the
administrator can reduce this number.
MAX_CONCURRENT_JOBS_PER_USER: Allowable values are 1-1000. If multiple users run
jobs on a remote host, this parameter limits the maximum number of jobs a single user
can run simultaneously. The default value is 100. Because this is the same as the default value for
MAX_CONCURRENT_JOBS, it has no effect by default. If several users use a remote
host and one is consuming more CPU, memory, or I/O than he should, this number can be reduced.
LIMITING_CPU_THRESHOLD: Allowable values are 10-100. This is the CPU threshold above
which new jobs cannot run. The default value is 100, which effectively turns off limiting based on
CPU usage. If a remote host should never be completely loaded, or should always have CPU
reserved for another use, this value should be set to ensure this.
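These parameters are set in the agent's configuration file (commonly named schagent.conf; the file name and the values below are illustrative assumptions, not defaults to copy):

```
# Cap the total number of agent-run jobs on this host
MAX_CONCURRENT_JOBS = 20
# Cap the number of simultaneous jobs per submitting user
MAX_CONCURRENT_JOBS_PER_USER = 5
# Refuse new jobs while host CPU usage exceeds 80%
LIMITING_CPU_THRESHOLD = 80
```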
CREATE_CREDENTIAL
CREATE_CREDENTIAL(
credential_name IN VARCHAR2,
user IN VARCHAR2,
password IN VARCHAR2,
domain IN VARCHAR2 DEFAULT NULL,
db_role IN VARCHAR2 DEFAULT NULL,
comments IN VARCHAR2 DEFAULT NULL);
This is used to create a stored username/password pair called a credential. Credentials reside in a
particular schema and can be created by any user with the CREATE JOB system privilege.
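For example, using the signature shown above with positional arguments (the credential name, user, and password are invented):

```sql
-- Store an OS account as a named credential for use by remote jobs.
BEGIN
  DBMS_SCHEDULER.CREATE_CREDENTIAL('HOST_CRED', 'batch_user', 'secret_password');
END;
/
```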
DROP_CREDENTIAL
DROP_CREDENTIAL(
credential_name IN VARCHAR2,
force IN BOOLEAN DEFAULT FALSE);
This is used to drop a stored username/password pair called a credential. To drop a public
credential, the SYS schema must be explicitly given. Only a user with the MANAGE SCHEDULER
system privilege is able to drop a public credential. For a regular credential only the owner of the
credential or a user with the CREATE ANY JOB system privilege is able to drop the credential.
SET_AGENT_REGISTRATION_PASS
SET_AGENT_REGISTRATION_PASS(
registration_password IN VARCHAR2,
expiration_date IN TIMESTAMP DEFAULT NULL,
max_uses IN NUMBER DEFAULT 1);
Oracle Database 11g: New Features for Administrators 7 - 49
This is used to set the agent registration password for a database. Remote agents must register
with the database before the database can submit jobs to the agent. To prevent abuse, this
registration password can be given an expiration date and a maximum number of uses, using the
EXPIRATION_DATE and MAX_USES parameters shown above.
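For example, to set a single-use registration password that expires after one day (the password value is invented):

```sql
BEGIN
  DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS(
    registration_password => 'agent_reg_pwd',
    expiration_date       => SYSTIMESTAMP + INTERVAL '1' DAY,
    max_uses              => 1);
END;
/
```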
Dictionary Views
• New Views:
– *_SCHEDULER_CREDENTIALS
– *_SCHEDULER_JOB_DESTINATIONS
– *_SCHEDULER_PREFERRED_CREDS
• The following existing views have been modified to contain additional columns:
– *_SCHEDULER_JOBS:
– *_SCHEDULER_JOB_LOG
– *_SCHEDULER_JOB_RUN_DETAILS
– *_SCHEDULER_REMOTE_JOB_STATE
• Regular job
– Highest Overhead
– Best recovery
The advantages and disadvantages of the three types of jobs are as follows:
• A regular job offers the maximum flexibility but entails a significant overhead in
create/drop performance. Regular jobs can be created with a single command, the user has fine-
grained control of the privileges on the job, and he can use a program or a stored procedure
owned by another user. The downside, as mentioned before, is slow create and drop time
because of the overhead necessitated by database objects. If the user is creating a relatively
small number of jobs that run relatively infrequently, then he should choose regular jobs.
• A persistent lightweight job offers a significant improvement in create and drop time because it
does not have the overhead of creating a database object. Because persistent lightweight jobs
write state to disk at run time, their run-time overhead is not likely to be much better than for
regular jobs, but there is a small improvement here. There are several drawbacks to persistent
lightweight jobs. First, the user cannot set privileges on these jobs; they inherit their
privileges from the parent job template. Because the use of a template is mandatory, it is not
possible to create a fully self-contained persistent lightweight job. If the user needs to create a
large number of jobs in a very short time (from 10-100 jobs a second) and has a library of
programs available to use, then he should use persistent lightweight jobs.
• Volatile lightweight jobs write as little to disk as possible. Creates and drops may not be
written to disk at all, and no state is written at run time. Thus the creation overhead is even
lower than that of persistent lightweight jobs, and there is minimal run-time overhead. On the
downside, volatile lightweight jobs share all the drawbacks of persistent lightweight jobs. In
addition, they cannot be recovered if the database crashes and cannot be load-balanced across
RAC instances. Volatile jobs should be used in those situations where the user is creating very
large numbers of frequently executing jobs and wants to bring overhead (both CPU as well as
redo) to an absolute minimum.
New PL/SQL APIs
Lightweight jobs are created, modified, and dropped using DBMS_SCHEDULER APIs. There are
minimal changes to the APIs: other than for creating lightweight jobs, no new APIs have been
added. The CREATE_LIGHTWEIGHT_JOB call has been added to create lightweight jobs.
Most calls that manipulate regular jobs work on lightweight jobs. The
SET_ATTRIBUTE, GET_ATTRIBUTE, SET_JOB_ARGUMENT_VALUE,
RESET_JOB_ARGUMENT_VALUE, STOP_JOB, DROP_JOB, and RUN_JOB procedures work with
lightweight jobs. The job-related DBMS_SCHEDULER calls that do not apply to lightweight jobs are
CREATE_JOB (which is used only to create regular jobs) and SET_JOB_ANYDATA_VALUE,
because lightweight jobs cannot take ANYDATA arguments.
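Note that in the production Oracle Database 11g release, lightweight jobs are created by passing job_style => 'LIGHTWEIGHT' to CREATE_JOB together with a program template (the CREATE_LIGHTWEIGHT_JOB call described here reflects pre-release material). A minimal sketch, where POLL_PROGRAM is a hypothetical existing enabled program:

```sql
-- Lightweight jobs must reference an enabled program as their template.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LW_POLL_JOB',
    program_name    => 'POLL_PROGRAM',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
    job_style       => 'LIGHTWEIGHT',
    enabled         => TRUE);
END;
/
```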
Oracle Database 10g introduced the advisor framework and various advisors to help DBAs
manage databases efficiently. These advisors provide feedback in the form of findings. Oracle
Database 11g now classifies these findings, so that you can query the advisor views to understand
how often a given type of finding recurs in the database. A FINDING_NAME column has
been added to the following advisor views:
• DBA_ADVISOR_FINDINGS
• USER_ADVISOR_FINDINGS
A new DBA_ADVISOR_FINDING_NAMES view displays all the finding names.
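For example, a simple aggregation over the new column shows which finding types recur most often:

```sql
-- Count how often each type of finding has been raised.
SELECT finding_name, COUNT(*) AS occurrences
FROM   dba_advisor_findings
GROUP  BY finding_name
ORDER  BY occurrences DESC;
```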
[Slide: the self-diagnostic engine runs ADDM against AWR data; in RAC, instance ADDM analyzes a single instance (Inst1 … Instn), while database ADDM analyzes all instances.]
• Specified in the DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER procedure:

Value of INSTANCE           Value of INSTANCES          ADDM Analysis Mode
'0' or 'UNUSED' (default)   'UNUSED' (default)          Database ADDM (all instances)
'0' or 'UNUSED' (default)   Comma-separated list of     Partial analysis ADDM. Only
                            instance numbers (1,2,5..)  instances specified in the
                                                        INSTANCES parameter are analyzed.
A positive integer          Any value                   Instance ADDM. The instance
(e.g. '1')                                              specified in the INSTANCE
                                                        parameter is analyzed.
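As an illustrative sketch (the 'ADDM' advisor name and the instance list are assumptions to verify against your release), a partial analysis could be requested as follows:

```sql
-- Restrict the next ADDM run to instances 1 and 3 (partial analysis ADDM).
BEGIN
  DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'INSTANCE', 'UNUSED');
  DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'INSTANCES', '1,3');
END;
/
```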