
What is SSFS:

As of kernel release 7.20, SAP has introduced a new method of securely
storing the database password and connecting to the database: "Secure
Storage in File System" (SSFS). The encrypted password for the SAP
database user is then no longer stored in the database, but in the file
system.

Prior to SSFS, the SAP system (AS ABAP) and the SAP tools that use the ABAP
database interface (R3trans, R3load, etc.) connected to the database via
SQL*Net (using the database alias name as configured in TNS) in the following
way: first, an OPS$ connection (with the database user OPS$<SID>ADM),
authenticated through the operating system user <sid>adm, was established
(via "connect /@TNS"). This connection was only allowed to access the table
OPS$<SID>ADM.SAPUSER, which contains the encrypted password for the actual
database connection of the SAP database user (by default, the schema user).
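
For illustration, the old OPS$ mechanism could be exercised manually as <sid>adm (a sketch only; <TNS_ALIAS> stands for the alias configured in tnsnames.ora, and the exact owner of the SAPUSER table can differ per installation):

<server:sidadm> sqlplus /@<TNS_ALIAS>
SQL> select * from ops$<sid>adm.sapuser;

The ABAP database interface read the encrypted schema password from this table and then reconnected as the schema user; with SSFS this lookup is no longer needed.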

Referenced SAP Notes:

1. 1639578 - documents the Basis instructions/steps below.
2. 1623922 - provides an overview of how to connect to the database using SSFS.
3. 1622837 - documents the steps/instructions to be performed by the DBA team.

High-Level Sequence:

Step 1: Basis changes the internal connect. This is covered by Note 1 above.
Step 2: DBA changes the external connect. This is covered by Note 3 above.
Step 3: DBA drops the OPS$ users and the old authentication.

NOTE: No application validation is required.

SW pre-requisites:

The system must be at least on kernel 7.20, patch level 98. Refer to SAP
Note 1611877.

Basis steps to be performed to enable SSFS authentication:

1. Preparing and securing the file system

Create the directories rsecssfs/data and rsecssfs/key under
/sapmnt/<SID>/global/security and make sure they have the appropriate permissions.

<server:sidadm> cd /sapmnt/<SID>/global/security
<server:sidadm> mkdir rsecssfs
<server:sidadm> chmod 775 rsecssfs
<server:sidadm> cd rsecssfs
<server:sidadm> mkdir data
<server:sidadm> mkdir key
<server:sidadm> chmod 777 data key
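
As a quick sanity check (assuming the standard /sapmnt/<SID>/global/security path used above), the resulting layout and permissions can be listed with:

<server:sidadm> ls -ld /sapmnt/<SID>/global/security/rsecssfs
<server:sidadm> ls -ld /sapmnt/<SID>/global/security/rsecssfs/data /sapmnt/<SID>/global/security/rsecssfs/key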

2. Maintaining the SSFS profile parameters:

Add the parameters below to the DEFAULT profile:
rsdb/ssfs_connect = 0
RSEC_SSFS_DATAPATH = /sapmnt/<SID>/global/security/rsecssfs/data
RSEC_SSFS_KEYPATH = /sapmnt/<SID>/global/security/rsecssfs/key
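
To confirm that the entries ended up in the DEFAULT profile (assuming the usual profile location /sapmnt/<SID>/profile/DEFAULT.PFL), a simple grep is enough:

<server:sidadm> grep -i ssfs /sapmnt/<SID>/profile/DEFAULT.PFL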

3. Maintaining the SSFS environment variables:

Update the environment variables below in the .sapenv* profiles in the home
directory of <sid>adm (e.g. /home/sapsys/<SID>adm):

setenv rsdb_ssfs_connect 0
setenv RSEC_SSFS_DATAPATH /sapmnt/<SID>/global/security/rsecssfs/data
setenv RSEC_SSFS_KEYPATH /sapmnt/<SID>/global/security/rsecssfs/key

NOTE: Make sure you also update these environment variables on the application
servers; otherwise the application servers will not come up after this
activity. See the Bourne-shell sketch below for hosts that do not use the C shell.
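
The setenv lines above are C-shell syntax. On hosts where <sid>adm uses a Bourne-type shell (sh/bash/ksh), the equivalent entries in the corresponding .sapenv_<hostname>.sh file would look roughly like this (a sketch; adjust <SID> and the paths to your environment):

export rsdb_ssfs_connect=0
export RSEC_SSFS_DATAPATH=/sapmnt/<SID>/global/security/rsecssfs/data
export RSEC_SSFS_KEYPATH=/sapmnt/<SID>/global/security/rsecssfs/key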

4. Change the Oracle schema user password to a password of your choice from
BR*Tools.
Find out the schema user from the environment: dbs_ora_schema=<schema>
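
For reference, the schema user can be read from the environment, and the password change can also be scripted with brconnect (a sketch; <schema> and <new_password> are placeholders, and the exact brconnect options should be verified against the BR*Tools documentation for your version):

<server:sidadm> env | grep dbs_ora_schema
<server:sidadm> brconnect -u / -c -f chpass -o <schema> -p <new_password>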

5. Bounce the system

Bounce the system so that the parameter changes take effect.

Note: Before the bounce, exit the <sid>adm session and log in again with
sudo su <sid>adm so that the new environment variables are loaded, then
perform the system bounce.
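
One common way to perform the bounce as <sid>adm (a sketch assuming the classic startsap/stopsap scripts are in use; follow your landscape's standard restart procedure if it differs):

<server:sidadm> stopsap r3
<server:sidadm> startsap r3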

6. Setting up the SSFS data storage and checking the access rights:

After the system bounce, execute the commands below as <sid>adm.

rsecssfx put DB_CONNECT/DEFAULT_DB_USER <schema name> -plain      (use the SAP schema name)
rsecssfx put DB_CONNECT/DEFAULT_DB_PASSWORD <pwd>                 (password of the schema user)
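
To verify what was stored without exposing the password, the stored records can be listed (a sketch; assumes the list function is available in your kernel's rsecssfx):

<server:sidadm> rsecssfx list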

7. Changing to the new connection method:

Change the profile parameter rsdb/ssfs_connect = 1 in the DEFAULT profile
(initially we set it to 0).
Update the environment variable below in the .sapenv* profiles in the home
directory of <sid>adm (e.g. /home/sapsys/<SID>adm) on all servers:

setenv rsdb_ssfs_connect 1    (initially it was set to 0)

8. Bounce the system

Bounce the system so that the parameter changes take effect.

Note: Before the bounce, exit the <sid>adm session and log in again with
sudo su <sid>adm, then perform the system bounce.

9. Checking the successful changeover / validation

Check for the entries below in the work process trace (SM50 work process log):

B  read_con_info_ssfs(): DBSL supports extended connect protocol
B  ==> connect info for default DB will be read from ssfs
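
The same check can be done at operating-system level by grepping the developer traces, and a simple database connect test can be run with R3trans (a sketch; the instance directory DVEBMGS<NN> is an example and depends on your installation):

<server:sidadm> grep -i ssfs /usr/sap/<SID>/DVEBMGS<NN>/work/dev_w*
<server:sidadm> R3trans -d

A return code of 0000 from R3trans -d indicates that the database connection works; details are written to trans.log in the current directory.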

10. DBA Team's Activity

The DBA team will perform the tasks below:

Steps | Team | Details | Phase
Drop the SAPUSER table | DBA | drop table ops$<sid>adm.sapuser; (Note 1622837) | execution - uptime (5 minutes)
Remove the parameter REMOTE_OS_AUTHENT | DBA | alter system reset remote_os_authent scope=spfile; (Note 1622837) | execution - uptime (5 minutes)
Download BRTOOLS | DBA | per Note 1764043, version 7.20 patch 28 is required | execution - uptime (10 minutes)
Create the BRTOOLS user | DBA | create user brt$adm | execution - uptime (5 minutes)
Create storage directories | DBA | create the directories rsecssfs/data and rsecssfs/key under /oracle/ECS/security | execution - uptime (5 minutes)
Drop OPS$ users | DBA | lock the OPS$ users | execution - uptime (5 minutes)
Modify BRTOOLS scripts and verify they work | DBA | modify the scripts in /oracle/local/bin that use BRTOOLS | execution - uptime (1 hour)

After getting the system back from the DBA team, perform a clean system bounce
and carry out system validation.
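
For illustration only, the uptime SQL steps from the table above might look roughly as follows in SQL*Plus (a sketch; the authoritative statements are in Notes 1622837 and 1764043, and using ALTER USER ... ACCOUNT LOCK for "lock the OPS$ users" is an assumption about how that step is implemented):

SQL> drop table ops$<sid>adm.sapuser;
SQL> alter system reset remote_os_authent scope=spfile;
SQL> alter user ops$<sid>adm account lock;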

Oracle Background Processes


Background processes are created from the oracle binary when an instance is started. As
the name suggests, background processes run in the background; they perform certain
specific activities or deal with abnormal scenarios that arise during the runtime of
an instance.

From an SAP perspective, the following are the six most important background processes of
the Oracle database.

Database Writer (DBWR) :

The database writer writes dirty blocks from the database buffer cache to the datafiles.

Dirty blocks need to be flushed out to disk to make room for new blocks in the cache. When
a buffer in the database buffer cache is modified, it is marked as a dirty buffer. A cold
buffer is a buffer that has not been used recently according to the least recently used
(LRU) algorithm. The database writer writes cold, dirty buffers to disk so that new blocks
can be read into the cache.

The initialization parameter DB_WRITER_PROCESSES specifies the number of database
writer processes. The maximum number of database writer processes is 20.
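
For example, the configured value can be checked from SQL*Plus:

SQL> show parameter db_writer_processes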

The database writer writes the dirty buffers to disk under the following conditions:

1. When a checkpoint occurs
2. Every 3 seconds
3. When a server process cannot find a clean, reusable buffer after scanning a
threshold number of buffers

Log Writer (LGWR) :

The log writer process writes data from the redolog buffer to the redolog files on disk.

The redolog buffer is a circular buffer. When LGWR writes redo entries from the redolog
buffer to the redolog file, server processes can overwrite the entries that have already been
copied with new entries in the redolog buffer. LGWR writes at a fast enough pace that space
is always available in the buffer for new entries.
The log writer is activated under the following conditions:

1. When a transaction is committed
2. Every 3 seconds
3. When the redolog buffer is one third full
4. When the database writer writes modified buffers to disk, if necessary

Note: Before the database writer can write dirty blocks to disk, it must make sure that all
redo entries for them have been written from the redolog buffer to disk. This is known as
write-ahead logging. If the database writer finds that some redo records have not been
written, it signals LGWR to write them to disk and waits for LGWR to finish writing the
redolog buffer before it writes out the data buffers.
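
LGWR activity can be observed indirectly via the online redolog groups it writes to, for example from SQL*Plus:

SQL> select group#, members, status, archived from v$log;
SQL> show parameter log_buffer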

Checkpoint (CKPT) :

A checkpoint signals the synchronization of all database files with the checkpoint
information. It ensures data consistency and faster database recovery in case of a crash.

The checkpoint process regularly initiates a checkpoint. Whenever a checkpoint occurs, the
following things are carried out:

1. The file headers of the datafiles are updated with information about the last
checkpoint performed
2. The control files are updated with information about the last checkpoint
3. LGWR is initiated to flush the redolog buffer entries to the redolog files
4. The checkpoint record is written to the redolog file
5. DBWR is initiated to write all dirty blocks to disk, thus synchronizing the database
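
The SCN of the most recent checkpoint and the checkpoint-related parameters can be inspected from SQL*Plus, for example:

SQL> select checkpoint_change# from v$database;
SQL> show parameter log_checkpoint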

Archiver Process (ARCH) :

The archiver process copies online redolog files to the designated archive log location after
a log switch occurs. It is an optional process: the archiver is present only when the
database is running in archivelog mode and automatic archiving is enabled.

You can specify multiple archiver processes with the initialization parameter
LOG_ARCHIVE_MAX_PROCESSES. The ALTER SYSTEM command can be used to increase or
decrease the number of archiver processes.

However, it is not recommended for us to change this value, as the log writer starts a new
archiver process automatically when the current archiver processes are insufficient to handle
the workload.
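
The archiving status and the configured maximum number of archiver processes can be checked from SQL*Plus, for example:

SQL> archive log list
SQL> show parameter log_archive_max_processes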

Process Monitor (PMON) :


The process monitor performs process recovery when a user process fails. PMON is
responsible for cleaning up the database buffer cache and freeing the resources that the user
process was using, such as releasing locks and removing the process ID from the list of
active processes.

PMON periodically checks the status of dispatcher and server processes and restarts any
that have stopped. Note that it will not restart processes that Oracle has stopped
intentionally.

PMON also registers information about the instance and dispatcher processes with the
network listener.

PMON wakes up every 3 seconds to perform housekeeping activities and must always be
running for an instance.

System Monitor (SMON) :

The system monitor performs instance recovery, if necessary, at instance startup. SMON is
also responsible for cleaning up temporary segments that are no longer in use and for
coalescing contiguous free extents within dictionary-managed tablespaces.

SMON can also be called by other processes when needed. SMON wakes up every 5 seconds
to perform housekeeping activities and must always be running for an instance.

Query to view the background processes of Oracle:

Go to the SQL prompt of the Oracle database and run the following command to view the
background processes.

SQL> select * from v$session where type = 'BACKGROUND';
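
Alternatively, the background processes that are currently started can be listed via v$bgprocess, for example:

SQL> select name, description from v$bgprocess where paddr <> '00';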
