
Unit 5

Architecture and Scenarios

1. The port number is used to identify the database process in most
monitoring views of the persistence layer.
2. Core Processes on a Single-Node Instance
Daemon -- 3xx00
o Starts all other processes and keeps them running.
Name server -- 3xx01
o Monitoring service
o Knows the data distribution
Preprocessor -- 3xx02
o Feeds unstructured data into HANA
Index server -- 3xx03
o The main database process
o Data loads, queries, calculations
Statistics server -- 3xx05
o Monitoring service
o Proactive alerting
SAP Web Dispatcher -- 3xx06
o Entry point for HTTP(S) requests
XS Engine -- 3xx07
o Web service component
Compile server -- 3xx10
o Performs the compilation of stored procedures
and programs
SAP Start Service -- no port in this scheme
o Responsible for starting and stopping the other
services
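The 3xx00-style numbers above follow the usual HANA scheme 3&lt;instance number&gt;&lt;offset&gt;. A minimal sketch of that computation (the service-to-offset mapping is taken from the list above; the function name is illustrative, not an SAP API):

```python
# Port offset of each core service within the 3<xx><offset> scheme,
# where <xx> is the two-digit instance number (see the list above).
SERVICE_PORT_OFFSETS = {
    "daemon": 0,
    "nameserver": 1,
    "preprocessor": 2,
    "indexserver": 3,
    "statisticsserver": 5,
    "webdispatcher": 6,
    "xsengine": 7,
    "compileserver": 10,
}

def service_port(instance_number: int, service: str) -> int:
    """Return the internal port of a service, e.g. indexserver of instance 00 -> 30003."""
    return 30000 + instance_number * 100 + SERVICE_PORT_OFFSETS[service]
```

For instance number 42, the index server would listen on port 34203.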
3. Each data-persisting process has a volume ID that is unique
within the database system.
4. Each volume ID corresponds to one data file.
M_VOLUMES
/hana/data
5. Architecture of The SAP HANA INDEX Server

Simplified Architecture
HANA Core Process
External Interfaces
o Communicate with HANA
o Queries, data loads, administration, etc.
Processing Engines
o Operate on data
o Execute queries
Relational Engines
o Store data (in memory)
Storage Engine
o Handles data pages
o Handles transfer between RAM and disk
Disk Storage
o Non-volatile data storage
6. Any modification of the data content of HANA DB will be
written to a file-based transaction log at the latest when the
write transaction is committed.
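The commit-time guarantee in point 6 can be sketched as a toy write-ahead log: redo entries collect in an in-memory buffer and must reach the (simulated) log file no later than commit. All class and attribute names here are illustrative, not HANA internals:

```python
# Toy write-ahead log: entries are buffered in memory and are guaranteed
# to be persisted at the latest when the transaction commits.
class ToyTransactionLog:
    def __init__(self):
        self.buffer = []      # in-memory log buffer
        self.persisted = []   # simulates the file-based transaction log

    def write(self, redo_entry: str) -> None:
        """Record a data change; it may stay in the buffer for now."""
        self.buffer.append(redo_entry)

    def commit(self) -> None:
        """Flush all buffered entries to 'disk' before the commit returns."""
        self.persisted.extend(self.buffer)
        self.buffer.clear()
```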
7. DATA Volume:
SQL data and undo log information
Additional HANA information, such as modelling data
Kept in-memory to ensure maximum performance
Write process is asynchronous

8. Data files are optimized for the task of rebuilding the
in-memory data image following a system start. Writing to the
data files is handled by an asynchronous background process
named SAVEPOINT.
9. The content of a data file is organized in pages. Pages are also
used to exchange data between the persistence layer and the
in-memory store.
10. The converter is the part of the persistence layer that maps
the logical pages of the database stores to the physical
pages of the volume in the so-called CONVERTER TABLE.
Page sizes range from 4 KB up to 16 MB.
M_DATA_VOLUME_PAGE_STATISTICS and
M_CONVERTER_STATISTICS
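The logical-to-physical mapping described in point 10 can be sketched with a toy converter table; the class is a simplified illustration (a real converter also tracks free pages and page sizes):

```python
# Toy converter table: maps logical page numbers to physical page
# positions in the data volume, as described in point 10.
class ToyConverter:
    def __init__(self):
        self.table = {}          # logical page number -> physical page number
        self.next_physical = 0   # next free physical slot in the volume

    def assign(self, logical_page: int) -> int:
        """Look up, or allocate, the physical page for a logical page."""
        if logical_page not in self.table:
            self.table[logical_page] = self.next_physical
            self.next_physical += 1
        return self.table[logical_page]
```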
11. Any write transaction in the database system will trigger
the writing of a redo log entry in the database transaction
log.
Information about data changes (redo log)
Directly saved to persistent storage when transaction is
committed
Cyclical overwrite (only after backup)
12. Each data-persisting process has a log volume containing
the log files, also known as log segments.
/hana/log
logsegment_<partition>_<segment_no>.dat
13. Log segments are preallocated and preformatted files with
a fixed size that is determined by the parameter
log_segment_size_mb.
System Component    Service Name        Default Log Segment Size
Name server         nameserver          64 MB
Index server        indexserver         1024 MB
Statistics server   statisticsserver    64 MB
SAP HANA XS         xsengine            8 MB
Script server       scriptserver        8 MB
Default Sizes for Log Segments
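Preallocation as in point 13 can be sketched by creating a fixed-size file up front, so later log writes never have to grow the file. The function name and the scaled-down 1 MB size are illustrative only:

```python
import os
import tempfile

def preallocate_log_segment(path: str, size_mb: int) -> None:
    """Create a fixed-size, zero-filled file, mimicking log segment preallocation."""
    with open(path, "wb") as f:
        f.truncate(size_mb * 1024 * 1024)  # reserve the full size up front

# Example: a scaled-down 1 MB segment in a temporary directory.
tmpdir = tempfile.mkdtemp()
segment = os.path.join(tmpdir, "logsegment_000_00000000.dat")
preallocate_log_segment(segment, 1)
```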

State          Definition
Formatting     Segment is being prepared but not yet ready
               for use.
Preallocated   Segment is ready for use but not yet in use.
Writing        Segment is in use (being written to).
Closed         Segment is closed but has not yet been
               backed up.
BackedUp       Segment is closed and backed up but still
               needed for a system restart.
Truncated      Segment is closed and backed up, and no
               longer needed for a restart; it will be
               removed rather than reused.
Free           Segment is closed, backed up, and no longer
               needed for a system restart. It can be reused
               or removed.
Possible States of Log Segments
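The life cycle in the state table above can be sketched as a small state machine. This is a simplified sketch: it models only the common Closed -> BackedUp -> Free path and omits the Truncated branch:

```python
# Toy life cycle of a log segment, following the state table above
# (simplified: the Truncated branch is omitted).
VALID_TRANSITIONS = {
    "Formatting": {"Preallocated"},
    "Preallocated": {"Writing"},
    "Writing": {"Closed"},
    "Closed": {"BackedUp"},
    "BackedUp": {"Free"},
    "Free": {"Preallocated"},   # a free segment can be reused
}

def advance(state: str, new_state: str) -> str:
    """Move a segment to new_state, rejecting transitions not in the table."""
    if new_state not in VALID_TRANSITIONS[state]:
        raise ValueError(f"invalid transition {state} -> {new_state}")
    return new_state
```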
14. Information on the log segments is available from the
volume monitor in SAP HANA Studio.
log_mode = overwrite | normal
enable_auto_log_backup = yes | no
15. The savepoint operation transfers all changed data
from main memory to the data files.
Changed data and undo log information are written from
memory to persistent storage.
Automatic (triggered periodically)
16. Manually reclaim free log segments: ALTER SYSTEM RECLAIM LOG
17. Manually trigger a savepoint: ALTER SYSTEM SAVEPOINT
18. System Views Related to the Transaction Logs
View Name                  Description
M_LOG_SEGMENTS             Displays all log segments with
                           state, size, log position, and so on.
M_LOG_PARTITIONS           Various performance statistics for
                           each log partition.
M_LOG_BUFFERS              Information about the in-memory
                           log buffers, such as sizes and wait
                           counts.
M_VOLUMES                  Displays all data and log volumes
                           for all database services that
                           persist data.
M_LOG_IO_TOTAL_STATISTICS  File access statistics for all data
                           and log volumes.
M_DISKS                    Disk configuration and usage
                           statistics for all data, log, trace,
                           and backup file systems.
19. Relevant Database Parameters for Transaction Logs
Parameter               Description
log_mode                Governs how the database handles
                        transaction logs.
enable_auto_log_backup  Governs whether or not log backups
                        are created in log mode normal.
enable_auto_timeout_s   Time after which the database will
                        close the currently open log segment
                        and back it up.
log_segment_size_mb     Fixed size of the log segments of a
                        given service.
log_buffer_count        Number of log buffers per service.
log_buffer_size_kb      Size of each log buffer.
20. Relevant System Views for Data Volumes and Savepoints
View Name                  Description
M_DATA_VOLUMES             File names and sizes of data volumes.
M_VOLUME_FILES             Total and used size of data and log
                           volumes.
M_DATA_VOLUME_SUPERBLOCK_STATISTICS
                           Number of allocated and used
                           superblocks per data file.
M_DATA_VOLUME_PAGE_STATISTICS
                           Usage statistics on pages and
                           superblocks.
M_SAVEPOINTS               Information on savepoint operations
                           since system start, including
                           duration, number of pages written, or
                           resulting size of the data file.
M_SAVEPOINT_STATISTICS     Aggregated information from view
                           M_SAVEPOINTS.
M_EVENTS                   Details of current disk-full events.
The parameter savepoint_interval_s governs the time between two
regular savepoint operations.
21.

Start Procedure
1. Open the data volume files.
2. Load the converter table from the last completed
savepoint.
3. Load the list of open transactions from the last
completed savepoint.
4. Load row store tables.
5. Replay redo log entries.
6. Roll back uncommitted transactions.
7. Perform a savepoint.
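The essence of the start procedure (steps 2, 5, and 6) can be sketched as follows. This is a heavily simplified illustration: state is a flat key/value dictionary, and rollback is modeled by simply not applying redo entries of uncommitted transactions:

```python
# Toy restart: rebuild state from the last savepoint image plus the redo
# log, discarding changes from transactions that never committed.
def restart(savepoint_image: dict, redo_log: list, committed_txns: set) -> dict:
    state = dict(savepoint_image)        # steps 1-4: load the persisted image
    for txn, key, value in redo_log:     # step 5: replay redo log entries
        if txn in committed_txns:
            state[key] = value
        # step 6: entries of uncommitted transactions are skipped,
        # which stands in for rolling them back
    return state                         # step 7 would now persist a savepoint
```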
